[13:20:34] <umquant> What is the "mongo" way of doing a join? For example I have a document with 50 ints. Each int is representative of a memory address. In another document I define that memory address N = sensor Y. How would I do a query that would link those?
[13:22:31] <Nodex> normally you would embed those memory addresses and save yourself the second query
[13:22:40] <bearclaw> an aggregate using an unwind on the ints, to have one document per int, and then a group?
[13:25:49] <umquant> Nodex, can you expand on that?
[13:26:05] <umquant> bearclaw, I will have to do some research. I am not familiar enough with mongo to follow your idea.
[13:27:27] <umquant> Here are my schemas: https://gist.github.com/anonymous/68df1d5a07f9e26a48bb
[13:30:45] <ghibe> hi, i am using mongodb 2.4.9 from the official ubuntu repos so i don't know if it's already fixed, but in the config file /etc/mongodb.conf there's this line: # Disable the HTTP interface (Defaults to localhost:27018), while it actually seems to be 28017
[13:34:06] <rasputnik> ghibe: yes that's a typo, it's usually the mongod port + 1000
[13:37:06] <ghibe> i was quite sure about it; i am asking because i don't know whether to post a bug report to mongodb or to whoever maintains the ubuntu repo.
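For reference, the stanza ghibe is describing looks roughly like this in the stock Ubuntu /etc/mongodb.conf; the port number in the comment is the typo, since the HTTP interface listens on the mongod port + 1000 (28017 for the default 27017):

    # Disable the HTTP interface (Defaults to localhost:27018)
    # (the actual default is localhost:28017 -- mongod port + 1000)
    nohttpinterface = true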
[13:45:29] <rspijker> umquant: you can't do an aggregate over multiple collections, so that won't help you… You should go with Nodex's suggestion and change your schema
[13:46:58] <umquant> rspijker, even if memory addresses are assumed by array index? Like values[0] = memory address 0
[13:48:14] <rspijker> umquant: not sure if I follow… How would that still be a problem if you embed?
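A minimal sketch of the embedding Nodex suggests, in mongo shell syntax; the collection and field names here are hypothetical, not taken from the gist:

    // Instead of a bare array of 50 ints keyed by position, store the
    // sensor identity next to each value, so no second lookup ("join")
    // is needed at query time.
    db.readings.insert({
        device: "plc-1",
        values: [
            { addr: 0, sensor: "boiler_temp",    value: 451 },
            { addr: 1, sensor: "inlet_pressure", value: 17 }
            // ... one entry per memory address
        ]
    })

    // One query now answers "what did sensor Y read?":
    db.readings.find({ "values.sensor": "boiler_temp" })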
[13:58:31] <Derick> dragoonis: it's better to just ask your question - more chance of people answering it
[13:59:19] <dragoonis> Renaming collection: results_staging.company_category_score_months to: results_staging.company_category_score_months_14_07_15_14_34_31
[13:59:19] <dragoonis> string(137) "Exception thrown: Failed to connect to: server.com:27017: Read timed out after reading 0 bytes, waited for 0.000000 seconds"
[14:00:08] <dragoonis> pecl extension version is: 1.5.4 ... getting this exception afterwards
[14:11:09] <dragoonis> Derick, this is as useful as I could get it using dtruss - https://gist.githubusercontent.com/dragoonis/bfe86bfd448ae9c60696/raw/54e1dfde724b91938a48b7c76cd8e95c2d141249/gistfile1.txt
[14:11:32] <dragoonis> Derick, It successfully renames two collections beforehand, and on the third rename it bails
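For context, the rename dragoonis is running corresponds to the renameCollection admin command, roughly like this (both collection names are taken from the log above):

    // renameCollection must be issued against the admin database and
    // takes fully-qualified "db.collection" names on both sides.
    db.adminCommand({
        renameCollection: "results_staging.company_category_score_months",
        to: "results_staging.company_category_score_months_14_07_15_14_34_31"
    })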
[14:25:21] <stefuNz> hi. i have some slow operations, but i don't want them in my logfiles, because this just increases the IO. how can i turn that off?
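One way to reduce that, assuming the noise comes from the default 100 ms slow-operation threshold, is to raise slowms so far fewer operations qualify for logging; a sketch in the mongo shell:

    // The first argument (0) leaves the profiler off; the second raises
    // the slow-operation threshold to 10 seconds, so only ops slower
    // than that get logged.
    db.setProfilingLevel(0, 10000)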
[15:11:03] <mango_> anyone know why either M102 or M202 haven't started yet?
[15:24:01] <juancarlosfarah> mango: Both courses start today at 17:00 UTC and it is currently 15:21 UTC.
[15:24:44] <mango_> juancarlosfarah: yes, I'm in GMT
[15:25:15] <mango_> took a day off work today to get started.
[15:25:40] <mango_> juancarlosfarah: do you know when new material gets published?
[15:27:02] <juancarlosfarah> mango: They should be up in an hour and a half. Usually new material gets published a week after the previous week's material is published. So in this case it would be every Tuesday at 17:00 UTC (18:00 UK time)
[15:28:16] <juancarlosfarah> mango_: Basically when the current week's homework is due, the following week's material is published.
[15:32:19] <mango_> juancarlosfarah: ok thanks, I just need to figure out when it's best to take a day off in the UK
[15:32:46] <mango_> today was probably not an ideal day, I'll make it on Wednesdays from next week.
[15:46:36] <juancarlosfarah> mango_: Yes, probably Wednesdays would be best.
[16:32:10] <movedx> I'm trying to enable sharding on a collection. I've defined a hashed index, but when I attempt to activate sharding, MongoDB complains that there is a missing value in the key for a certain document, but the key in said document isn't empty.
[16:32:49] <movedx> Is there a general reason for this, or is it very specific?
[16:34:49] <movedx> Hmm, the documents it doesn't like do have two blank fields, but they're not part of the hashed index?
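A quick way to see which documents the shardCollection command considers to be missing the key, sketched with a hypothetical key field named sensorId; note a field set to null still exists, "missing" means the field is absent entirely:

    // Documents where the shard key field is absent:
    db.mycoll.find({ sensorId: { $exists: false } })

    // The command being attempted, for reference:
    sh.shardCollection("mydb.mycoll", { sensorId: "hashed" })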
[17:27:34] <oblio_> so i'm trying to write a shell script which grabs some output from a query and then uses it - but i seem to be having an issue. i see that i'm supposed to wrap the command in printjson() but that doesn't give me results, it prints out things pertaining to the execution of the query.
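A common pattern for this, with hypothetical database and collection names: pass --quiet so the connection banner is suppressed, and --eval so the shell runs just the one expression and exits:

    # --quiet drops the shell banner; --eval runs the JS and exits,
    # so $RESULT contains only what printjson emits.
    RESULT=$(mongo --quiet mydb --eval 'printjson(db.mycoll.findOne())')
    echo "$RESULT"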
[18:25:43] <daidoji> it's a python ORM type thing written on top of pymongo for integration with Django ORM features I think
[18:26:26] <daidoji> anyways, they suggest some weird voodoo that has been incorporated into this project, but I'm having trouble overriding it when trying to write tests
[18:34:31] <daidoji> oops, others have been here before me, http://stackoverflow.com/questions/4774800/mongoengine-connect-in-settings-py-testing-problem
[18:50:24] <Yahkob> Anyone around that uses mongoose? I guess this could be answered by someone who hasn't used mongoose really, but I'm having trouble adding to a subdocument
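For what it's worth, the usual mongoose idiom is to push onto the subdocument array and save the parent; a sketch with hypothetical model, field, and helper names (Parent, children, handleError):

    // Assumes a schema like: new Schema({ children: [childSchema] })
    Parent.findById(id, function (err, parent) {
        if (err) return handleError(err);             // handleError is hypothetical
        parent.children.push({ name: "new child" });  // add the subdocument
        parent.save(function (err) {
            if (err) return handleError(err);
        });
    });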
[19:08:21] <arinel> anyone with experience with Go and mgo? I can't make session.UpsertUser() work. I can only login with mgo if I create a user via the mongo shell.
[19:25:20] <arinel> found the solution: update to the latest mgo code from their bzr repository
[19:31:49] <user55> hello! i would like to ask a question:
[19:32:09] <user55> when my PHP script calls MongoCollection::insert or MongoCollection::save with an undefined collection as a parameter
[19:32:17] <user55> mongo creates it automatically
[19:32:26] <user55> is it possible to disable this?
[19:33:36] <arinel> user55: not that I know of. You should be able to get an array of all available collections, then check if your collection is in this array prior to insert-ing or save()-ing
[19:34:22] <arinel> user55: or, you know, check if the collection is "undefined" and don't call MongoCollection::insert()
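The check arinel describes, sketched in the mongo shell with a hypothetical collection name (the legacy PHP driver exposes the same list via MongoDB::getCollectionNames()):

    // getCollectionNames() returns the names of existing collections;
    // skip the write if the target isn't already among them.
    if (db.getCollectionNames().indexOf("mycoll") !== -1) {
        db.mycoll.insert({ x: 1 })
    } else {
        print("collection 'mycoll' does not exist; refusing to create it")
    }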
[20:50:15] <blizzow> I'm trying to bring up a hidden member of a replica set with a very large collection in it. The hidden member is 2.6.3 and the rest of the cluster is 2.4.x . The new hidden member got about 800GB into syncing what looks like about 1.7TB of data. Then it died saying the following:
[20:50:32] <blizzow> WARNING: the collection 'mycollection.mycollection_20140712' lacks a unique index on _id. This index is needed for replication to function properly
[20:50:32] <blizzow> 2014-07-15T20:41:07.751+0000 [initandlisten] To fix this, you need to create a unique index on _id. See http://dochub.mongodb.org/core/build-replica-set-indexes
[20:51:30] <blizzow> I look at the indexes from that collection and it does NOT have a unique index on _id. But neither do any of the other collections. I'm worried that enabling a unique index on that collection will cause my cluster to grind to a halt.
[20:51:43] <blizzow> Anyone know how/what I can do to get the new 2.6.3 node fired up?
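A sketch of what the linked docs page prescribes; given the collection is ~1.7TB, the index build itself is exactly the load blizzow is worried about, so it is worth rehearsing on a copy or secondary first (collection name taken from the log):

    // Confirm what the collection actually has:
    db.mycollection_20140712.getIndexes()

    // Then build the missing unique index on _id (2.4/2.6-era helper):
    db.mycollection_20140712.ensureIndex({ _id: 1 }, { unique: true })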