[00:01:33] <aodhol> thanks testing22 i'm not sure i asked the question very clearly
[00:02:51] <aodhol> essentially i have two objects, let's say, and they are both the same except they have different values for a particular array node, e.g. "obj1": {"someNode":"someValue", "someList":[1]}
[00:03:14] <aodhol> Then I wish to insert: "obj2": {"someNode":"someValue", "someList":[2]}
[00:03:53] <aodhol> but what i'd like is for obj1 and obj2 to be merged such that it would end up being: {"someNode":"someValue", "someList":[1,2]}
[00:04:30] <aodhol> there's a compound index on "someNode" and on "someList"
[00:05:33] <joannac> So you want an update instead of an insert?
[00:05:39] <testing22> do you need to insert obj2 before you combine obj1 and obj2? or could you essentially add '2' to someList of obj1 in that step?
[00:06:25] <aodhol> testing22: i suppose i could run through my data before persisting it and merge manually, but i thought that inserting / updating in Mongo would do the trick
[00:07:07] <joannac> aodhol: try update with the upsert option
[00:16:16] <SamHad> testing22 it doesn't show it running
[00:16:42] <joannac> SamHad: check that log file and see why it stopped?
[00:18:02] <joannac> aodhol: erm, so every time you add a date, you want that date to be added to the object in the date document?
[00:18:45] <aodhol> joannac: every time I add an object that contains the same details except for the date, i'd like that date to be added to the array of the existing object
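A minimal sketch of what joannac suggests, assuming the documents live in a collection called objects (name hypothetical): match on the fields that stay constant, $addToSet the new value into the array, and let upsert create the document when nothing matches.

    db.objects.update(
        { someNode: "someValue" },          // match on everything except the array
        { $addToSet: { someList: 2 } },     // append the new value only if absent
        { upsert: true }                    // create the document if no match exists
    )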
[04:09:24] <rmsy> Is it possible to do a sort by an element match?
[04:09:38] <rmsy> Well, that's not exactly what I want
[04:10:08] <rmsy> I have a field that is an array of hashes, and I want to sort by the number of hashes in the array that have a specified value for a specified key
[05:15:05] <testing22> how can i design this aggregation/reporting schema to be multitimezone friendly? http://docs.mongodb.org/ecosystem/use-cases/pre-aggregated-reports/
[09:40:02] <Zelest> is it possible to have a sparse unique index with a date field as the condition?
[09:40:13] <Zelest> e.g, only index things younger than 2 hours
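A plain sparse index can't express that; the closest built-in tool is a partial index (added in MongoDB 3.2, well after this conversation), and even there the filter is evaluated once at creation time, so it cannot track a moving "now". A sketch with hypothetical names:

    db.things.createIndex(
        { token: 1 },
        { unique: true,
          // the cutoff below is fixed when the index is built; it does not slide forward
          partialFilterExpression: { createdAt: { $gt: new Date("2013-11-18") } } }
    )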
[10:49:36] <Aartsie> Derick: I was wondering, will the PHP MongoDB driver save an array with key => value pairs as an object?
[10:49:47] <oxsav_> hello.. I'm trying to use the simple REST API of mongodb. I'm trying to do a simple GET to the server.. it succeeds, but returns this: Resource interpreted as Script but transferred with MIME type text/plain
[11:02:38] <retran> it's not a mongo error of any sort
[12:25:39] <_boot> Hi, is there any way to 'create' a Date object in an aggregation pipeline? I need to snap the dates to year and month but I'd like to be able to keep them as actual dates
[12:27:42] <cheeser> like this? http://www.kamsky.org/1/post/2013/11/yet-another-way-to-group-by-dates-in-mongodb.html
[12:28:17] <cheeser> she has a series of posts on aggregations and time
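For reference, newer servers can build real Date values in the pipeline with $dateFromParts (MongoDB 3.6+); at the time of this chat the date-arithmetic trick from the linked post was the usual approach. A sketch assuming a timestamp field ts (hypothetical):

    db.events.aggregate([
        { $project: {
            // rebuild the date from only year and month, snapping to the 1st at midnight
            month: { $dateFromParts: { year: { $year: "$ts" }, month: { $month: "$ts" } } }
        } },
        { $group: { _id: "$month", count: { $sum: 1 } } }
    ])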
[12:41:03] <dccc> is it possible to create a unique compound index with a boolean field? i am trying to restrict it to only one copy that has a given field1 and is_active: true.
[12:46:06] <cheeser> shouldn't be too hard to test.
[12:49:04] <dccc> i have index( { field1: 1, is_active: 1 }, { unique: true } ) but i don't get errors when saving another copy with the same field1 reference and is_active: true
[13:13:14] <dccc> ugh, missed brackets on the first save in coffeescript.
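For reference, a minimal reproduction with hypothetical names shows the index does work once both documents actually get saved; the second save should fail with an E11000 duplicate key error.

    db.copies.ensureIndex({ field1: 1, is_active: 1 }, { unique: true })
    db.copies.save({ field1: "ref-1", is_active: true })   // ok
    db.copies.save({ field1: "ref-1", is_active: true })   // E11000 duplicate key error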
[13:13:32] <hyperboreean> I'm getting a "cursor didn't exist on server, possible restart or timeout?" error when I do a collection.find. I tried all kinds of things: lowering the batch_size, no batch_size, increasing it
[13:28:52] <hyperboreean> cheeser: have no idea ... I'm going to add some timing around that, though really ? it's just a fd.write(id), it shouldn't take that long, right ?
[13:43:18] <redondo> I'm trying to install mongodb in debian wheezy as in http://docs.mongodb.org/manual/tutorial/install-mongodb-on-debian/. And the process hangs on getting the public key (step 1).
[13:43:22] <cheeser> hyperboreean: not sure what to tell you at this point.
[13:44:09] <hyperboreean> cheeser: is there any other way of dumping data from mongo in batches ?
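One robust pattern for batch export that never keeps a cursor open long enough to time out: page through the collection on _id ranges, opening a fresh short-lived cursor per batch. A shell sketch, assuming _id is an ObjectId and a hypothetical collection items:

    var lastId = ObjectId("000000000000000000000000");
    while (true) {
        var batch = db.items.find({ _id: { $gt: lastId } })
                            .sort({ _id: 1 })
                            .limit(1000)
                            .toArray();                        // short-lived cursor per batch
        if (batch.length === 0) break;
        batch.forEach(function (doc) { print(doc._id); });     // stand-in for the real work
        lastId = batch[batch.length - 1]._id;
    }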
[15:34:28] <Thijsc> Hi all. I'm trying to help a client recover a collection that was accidentally emptied. It then turned out that there were actually no backups being made. Very stupid, I know.
[15:34:51] <Thijsc> Is there any known way to get deleted records back from a collection, based on the data files and the journal?
[15:35:24] <Nodex> can't you just dump them into another database?
[15:37:20] <Thijsc> Just asking on the off chance there's some weird hack that could help me
[15:37:48] <cheeser> i'm not discounting that but i wouldn't count on it.
[15:49:55] <DoubleAW> aloha from the mongodb conference in chicago
[15:50:43] <cheeser> aloha from the about to start Geneva MUG :)
[15:51:13] <Tomasso> I try to create a text index, and the largest key i access is 51 characters long.. but I get "ns name too long, max size is 128"
[15:51:54] <cheeser> i think it's complaining about your index name.
[15:51:59] <cheeser> what's the command you're giving?
[15:52:54] <ichilton> By default, will changing a single mongodb server to a master & slave slow down the master? Will it wait for slaves to finish writing before returning, or is that off by default?
[15:53:30] <cheeser> you're using master/slave and not replica sets?
[15:54:12] <Tomasso> this is my command: db.mycollection.com.ar.stores_v1.ensureIndex({'store.info.categories.category':'text','store.info.categories.products.product.name':'text','store.info.categories.products.product.description':'text', 'store.info.companyname':'text'})
[15:54:12] <cheeser> why? m/s has been ... discouraged in favor of replica sets.
[15:55:29] <ichilton> Am going to move to replica sets eventually, but just want to add a slave as a backup at the moment.
[15:55:37] <cheeser> after all your index stuff: , { name : 'myindex' }
[15:55:47] <ichilton> cheeser: just wanted to add a very quick m/s for now
[15:56:14] <cheeser> ichilton: i dunno about m/s but with replSets, it would not necessarily wait for writes to the secondaries depending on your write concern
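A sketch of what that looks like with replica sets on a reasonably recent shell, using a hypothetical orders collection: w:1 acknowledges once the primary has the write, while w:2 also waits for one secondary.

    db.orders.insert({ sku: "abc" }, { writeConcern: { w: 1 } })
    db.orders.insert({ sku: "abc" }, { writeConcern: { w: 2, wtimeout: 5000 } })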
[15:56:20] <Tomasso> ooh.. you mean a name to the index ?
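Putting cheeser's suggestion together with the original command (the name stores_text is just an example): the auto-generated index name concatenates all the field paths, which on top of the long collection name blows past the 128-character namespace limit, so a short explicit name fixes it.

    db.mycollection.com.ar.stores_v1.ensureIndex(
        { 'store.info.categories.category': 'text',
          'store.info.categories.products.product.name': 'text',
          'store.info.categories.products.product.description': 'text',
          'store.info.companyname': 'text' },
        { name: 'stores_text' }   // short explicit name keeps the namespace under the limit
    )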
[16:14:00] <oxsav_> It's giving me a strange error.. it seems that the request arrives at the server and the browser receives the response, but it's not converted to the right type
[17:45:43] <Tomasso> is there some way to query a text index using find, and not the runCommand ? I need to run the query from ruby code, and runCommand is not available as a method in the ruby driver
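For reference: on MongoDB 2.4 the text command can still be issued from Ruby via the driver's db.command method, and from 2.6 onward text search moved into the query language itself, so a plain find works. A shell sketch with a hypothetical search term:

    db.stores_v1.find({ $text: { $search: "coffee" } })   // MongoDB 2.6+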
[18:11:16] <chaotic_good> does mongorestore overwrite existing data in a mongo database?
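By default it does not: mongorestore inserts the dumped documents, and anything that collides on _id is skipped rather than replaced. To replace collections wholesale, drop them first as part of the restore (dump path hypothetical):

    mongorestore --drop /path/to/dump    # drop each collection before restoring it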
[18:26:48] <justanotherday> Does MongoDB support JOINs, i.e. linking between JSON documents?
[18:55:39] <TheComrade> Has anyone done anything like write a script that turns some sort of mongo query into command line parameters? So you can do something like: "cmd $(mongoquery ...)" that gets expanded? Or is this a weird approach because the query would be ungainly to use in this fashion?
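That approach does work: the shell can run a query non-interactively and print something word-splittable, which can then be expanded as cmd $(mongo ...). A sketch with hypothetical database, collection, and field names:

    # prints space-separated values suitable for $(...) expansion
    mongo mydb --quiet --eval 'print(db.items.distinct("name").join(" "))'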
[18:58:53] <testing22> how can i design this aggregation/reporting schema to be multitimezone friendly? http://docs.mongodb.org/ecosystem/use-cases/pre-aggregated-reports/ i have two thoughts: hourly aggregation only, this would cause way too much segmentation of data. or aggregating in to different timezone collections resulting in redundant data, lots and lots of it.
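One common compromise, hedged: keep a single set of pre-aggregated documents bucketed hourly in UTC, and shift the hours into the reader's timezone at query time, so no per-timezone copies are needed. A sketch of such a document (all names hypothetical):

    // one document per metric per UTC day, with hourly counters that can be
    // re-bucketed into any timezone when a report is generated
    db.stats.hourly.insert({
        _id: "site-1/2013-11-18",
        metadata: { site: "site-1", date: ISODate("2013-11-18T00:00:00Z") },
        hourly: { "0": 12, "1": 7, "23": 3 }   // UTC hours; shift at read time
    })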
[19:01:09] <ron> justanotherday: please keep it in the channel.
[19:31:22] <testing22> justanotherday: embedded documents are not like joining two documents together, as they are embedded and there is no virtual relationship to another document. sql-like joins are done with database references (DBRef), but there are limitations to using them. instead of thinking in traditional rdbms rows and columns, think of a "schemaless" document such as JSON and being able to store/index/query these documents. a good example is instead
[19:31:22] <testing22> of having a (o2m) user and addresses table, let's just embed addresses into each user
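testing22's example as an actual document, with hypothetical field values: the addresses live inside the user, and queries reach straight into the array.

    db.users.insert({
        name: "Ada",
        addresses: [
            { street: "1 Main St", city: "Springfield" },
            { street: "2 Oak Ave", city: "Shelbyville" }
        ]
    })
    db.users.find({ "addresses.city": "Springfield" })   // no join needed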
[19:32:49] <Astraea5> Hi. I've never used the Aggregate function, and I was wondering if someone could help me with a query.
[19:33:41] <Astraea5> I haven't got a pipeline yet. I'm still not sure I'm taking the right approach.
[19:34:31] <Astraea5> What I've got is a collection of documents, each with a category field. I want to count each category (got that working with $group) but also return the first 3 documents in each category.
[19:35:05] <Astraea5> I'm not sure if I can do that with the same query.
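One hedged way to do both in a single pipeline, on servers new enough to have the $slice aggregation operator (MongoDB 3.2+); "first 3" here means first in pipeline order, so add a $sort stage before the $group if a particular order matters. Collection name hypothetical:

    db.docs.aggregate([
        { $group: { _id: "$category",
                    count: { $sum: 1 },
                    docs:  { $push: "$$ROOT" } } },              // collect documents per category
        { $project: { count: 1, docs: { $slice: ["$docs", 3] } } }
    ])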
[19:39:58] <TheComrade> simple question: What is the common way for storing auth information on a client? I want to do like: mongo query.js without having to pass auth information each time.
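For the shell specifically, one common answer is ~/.mongorc.js, which the mongo shell runs automatically on startup; the credentials below are placeholders, and the file should be readable only by you.

    // ~/.mongorc.js
    db = db.getSiblingDB("admin");
    db.auth("myuser", "mypassword");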
[19:42:58] <Goopyo> can you do multiple pushes in one update?
[19:43:53] <testing22> on multiple documents or more than one push to a specific field?
[19:44:44] <Goopyo> I want to push to two arrays in a document simultaneously
[19:47:32] <Goopyo> Do I pass a list of ops instead of a document? http://api.mongodb.org/python/current/api/pymongo/collection.html#pymongo.collection.Collection.update
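No list needed: a single update document can name several fields under one $push, and in pymongo that same document is passed as the second argument to update. A shell sketch with hypothetical names (someId is a placeholder):

    db.things.update(
        { _id: someId },
        { $push: { arrayA: "valueA", arrayB: "valueB" } }   // one $push, two arrays
    )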
[20:14:16] <jaytaph> inside a cursor i'm doing a forEach(), how do i get the first item from an array inside each document? e.myarray.0.id doesn't even parse, right?
[20:41:57] <jaytaph> x0f: http://pastebin.com/iYwvyiRW So, basically i need to fetch all the ancestors.0.id values, and update this into document.top_document_id
[20:43:19] <jaytaph> i'm trying this with document.find().forEach(), which i've used before, but i never tried to fetch ancestors.0.id, so i'm figuring out how to retrieve this.. or maybe i must update all those documents (1.5M of them) another way :/
[20:49:43] <jaytaph> x0f: might have found something: element = e.ancestors.pop(); print(element.id); that works..
[20:50:36] <x0f> jaytaph, you are sure you can't just use the bracket accessor? array[index] to get the first element?
[20:55:15] <jaytaph> actually.. pop() doesn't work either.. it's destructive on the ancestors (and obviously, i'm popping the last item, not the first)
[20:59:45] <joshua> Can someone tell me what format this json date is in from the mongoexport dump "$date" : 1384766745205
[20:59:59] <joshua> I want to find a specific date but not sure how to translate it
[21:00:18] <joshua> Something is messed up somewhere (maybe) heh
[21:05:05] <DoubleAW> note that if you do new Date().getTime() in JS it will be in milliseconds
[21:05:13] <DoubleAW> so that is most likely what is happening
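It's milliseconds since the Unix epoch; the shell converts it directly, and the value in question lands on the same day as this chat:

    > new Date(1384766745205)
    ISODate("2013-11-18T09:25:45.205Z")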
[21:05:22] <jaytaph> x0f: it seems to be working now with [0]. Main issue was that the FIRST (and only) document didn't have any ancestors in it :(
[21:05:34] <jaytaph> so thanks for the help, it's working now!
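For reference, the working approach in one piece, assuming a hypothetical collection name documents: guard against missing or empty ancestors with a query on the first element, then use the bracket accessor, which unlike pop() leaves the array intact.

    db.documents.find({ "ancestors.0": { $exists: true } }).forEach(function (e) {
        db.documents.update(
            { _id: e._id },
            { $set: { top_document_id: e.ancestors[0].id } }   // non-destructive, unlike pop()
        );
    });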
[22:16:45] <kfox1111> map-reduce question. If you incrementally map-reduce into an existing collection, does it process every item in the resulting collection, or only if there is more than one item per key?
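For context, "incremental" here means rerunning map-reduce over only the new input and folding the results into the existing output via the reduce output mode; a sketch with hypothetical names (lastRun is a placeholder for the previous run's timestamp):

    var map = function () { emit(this.key, 1); };
    var reduce = function (k, vals) { return Array.sum(vals); };
    db.events.mapReduce(map, reduce, {
        query: { ts: { $gt: lastRun } },   // only documents since the previous run
        out: { reduce: "totals" }          // re-reduce into the existing output collection
    });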
[23:52:16] <kfox1111> what happens if a reduce ends up being greater than 16MB? Does the whole job fail right away? Does it continue and then fail? Does it silently fail?