#mongodb logs for Tuesday the 19th of November, 2013

[00:01:33] <aodhol> thanks testing22 i'm not sure i asked the question very clearly
[00:02:51] <aodhol> essentially i have two objects, let's say, and they are both the same except they have different values for a particular array node e.g. "obj1": {"someNode":"someValue", "someList":[1]}
[00:03:14] <aodhol> Then I wish to insert: "obj2": {"someNode":"someValue", "someList":[2]}
[00:03:53] <aodhol> but what i'd like is for obj1 and obj2 to be merged such that it would end up being: {"someNode":"someValue", "someList":[1,2]}
[00:04:30] <aodhol> there's a compound index on "someNode" and on "someList"
[00:04:56] <aodhol> (to enforce uniqueness / remove duplicates)
[00:05:33] <joannac> So you want an update instead of an insert?
[00:05:39] <testing22> do you need to insert obj2 before you combine obj1 and obj2? or could you essentially add '2' to someList of obj1 in that step?
[00:06:25] <aodhol> testing22: i suppose i could run through my data before persisting it and merge manually, but i thought that inserting / updating in Mongo would do the trick
[00:07:07] <joannac> aodhol: try update with the upsert option
[00:07:14] <aodhol> i did
[00:07:17] <testing22> yep, update, push, upsert
[00:08:20] <joannac> aodhol: then it's still not clear to me what you're trying to do.
[00:08:22] <aodhol> joannac: mycollection.update(arrayOfdataToPersist, upsert:true, function() { ..
[00:08:37] <aodhol> hang on i'll get a real example
[00:08:57] <joannac> yes please. pastebin it
[00:11:36] <SamHad> I have a couple quick questions
[00:11:51] <SamHad> first is there a difference between mongo and mongod
[00:12:08] <testing22> mongo is a client, mongod is the daemon
[00:12:24] <testing22> daemon / database server
[00:12:40] <SamHad> is there a reason when i type mongod nothing happens
[00:13:05] <aodhol> joannac / testing22 http://pastebin.com/VYRpf4y9
[00:13:08] <SamHad> when i type mongod i get this message: all output going to: /usr/local/var/log/mongodb/mongo.log
[00:13:44] <testing22> mongod by default forks to a background process with output redirected to that file
[00:13:46] <aodhol> joannac: testing22 note the dates array, there may be multiple objects with all details the same except for different dates
[00:14:07] <testing22> SamHad: check ps and you'll see it running
[00:15:40] <aodhol> joannac: testing22 so i'd like the different dates to be added to the array of the object already existing in the DB
[00:15:45] <aodhol> if that makes sense
[00:16:16] <SamHad> testing22 it doesn't show it running
[00:16:42] <joannac> SamHad: check that log file and see why it stopped?
[00:18:02] <joannac> aodhol: erm, so every time you add a date, you want that date to be added to the object in the date document?
[00:18:45] <aodhol> joannac: every time I add an object that contains the same details excepting the date, i'd like that date to be added to the array of the existing object
[00:18:51] <aodhol> document i mean
[00:19:22] <testing22> update, push, upsert = true should do the trick then
[00:19:37] <SamHad> its open now
[00:20:17] <SamHad> when i try to open the db it tells me SyntaxError: Unexpected token ILLEGAL
[00:20:45] <aodhol> hmm, i'm not sure where to begin with that testing22
[00:23:26] <joannac> aodhol: why doesn't an update with upsert work for you?
[00:23:42] <aodhol> dunno, but when i check the array it only has one entry
[00:23:47] <aodhol> could it be my indexes?
[00:23:55] <joannac> aodhol: pastebin your update command
[00:24:23] <testing22> aodhol: something like this: http://pastebin.com/5UGWcLSC
[00:26:36] <testing22> which works: http://pastebin.com/PSaWwiTF
[00:28:17] <aodhol> joannac: http://pastebin.com/KUjWV8Ln
[00:28:21] <aodhol> testing22: ahh I see!
[00:28:27] <aodhol> let me give it a try :)
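
A sketch of the pattern joannac and testing22 are suggesting, using aodhol's example fields from earlier (the pastebin contents aren't preserved; $addToSet stands in for $push so repeated values stay unique, which is what the compound index was for):

    // match on the shared details, add the new value to the array,
    // and upsert so the document is created if it doesn't exist yet
    db.mycollection.update(
        { someNode: "someValue" },
        { $addToSet: { someList: 2 } },
        { upsert: true }
    )
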
[00:40:28] <SamHad> why do I keep getting this SyntaxError: Unexpected token ILLEGAL error
[00:48:54] <testing22> are you typing something in to get that error?
[00:48:58] <testing22> SamHad
[02:36:50] <aodhol> By Jay that only worked!
[02:36:59] <aodhol> Thanks testing22 and joannac !!! :)
[02:37:04] <aodhol> Cheers lads!
[02:37:08] <testing22> you're welcome
[02:37:34] <aodhol> night :)
[04:09:24] <rmsy> Is it possible to do a sort by an element match?
[04:09:38] <rmsy> Well, that's not exactly what I want
[04:10:08] <rmsy> I have a field that is an array of hashes, and I want to sort by the number of hashes in the array that have a specified value for a specified key
[04:10:18] <rmsy> Is that possible?
[04:11:08] <jkitchen> rmsy: it certainly is. but it might not be within mongodb's query language
[04:12:22] <jkitchen> just doing a quick search and it looks like no, not natively within mongo
[04:12:43] <jkitchen> you're basically defining a custom sort function at that point, and it doesn't look like mongodb supports that
[04:14:50] <rmsy> jkitchen: would be cool if it could somehow have lambda sorting
[04:14:53] <rmsy> thanks for looking into it
[04:15:39] <jkitchen> it may have, but I don't think so, at least not from my quick search
[04:15:49] <jkitchen> I am by no means an expert
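
For what it's worth, the aggregation framework of that era can get part of the way there: it can't feed a custom comparator into a sort, but it can compute the match count per document and sort on that. The trade-off is that it returns _ids and counts rather than the documents themselves; the field names here are hypothetical:

    // count array elements with a given value for a given key, per document
    db.things.aggregate([
        { $unwind: "$hashes" },                        // one doc per array element
        { $match: { "hashes.key": "someValue" } },     // keep only matching elements
        { $group: { _id: "$_id", matches: { $sum: 1 } } },
        { $sort: { matches: -1 } }                     // most matches first
    ])
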
[05:15:05] <testing22> how can i design this aggregation/reporting schema to be multi-timezone friendly? http://docs.mongodb.org/ecosystem/use-cases/pre-aggregated-reports/
[09:40:02] <Zelest> is it possible to have a sparse unique index with a date field as the condition?
[09:40:13] <Zelest> e.g, only index things younger than 2 hours
[10:14:49] <Aartsie> Derick: Hi, how are you ?
[10:41:17] <Derick> Aartsie: good, yourself? :-)
[10:42:58] <Aartsie> Derick: I'm fine thank you :)
[10:45:24] <Derick> Aartsie: having a very flakey connection here - but can I help with something?
[10:46:15] <oxsav> hello
[10:46:59] <Derick> hi
[10:49:36] <Aartsie> Derick: I was wondering, if you give the PHP MongoDB driver an array with key => value, will it save it as an object?
[10:49:47] <oxsav_> hello.. I'm trying to use the simple REST API of mongodb. I'm trying to do a simple GET to the server.. I do it with success but it returns me this: Resource interpreted as Script but transferred with MIME type text/plain
[10:49:51] <oxsav_> any idea?
[10:50:18] <oxsav_> using Javascript to do GET to my server
[10:50:26] <retran> what's the simple REST API of mongodb
[10:50:53] <retran> it doesn't come with one natively
[10:51:03] <retran> you're not confusing mongo with couch are you?
[10:52:04] <oxsav_> I just have my service of mongo running with: mongod --rest --jsonp
[10:52:16] <retran> oh
[10:52:20] <retran> you want to use that weird thing
[10:52:25] <retran> that is ReadOnly
[10:52:37] <oxsav_> yes
[10:52:38] <oxsav_> http://docs.mongodb.org/ecosystem/tools/http-interfaces/
[10:52:46] <retran> why
[10:53:19] <oxsav_> because I only need to get some information from my application
[10:53:39] <retran> you need to use mime type JSON
[10:53:44] <oxsav_> do not need to change or add any data to it
[10:54:10] <oxsav_> yeh but where do I define that?
[10:54:20] <retran> are you new to using REST interfaces
[10:54:29] <retran> it's nothing to do with Mongo where you define it
[10:54:33] <retran> it will be your HTTP client
[10:54:55] <oxsav_> yes it's the first time I'm trying to use a REST API xD
[10:55:03] <oxsav_> sorry not mentioned that :p
[10:55:05] <retran> what HTTP client are you using
[10:55:21] <retran> are you trying to go there with your web browser
[10:56:37] <retran> i dunno
[10:56:45] <retran> i wouldn't try using that rest interface if i were you
[10:56:51] <retran> the documentation is not consistent
[10:57:01] <retran> and it even says you can crash the DB under some circumstances
[10:57:05] <retran> weird stuff bro
[10:57:20] <oxsav_> I have an application, only html and javascript, that has a button that just executes something like $.getJSON
[10:57:37] <oxsav_> http://api.jquery.com/jQuery.getJSON/
[10:58:18] <retran> jQuery is your http client
[10:58:22] <retran> (sorta)
[11:00:28] <oxsav_> yep
[11:02:17] <retran> Resource interpreted as Script but transferred with MIME type text/plain
[11:02:21] <retran> that's a jQuery error
[11:02:38] <retran> it's not a mongo error of any sort
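
A sketch of the usual fix, since mongod was started with --jsonp: ask jQuery for a JSONP request so a script-wrapped response is expected. The port is standard (the REST interface listens on the mongod port plus 1000, so 28017 by default); the database/collection names and the name of the callback query parameter are assumptions worth checking against the HTTP interface docs linked above:

    // dataType "jsonp" makes jQuery append a callback parameter
    // and evaluate the wrapped response
    $.ajax({
        url: "http://localhost:28017/mydb/mycollection/",
        dataType: "jsonp",
        jsonp: "jsonp",   // assumed name of mongod's callback parameter
        success: function (data) { console.log(data); }
    });
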
[12:25:39] <_boot> Hi, is there any way to 'create' a Date object in an aggregation pipeline? I need to snap the dates to year and month but I'd like to be able to keep them as actual dates
[12:27:42] <cheeser> like this? http://www.kamsky.org/1/post/2013/11/yet-another-way-to-group-by-dates-in-mongodb.html
[12:28:17] <cheeser> she has a series of posts on aggregations and time
[12:28:26] <_boot> sounds good, thanks
[12:29:03] <cheeser> np
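
The simple version of what that post covers, assuming a hypothetical created field: group on the $year and $month of the date, which snaps documents to calendar months (the post goes further and shows how to rebuild real Date objects from the parts):

    db.events.aggregate([
        { $group: {
            _id: { year: { $year: "$created" }, month: { $month: "$created" } },
            count: { $sum: 1 }
        } }
    ])
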
[12:41:03] <dccc> is it possible to create a unique compound index with a boolean field? i am trying to restrict to only one copy that has a given field1 and is_active.
[12:46:06] <cheeser> shouldn't be too hard to test.
[12:49:04] <dccc> i have index( { field1: 1, is_active: 1 }, { unique: true } ) but i don't get errors when saving another copy with the same field1 reference and is_active: true
[12:54:16] <cheeser> wait. what type is field1?
[12:54:22] <cheeser> http://docs.mongodb.org/manual/tutorial/create-a-unique-index/
[12:54:28] <cheeser> because that should work just fine
[12:55:51] <dccc> it's an ObjectId
[13:13:14] <dccc> ugh, missed brackets on the first save in coffeescript.
[13:13:32] <hyperboreean> I'm getting a "cursor didn't exist on server, possible restart or timeout?" error when I do a collection.find. I tried all kinds of things: lowering the batch_size, no batch_size, increasing it
[13:13:37] <hyperboreean> any other advice?
[13:16:27] <cheeser> dccc: better?
[13:16:39] <cheeser> hyperboreean: are you taking a long time to tierate?
[13:16:41] <cheeser> iterate
[13:17:42] <joannac> I think cursors time out after 10 mins?
[13:18:41] <cheeser> sounds right
[13:19:15] <cheeser> and that's probably too long but not my call
[13:23:04] <hyperboreean> cheeser: I'm dumping a field from each document in a file, that's all
[13:27:40] <cheeser> and how long is that taking?
[13:28:52] <hyperboreean> cheeser: have no idea ... I'm going to add some timing around that, though really ? it's just a fd.write(id), it shouldn't take that long, right ?
[13:30:39] <cheeser> it shouldn't, no.
[13:33:34] <hyperboreean> the thing is that it always gets stuck after the same number of documents ~460k out of 1.9M
[13:34:57] <cheeser> strange
[13:35:21] <dccc> cheeser: yep, it works. thanks.
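
The test cheeser suggested, as it would look in the shell (collection name and ObjectId hypothetical); once the brackets are right, the second insert fails with a duplicate key error:

    db.copies.ensureIndex({ field1: 1, is_active: 1 }, { unique: true })
    db.copies.insert({ field1: ObjectId("528b8f0a1f8e4d2b9cabcdef"), is_active: true })
    // same pair again -> E11000 duplicate key error
    db.copies.insert({ field1: ObjectId("528b8f0a1f8e4d2b9cabcdef"), is_active: true })
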
[13:37:48] <hyperboreean> yeah, it has nothing to do with the writes ... the average for the first 100k was below centiseconds
[13:43:02] <cheeser> dccc: great
[13:43:18] <redondo> I'm trying to install mongodb in debian wheezy as in http://docs.mongodb.org/manual/tutorial/install-mongodb-on-debian/. And the process hangs on getting the public key (step 1).
[13:43:22] <cheeser> hyperboreean: not sure what to tell you at this point.
[13:44:09] <hyperboreean> cheeser: is there any other way of dumping data from mongo in batches ?
[13:44:20] <hyperboreean> that you know of
[13:44:31] <cheeser> mongodump, for one
[13:46:06] <hyperboreean> interesting ... it has a --query option
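
If the 10-minute idle timeout really is the culprit, the shell (and most drivers) can also ask for a cursor the server won't reap; a sketch with a hypothetical collection name:

    var cur = db.mycollection.find().addOption(DBQuery.Option.noTimeout);
    cur.forEach(function (doc) { print(doc._id); });
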
[13:46:35] <redondo> ready... I downloaded it "manually". Thanks anyway.
[13:50:57] <scrdhrt> How do I remove an array? Do I have to empty it first?
[13:54:49] <kali> scrdhrt: like any other field, $unset
[13:55:05] <scrdhrt> Thanks :)
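
What kali is describing, with hypothetical names; $unset drops the whole field, array or not, with no need to empty it first:

    // removes the myArray field from every document that has it
    db.mycollection.update(
        { myArray: { $exists: true } },
        { $unset: { myArray: "" } },
        { multi: true }
    )
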
[15:34:28] <Thijsc> Hi all. I'm trying to help a client recover a collection that was accidentally emptied. It then turned out that there were actually no backups being made. Very stupid, I know.
[15:34:51] <Thijsc> Is there any known way to get deleted records back from a collection based on the data files and the journal?
[15:35:24] <Nodex> can't you just dump them into another database?
[15:35:29] <Nodex> + then export them
[15:35:40] <Thijsc> Dump in what sense?
[15:35:49] <Nodex> restore
[15:36:26] <Thijsc> I just have the data files, I tried to export them with mongodump but that resulted in a collection in the same state
[15:36:45] <Thijsc> Which is correct behavior of course
[15:36:47] <cheeser> the journal would only go back so far.
[15:36:58] <cheeser> if you dont' have a back up, you're probably hosed at this point.
[15:37:07] <Thijsc> That's what I'm assuming
[15:37:20] <Thijsc> Just asking on the off chance there's some weird hack that could help me
[15:37:48] <cheeser> i'm not discounting that but i wouldn't count on it.
[15:49:55] <DoubleAW> aloha from the mongodb conference in chicago
[15:50:43] <cheeser> aloha from the about to start Geneva MUG :)
[15:51:13] <Tomasso> I try to create a text index, and the largest key i access is 51 characters long.. but I get "ns name too long, max size is 128"
[15:51:54] <cheeser> i think it's complaining about your index name.
[15:51:59] <cheeser> what's the command you're giving?
[15:52:54] <ichilton> By default, will changing a single mongodb server to a master & slave slow down the master? Will it wait for slaves to finish writing before returning, or is that off by default?
[15:53:30] <cheeser> you're using master/slave and not replica sets?
[15:53:44] <ichilton> Yep
[15:54:12] <Tomasso> this is my command: db.mycollection.com.ar.stores_v1.ensureIndex({'store.info.categories.category':'text','store.info.categories.products.product.name':'text','store.info.categories.products.product.description':'text', 'store.info.companyname':'text'})
[15:54:12] <cheeser> why? m/s has been ... discouraged in favor of replica sets.
[15:55:18] <cheeser> Tomasso: give it a name
[15:55:29] <ichilton> Am going to go to replica sets eventually, but just want to add a slave as a backup at the moment.
[15:55:37] <cheeser> after all your index stuff: , { name : 'myindex' }
[15:55:47] <ichilton> cheeser: just wanted to add a very quick m/s for now
[15:56:14] <cheeser> ichilton: i dunno about m/s but with replSets, it would not necessarily wait for writes to the secondaries depending on your write concern
[15:56:20] <Tomasso> ooh.. you mean a name to the index ?
[15:56:34] <cheeser> yes
[15:56:45] <ichilton> cheeser: ok, thanks
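
What "depending on your write concern" cashes out to in the 2.4-era shell: by default a write is acknowledged by the primary alone; passing w: 2 to getLastError makes it block until one other member also has it. Collection name hypothetical:

    db.mycollection.insert({ x: 1 })
    // wait (up to 5s) until 2 members have the write
    db.runCommand({ getLastError: 1, w: 2, wtimeout: 5000 })
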
[15:56:46] <cheeser> by default the name is all the keys munged together.
[15:57:02] <cheeser> http://docs.mongodb.org/manual/reference/method/db.collection.ensureIndex/
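
Tomasso's command with cheeser's fix applied; the index name itself is made up, the point being that a short explicit name keeps the index namespace under the 128-character cap:

    db.mycollection.com.ar.stores_v1.ensureIndex(
        {
            "store.info.categories.category": "text",
            "store.info.categories.products.product.name": "text",
            "store.info.categories.products.product.description": "text",
            "store.info.companyname": "text"
        },
        { name: "stores_text" }
    )
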
[16:11:55] <oxsav_> hey
[16:12:41] <oxsav_> I don't know if I already asked this
[16:13:08] <oxsav_> but I'm trying to use the simple REST API of mongodb.. trying only to do GET operations
[16:13:16] <oxsav_> and I have a simple example
[16:13:19] <oxsav_> http://jsfiddle.net/W54qR/2/
[16:14:00] <oxsav_> It's giving me a strange error.. It seems that the request arrives at the server and the browser receives the response but it's not converted to the right type
[16:14:03] <oxsav_> any help?
[16:14:27] <Nodex> that's jquery, just change the headers
[16:14:34] <Nodex> * response headers *
[16:15:44] <oxsav_> Nodex change? how ?
[16:15:54] <Nodex> in your client
[16:15:59] <Nodex> go ask in #jquery
[16:16:39] <Nodex> and you probably shouldn't put readonly data available on the web for all to see, it's not secure
[16:17:27] <oxsav_> Nodex: it's only an experiment
[17:45:43] <Tomasso> is there some way to query a text index using find, and not the runCommand ? I need to run the query from ruby code, and runCommand is not available as a method in the ruby driver
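
In 2.4, text search is a database command rather than a find() operator (a $text operator for find only arrived later), so the shell form is the text command below; drivers without a literal runCommand method generally expose raw commands under another name (the ruby driver's DB#command, for instance). Search string hypothetical:

    // the collection name goes inside the command itself
    db.runCommand({ text: "stores_v1", search: "coffee", limit: 10 })
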
[18:11:16] <chaotic_good> does mongorestore overwrite existing files in a mongo?
[18:26:48] <justanotherday> Does Mongodb support JOIN? linking between JSON documents please
[18:26:59] <Zelest> No
[18:30:20] <justanotherday> thanks
[18:48:41] <justanotherday> I noticed that Mongodb supports embedding documents, can this be used to replace the JOIN command
[18:48:54] <ron> yes.. no.
[18:48:58] <ron> no... yes.
[18:49:12] <ron> you have to stop thinking in 'sql'.
[18:50:34] <chaotic_good> www.prevayler.org
[18:50:36] <chaotic_good> aw yeah
[18:50:42] <chaotic_good> throw away oracle
[18:55:39] <TheComrade> Has anyone done anything like write a script that turns some sort of mongo query into command line parameters? So you can do something like: "cmd $(mongoquery ...)" that gets expanded? Or is this a weird approach because the query would be ungainly to use in this fashion?
[18:55:49] <TheComrade> (noobie here :) )
[18:58:53] <testing22> how can i design this aggregation/reporting schema to be multi-timezone friendly? http://docs.mongodb.org/ecosystem/use-cases/pre-aggregated-reports/ i have two thoughts: hourly aggregation only, which would cause way too much segmentation of data; or aggregating into different timezone collections, resulting in redundant data, lots and lots of it.
[18:58:58] <testing22> thoughts?
[19:01:09] <ron> justanotherday: please keep it in the channel.
[19:31:22] <testing22> justanotherday: embedded documents are not like joining two documents together as they are embedded and there exists no virtual relationship to another document. sql-like joins are done with database references (DBRef) but there are limitations using this. instead of thinking in traditional rdbms rows and columns, think of a "schemaless" document such as JSON and being able to store/index/query these documents. a good example is instead
[19:31:22] <testing22> of having a (o2m) user and addresses table, let's just embed addresses into each user
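
The user/addresses example testing22 describes, as a document (all values hypothetical):

    // one-to-many without a join: addresses live inside each user document
    {
        name: "alice",
        addresses: [
            { street: "1 Main St", city: "Springfield" },
            { street: "2 Elm St", city: "Shelbyville" }
        ]
    }
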
[19:32:49] <Astraea5> Hi. I've never used the aggregate function, and I was wondering if someone could help me with a query.
[19:32:57] <testing22> sure
[19:33:13] <testing22> pastebin your pipeline
[19:33:41] <Astraea5> I haven't got a pipeline yet. I'm still not sure I'm taking the right approach.
[19:34:31] <Astraea5> What I've got is a collection of documents, each with a category field. I want to count each category (got that working with $group) but also return the first 3 documents in each category.
[19:35:05] <Astraea5> I'm not sure if I can do that with the same query.
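
A sketch of the counting half, with hypothetical names: the 2.4-era pipeline can count per category and $push fields along, but it has no way to keep only the first 3 per group, so that trim would happen client-side (or with one small follow-up query per category):

    db.docs.aggregate([
        { $group: {
            _id: "$category",
            count: { $sum: 1 },
            titles: { $push: "$title" }   // take the first 3 per group in the app
        } }
    ])
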
[19:39:58] <TheComrade> simple question: What is the common way for storing auth information on a client? I want to do like: mongo query.js without having to pass auth information each time.
[19:42:58] <Goopyo> can you do multiple pushes in one update?
[19:43:53] <testing22> on multiple documents or more than one push to a specific field?
[19:44:44] <Goopyo> I want to push to two arrays in a document simultaneously
[19:46:54] <testing22> yes you can do that
[19:47:32] <Goopyo> Do I pass a list of ops instead of a document? http://api.mongodb.org/python/current/api/pymongo/collection.html#pymongo.collection.Collection.update
[19:47:45] <testing22> update({ ... }, { $push: { "arr1": ... }, $push: { "arr2": ... } })
[19:48:18] <Goopyo> hmm interesting so that doesn't get read as a positional arg in pymongo?
[19:48:25] <Goopyo> i.e for upsert
[19:48:49] <testing22> the second argument contains the two pushes, there is no third argument in that ex.
[19:49:30] <Goopyo> oh I see then how are you putting that in a hash considering the keys $push and $push are the same?
[19:51:22] <testing22> you're correct that doesn't work
[19:51:47] <Goopyo> saw that here too http://stackoverflow.com/questions/9823140/multiple-mongo-update-operator-in-a-single-statement
[19:52:10] <testing22> this does: $push: { "arr1": "1", "arr2": "2" }
[19:52:39] <testing22> sorry - still early for me
[19:54:14] <Goopyo> ah great thanks
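
testing22's corrected form, spelled out with hypothetical names: one $push whose value names both arrays, plus the upsert flag Goopyo asked about:

    db.mycollection.update(
        { name: "foo" },
        { $push: { arr1: 1, arr2: 2 } },   // one $push, two fields
        { upsert: true }
    )
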
[20:14:16] <jaytaph> inside a cursor i'm doing a forEach(), how do i get the first item from an array inside each document? e.myarray.0.id doesn't even compile, right?
[20:29:03] <x0f> jaytaph, tried e.myarray[0].id ?
[20:30:21] <jaytaph> x0f: yeah. tried this. It seems that myArray returns multiple bson-objects. i can printjson(e.myArray) without problems
[20:30:40] <jaytaph> and in here, i can see my .id, but i don't know how to actually retrieve this value
[20:33:44] <x0f> jaytaph, would be helpful to provide a pastie/gist with your whole query and some sample data (json printed)
[20:33:59] <jaytaph> x0f, sure.. give me a sec
[20:41:57] <jaytaph> x0f: http://pastebin.com/iYwvyiRW So, basically i need to fetch all the ancestors.0.id values, and update this into document.top_document_id
[20:43:19] <jaytaph> i'm trying this with document.find().forEach() which i used before, but never tried to fetch ancestors.0.id so i'm figuring out how to retrieve this.. or maybe i must update all those documents (1.5M of them) another way :/
[20:49:43] <jaytaph> x0f: might have found something: element = e.ancestors.pop(); print(element.id); that works..
[20:50:36] <x0f> jaytaph, you are sure you can't just use the bracket accessor? array[index] to get the first element?
[20:54:53] <jaytaph> x0f: nope.. doesn't work..
[20:55:15] <jaytaph> actually.. pop() doesn't work either.. it's destructive on the ancestors (and obviously, i'm popping the last item, not the first)
[20:59:45] <joshua> Can someone tell me what format this json date is in from the mongoexport dump "$date" : 1384766745205
[20:59:59] <joshua> I want to find a specific date but not sure how to translate it
[21:00:18] <joshua> Something is messed up somewhere (maybe) heh
[21:00:37] <testing22> time in milliseconds
[21:01:05] <joshua> Is there a way to convert it back to ISODate in the shell?
[21:01:18] <joshua> or convert ISODate to that format I guess
[21:02:50] <joshua> Our app keeps saying 2013-03-10T02:31:35.000 (without the usual Z) but I don't know where its pulling it from
[21:03:00] <DoubleAW> that looks like milliseconds since unix eposh
[21:03:02] <DoubleAW> epoch*
[21:03:19] <DoubleAW> since it looks 1000x bigger than usual unix times
[21:04:40] <testing22> 1384766745205 * 0.001
[21:04:44] <testing22> :]
[21:04:54] <DoubleAW> yup
[21:05:05] <DoubleAW> note that if you do new Date().getTime() in JS it will be in milliseconds
[21:05:13] <DoubleAW> so that is most likely what is happening
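
Joshua's two conversions in the shell, round-tripping the exported value:

    new Date(1384766745205)
    // ISODate("2013-11-18T09:25:45.205Z")
    ISODate("2013-11-18T09:25:45.205Z").getTime()
    // 1384766745205
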
[21:05:22] <jaytaph> x0f: it seems to be working now with [0]. Main issue was that the FIRST (and only) document didn't have any ancestors in it :(
[21:05:34] <jaytaph> so thanks for the help, it's working now!
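
The shape of jaytaph's working loop, with the guard his first document needed (collection name hypothetical; ancestors and top_document_id come from his description above):

    db.documents.find().forEach(function (e) {
        if (e.ancestors && e.ancestors.length > 0) {
            db.documents.update(
                { _id: e._id },
                { $set: { top_document_id: e.ancestors[0].id } }
            );
        }
    });
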
[22:16:45] <kfox1111> map-reduce question. If you incrementally map-reduce into an existing collection, does it process every item in the resulting collection, or only if there is more than one item per key?
[23:13:53] <kfox1111> map-reduce question. If you incrementally map-reduce into an existing collection, does it process every item in the resulting collection, or only if there is more than one item per key?
[23:52:16] <kfox1111> what happens if a reduce ends up being greater than 16MB? Does the whole job fail right away? Does it continue and then fail? Does it silently fail?