PMXBOT Log file Viewer


#mongodb logs for Monday the 1st of July, 2013

[07:35:55] <[AD]Turbo> hola
[10:53:12] <sigurding> is there any option with the Java Driver to cast the results() of an aggregation() to an object?
[11:46:41] <pierre-b> Hi guys, having critical issue while inserting massive docs in nodejs: E11000 duplicate key error index, but my IDs are unique (millisecond timestamps)
[11:47:11] <pierre-b> anyone knows please ?
[11:48:06] <pierre-b> it only happens when I do a burst of requests (a few/sec)
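[Editor's note: the E11000 above follows directly from using millisecond timestamps as unique `_id`s: in a burst, several inserts land in the same millisecond and collide. A pure-Python sketch, no MongoDB needed; `make_id` and the clock values are hypothetical stand-ins for whatever the node app does. The usual fix is to let the driver assign ObjectIds, which include a per-process counter.]

```python
def make_id(t):
    """Millisecond-resolution timestamp _id, as described above (hypothetical)."""
    return int(t * 1000)

# A burst: 10 requests arriving 100 microseconds apart.
burst = [1372672026.0 + i * 0.0001 for i in range(10)]
ids = [make_id(t) for t in burst]

# All ten land in the same millisecond, so every _id collides --
# exactly the E11000 duplicate key error on the unique _id index.
print(len(ids), len(set(ids)))  # 10 inserts, but only 1 distinct _id
```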
[11:49:35] <SirPereira> Hello guys
[11:49:47] <SirPereira> Is there any possibility of getting Django 1.5.1 to work with MongoDB?
[11:59:44] <spuz> hello, does mongo do some sort of buffering of writes? I am getting some strange timings when doing lots of writes in a loop
[12:06:03] <spuz> Here's the pseudocode for what I am doing: https://gist.github.com/anonymous/55453cf76534ffb9ee29
[12:06:17] <spuz> can anyone explain why I might get that behaviour?
[13:20:23] <michael_____> is there any benefit to storing my collections in different databases (not sharding, just multiple names)?
[13:29:32] <spuz> Can anyone explain why some writes are fast and some very slow in this pseudocode example? https://gist.github.com/anonymous/55453cf76534ffb9ee29
[13:59:09] <kali> spuz: many reasons... if i get you, the first insert is slow, the following are fast, right?
[13:59:23] <spuz> kali: not really
[13:59:49] <spuz> the first write is also in a loop, so it goes: writeOne, writeMany, writeOne, writeMany etc. etc.
[14:00:04] <spuz> the first writeOne is fast, but subsequent writeOnes can be slow
[14:00:22] <spuz> and I think that depends on how many writes are done in the writeMany's
[14:00:44] <kali> start by looking at mongodb log
[14:03:40] <michael_____> kali: is there something like a slow query log in mongodb?
[14:06:10] <Derick> michael_____: yes
[14:06:24] <spuz> kali: strange, I have done many 1000s of writes but only two of them appear in the log
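[Editor's note: one plausible explanation for spuz's pattern, hedged since the gist isn't reproduced here: 2013-era drivers defaulted to unacknowledged writes (w=0), where the client returns as soon as the message is buffered, so most writes look instant and an occasional one absorbs the cost of draining the buffer. A toy model with made-up numbers, not the driver's actual buffering:]

```python
# Toy model of unacknowledged (w=0) writes: the client only pays when the
# outgoing buffer fills and must be flushed. All constants are hypothetical.
BUFFER_CAPACITY = 100   # writes that fit before a flush
FLUSH_COST_MS = 50.0    # cost of draining the buffer
SEND_COST_MS = 0.1      # cost of appending to the buffer

def timed_writes(n):
    timings, buffered = [], 0
    for _ in range(n):
        cost = SEND_COST_MS
        buffered += 1
        if buffered == BUFFER_CAPACITY:   # the flush stalls this one write
            cost += FLUSH_COST_MS
            buffered = 0
        timings.append(cost)
    return timings

t = timed_writes(250)
slow = [ms for ms in t if ms > 1.0]
print(len(slow))  # only 2 of 250 writes look slow, like spuz's log
```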
[16:11:08] <ccmonster> anyone know why pymongo would spit back an error referring to localhost when I'm clearly trying to connect to a remote instance?
[16:17:16] <astro73|roam> ok, note to self: MongoKit is retarded. Never use it for anything more sophisticated than a single-level document
[18:27:08] <Honeyman> Hello. Can anybody suggest a large enough benchmark of MongoDB/GridFS performance? For some reason, I couldn't find one, even for 10gen-provisioned AWS images...
[18:27:45] <bjori> we don't publish benchmarks
[18:28:13] <bjori> waaaay too many variables
[18:28:37] <bjori> all benchmarks lie, and the only benchmark that matters is a benchmark made precisely for your use case
[18:40:41] <kali> bjori: +1 this policy
[19:30:00] <tijmen> Hi all, I'm currently running 2.0.4, if I want to upgrade to 2.4.4, do I have to do this via 2.2 or can I just go for it?
[20:23:17] <ehershey> if I recall correctly it depends on certain features being used or not
[20:23:18] <ehershey> but
[20:25:11] <ehershey> all the info should be here: http://docs.mongodb.org/manual/release-notes/2.4-upgrade/
[20:26:02] <ehershey> like if you're using sharding, "The only supported upgrade path for sharded clusters running 2.0 is via 2.2."
[20:52:00] <Zeeraw> It's been a while since I've been in here.
[20:52:32] <Zeeraw> Anyone mind giving me some pointers for an issue I'm experiencing setting up a replica set?
[20:52:48] <Zeeraw> Been making good use of Google for like 1 hour now, and I ain't getting anywhere.
[20:53:59] <Zeeraw> The error is "initial sync couldn't connect to 127.0.0.1:27017", 27017 being currently PRIMARY.
[21:33:18] <Kobiak> If I want to delete all of my data and start anew, is it safe to "rm my_db*"?
[21:34:12] <Kobiak> The command "db.loghistory.remove();" is taking forever :|
[22:30:20] <Zeeraw> Figured it out
[22:30:28] <Zeeraw> Turns out I need to use a keyFile.
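[Editor's note: for reference, when auth is enabled every member of a replica set must share the same key file, or members refuse each other's connections, which can surface as sync errors like the one above. A minimal old-style (2.x-era) mongod config fragment; the set name and path are examples, not Zeeraw's actual values:]

```
# /etc/mongodb.conf -- old-style config; set name and path are examples
replSet = rs0
keyFile = /etc/mongodb-keyfile   # same file, mode 600, on every member
```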
[23:31:20] <garbagecollectio> i store an object in a mongodb db
[23:31:39] <garbagecollectio> and say it has {diet: 'blah', diet: 'blah2' }
[23:31:53] <garbagecollectio> can i somehow just search for blah2 in the db
[23:32:04] <garbagecollectio> where diet = blah2
[23:32:14] <garbagecollectio> will it iterate over both diet keys?
[23:32:35] <zell> garbagecollectio: a hash can't have the same key several times
[23:32:58] <garbagecollectio> yeah it can, that's how it's done
[23:33:28] <garbagecollectio> how could you like list multiple diets for a single document
[23:33:40] <garbagecollectio> without having same classifier
[23:34:52] <zell> garbagecollectio: RTFM
[23:35:02] <garbagecollectio> the manual says to have multiple keys
[23:35:08] <garbagecollectio> with the same name
[23:35:10] <garbagecollectio> and then the value
[23:35:57] <zell> db.doc.save({diet: 'blah', diet: 'blah2' })
[23:36:07] <zell> you will have only one key named diet
[23:36:30] <zell> blah2 will erase blah
[23:36:56] <zell> garbagecollectio: again, you can't have the same key twice in a hash.
[23:38:32] <Codewaffle> garbagecollectio, save({diet: ['blah', 'blah2', 'stuff']}), find({'diet': 'blah2'})
[23:39:09] <Codewaffle> there's a chance I'm remembering couchdb and not mongodb, though :/
[23:39:12] <garbagecollectio> no
[23:39:50] <garbagecollectio> zell, i used to know how to do this and forgot, do u mind telling me
[23:40:55] <zell> Codewaffle: this is correct
[23:45:04] <garbagecollectio> thats not correct
[23:45:09] <garbagecollectio> you're saying if you .save as an array
[23:45:18] <garbagecollectio> that .find will magically find it?
[23:45:54] <zell> garbagecollectio: try it
[23:46:10] <garbagecollectio> is this a new feature of mongo?
[23:46:18] <garbagecollectio> because it used to be if you were say making a list of artists
[23:46:24] <garbagecollectio> artist: blah, artist: blah
[23:46:25] <zell> garbagecollectio: sure it's called map reduce
[23:46:34] <garbagecollectio> no this is all wrong
[23:46:38] <garbagecollectio> I'm not talking about map reduce
[23:46:44] <zell> garbagecollectio: you should
[23:46:51] <garbagecollectio> no i shouldnt
[23:46:54] <garbagecollectio> map reduce is slow in mongo
[23:47:06] <zell> garbagecollectio: depends on who does it
[23:47:25] <zell> garbagecollectio: did you try using dependency injection?
[23:48:01] <zell> garbagecollectio: it really goes faster, mostly when you use a hash with the same key like {artist: 1, artist: 2, artist: 3}
[23:48:30] <zell> gd night
[23:49:05] <garbagecollectio> i just said that
[23:49:08] <garbagecollectio> weird
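[Editor's note: the back-and-forth above resolves in Codewaffle's favor, and it is MongoDB, not just CouchDB: a JSON object cannot carry two identical keys (the last one silently wins when parsed), while MongoDB's find() matches an array-valued field element-wise via multikey indexing. The key-collision half can be checked in plain Python; the find() filter below is a plain-Python equivalent, not a live query:]

```python
import json

# A literal with a repeated key silently keeps only the last value --
# which is why {diet: 'blah', diet: 'blah2'} loses 'blah' on save.
doc = json.loads('{"diet": "blah", "diet": "blah2"}')
print(doc)  # {'diet': 'blah2'}

# The working pattern: one key, array value.
collection = [
    {"_id": 1, "diet": ["blah", "blah2", "stuff"]},
    {"_id": 2, "diet": ["other"]},
]

# MongoDB's find({'diet': 'blah2'}) matches documents whose array
# contains the value; the equivalent filter in plain Python:
matches = [d for d in collection if "blah2" in d["diet"]]
print([d["_id"] for d in matches])  # [1]
```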