PMXBOT Log file Viewer


#mongodb logs for Tuesday the 2nd of July, 2013

[00:03:23] <Codewaffle> it's not map reduce, it's just the way it works
[00:03:29] <Codewaffle> might only work if the field is indexed, though
[00:04:12] <Codewaffle> garbagecollectio, http://docs.mongodb.org/manual/core/indexes/#index-type-multi-key
[00:04:55] <Codewaffle> (it does require that the field be indexed)
[00:30:00] <garbagecollectio> how do i save multiple referenceable values
[00:30:07] <garbagecollectio> for a single key
[00:30:27] <garbagecollectio> for example, diet: 'low fat', diet: 'high energy'
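[Editor's note: per Codewaffle's multikey-index pointer above, the usual answer is to store the multiple values as an array under one key. A minimal sketch, simulating MongoDB's array-matching semantics in plain Python (the document shape mirrors garbagecollectio's example):

```python
# Store multiple values for one key as an array:
doc = {"_id": 1, "diet": ["low fat", "high energy"]}

# A filter like {"diet": "low fat"} matches a document if ANY array
# element equals the value (this is what a multikey index accelerates).
def matches(document, field, value):
    stored = document.get(field)
    if isinstance(stored, list):
        return value in stored
    return stored == value

print(matches(doc, "diet", "low fat"))      # True
print(matches(doc, "diet", "high energy"))  # True
print(matches(doc, "diet", "vegan"))        # False
```

In the mongo shell the equivalent query would be `db.collection.find({diet: 'low fat'})`.]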
[00:33:40] <boterock> hello, i'm rather new to nodejs and mongodb, i am using mongolian, how do I tell mongo to find in a collection where id = any value of an array?
[00:39:12] <garbagecollectio> is mongodb good for doing calculations
[01:06:57] <Aartsie> boterock: i think it is better to use mongoose
[01:07:53] <Aartsie> boterock: http://mongoosejs.com/ is created by 10gen
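[Editor's note: whichever driver boterock ends up with, the standard answer to "field equals any value of an array" is the `$in` operator. A sketch of the filter document in Python (the `ids` list and collection are hypothetical; with a real driver the filter would be passed to `collection.find(...)`):

```python
ids = [1, 2, 3]  # hypothetical list of ids to match against

# $in matches documents whose field equals any element of the list
filter_doc = {"_id": {"$in": ids}}

# Minimal simulation of the operator for flat filters, to show the semantics:
def matches(document, flt):
    for field, cond in flt.items():
        if isinstance(cond, dict) and "$in" in cond:
            if document.get(field) not in cond["$in"]:
                return False
        elif document.get(field) != cond:
            return False
    return True

docs = [{"_id": 1}, {"_id": 5}]
print([d for d in docs if matches(d, filter_doc)])  # [{'_id': 1}]
```
]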
[04:30:21] <zzing> Is there something for mongo like phpmyadmin is for mysql?
[07:40:14] <[AD]Turbo> ciao all
[10:48:19] <Jalava> Hmm, is this bug: var obj = {"a": 1, "b": 1}; db.foo.save(obj); var obj2 = {"b": 1, "a": 1}; db.foo.save(obj2); db.foo.find();
[10:48:32] <Jalava> wait typo
[10:49:26] <Jalava> var obj = {"_id": {"a": 1, "b": 1}}; var obj2 = {"_id": {"b": 1, "a": 1}}; db.foo.save(obj); db.foo.save(obj2); db.foo.find();
[10:49:43] <Jalava> object field order is meaningful in _id for upsert / insert?
[10:55:16] <Jalava> http://pastebin.com/81EfUqaw
[10:58:03] <kali> Jalava: yes. bson hashes are ordered
[10:59:26] <Jalava> kali: shouldn't mongodb order fields before hashing?
[11:00:14] <Jalava> because you cannot change field order of regular object on javascript
[11:00:46] <kali> Jalava: well, that's a javascript problem :)
[11:02:22] <kali> Jalava: at least in JS, hashes are ordered... in many languages, "usual" hashes are not, so it leads to many interesting situations
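[Editor's note: kali's point can be shown without a server. Two objects with the same keys in a different order compare equal as values, but they serialize to different byte sequences; since BSON preserves key order, MongoDB sees two distinct `_id` values. A quick illustration with Python's insertion-ordered dicts:

```python
import json

obj = {"a": 1, "b": 1}
obj2 = {"b": 1, "a": 1}

# As values, the two are equal...
assert obj == obj2

# ...but the serialized key order differs, so as BSON _id values they differ.
print(json.dumps(obj))   # {"a": 1, "b": 1}
print(json.dumps(obj2))  # {"b": 1, "a": 1}
assert json.dumps(obj) != json.dumps(obj2)
```
]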
[13:06:10] <Nodex> Quiet in here today
[13:06:20] <kali> yep
[13:06:37] <Nodex> a couple of not-bad TV shows just started, kali
[13:06:43] <Nodex> "under the dome"
[13:06:47] <kali> and dexter is back :)
[13:06:50] <Nodex> and... "Crossing lines"
[13:06:56] <Nodex> yes and Dexter :D
[13:07:14] <kali> under the domes and crossing lines, ok, i'll have a look
[13:07:30] <Nodex> under the dome is a book by Stephen King
[13:07:44] <kali> i had a look at Graceland, but it's pure crap with a few hot guys
[13:07:53] <Zeeraw> Ah, is Under the Dome as good as people claim?
[13:08:09] <Nodex> Zeeraw the first one was, got the second one to watch later
[13:08:34] <Zeeraw> I've been contemplating giving it a shot.
[13:09:07] <Zeeraw> Struggled with my mongo replica set yesterday, so I had no time to watch dexter.
[13:09:17] <Zeeraw> I will today though. :D
[13:09:27] <Nodex> I don't think this final season will be good :/
[13:09:51] <kali> i'm curious about what they will do with charlotte rampling character
[13:09:52] <Zeeraw> yeah I doubt it as well.
[13:09:58] <Zeeraw> never the less, I'm watching it.
[13:12:52] <Nodex> at least 24 will be back soon :D
[13:13:51] <BurtyB> god knows how much they'll manage to screw that one up
[13:14:11] <Nodex> it's all the original cast, crew and EP's, they can't really screw it up lol
[13:14:32] <Nodex> should be a bit faster paced with 2 hours per episode though
[13:14:33] <kali> the original cast ? but they've killed them all :)
[13:14:46] <Nodex> original cast I mean Jack and Chloe
[13:14:50] <kali> :)
[13:15:14] <Nodex> will be interesting to see what they do with it
[14:16:32] <moe> hi i have a question regarding mongodb replication + sharding
[14:18:08] <moe> I have 3 machines : A, B, C . I have replication turned on. So A is primary, and B and C are secondary. Suppose I shard the db. I believe all writes will still go to A. So what is the advantage
[14:19:24] <moe> anyone
[14:26:46] <kali> moe: sharding is on top of replica sets
[14:26:53] <kali> moe: every shard is made of a replica set
[14:27:16] <moe> can you explain
[14:27:19] <moe> how
[14:28:01] <moe> a shard only contains part of the data. But with replication turned on, a secondary is still lagging behind the primary. So writes will still have to be done on the primary
[14:28:21] <moe> So how does that help with write scalability?
[14:30:53] <kali> if you want 4 shards, you'll have 12 machines: 4 replica set of 3 servers. each shard has a quarter of the data.
[14:31:27] <kali> each shard has a primary, so each primary will handle a quarter of the whole write load
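[Editor's note: kali's arithmetic as a sketch. Total machines and the per-primary write share both scale with the shard count (a replica-set size of 3 is assumed, as above):

```python
shards = 4
replica_set_size = 3

machines = shards * replica_set_size       # 12 servers total
data_per_shard = 1 / shards                # each shard holds a quarter of the data
write_share_per_primary = 1 / shards       # each primary takes a quarter of the writes

print(machines, data_per_shard, write_share_per_primary)
```
]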
[14:39:38] <touilltouill> Hi everyone, i'm trying to connect to my replicaset with php but every time i try to connect i have a php fatal error : Uncaught exception 'MongoConnectionException' with message 'No candidate servers found' can someone help me to solve my problem :)
[14:58:59] <Nodex> please pastebin your connection string
[15:07:36] <touilltouill> her my php connection lin
[15:07:37] <touilltouill> http://pastebin.com/Ecu2bmfY
[15:09:02] <touilltouill> this one sorry http://pastebin.com/FQkPC8KT
[15:11:13] <touilltouill> is the connection line the problem?
[15:18:46] <touilltouill> no?
[15:21:46] <balboah> how do you properly set a $comment with pymongo? it seems cursor.count() stops working if you do .find({"$query": …., "$comment": ...})
[15:28:50] <balboah> why aren't these giving the same result? db.users.find({"$query": {"gender": "female"}}).count() vs db.users.find({"gender": "female"}).count()
[15:49:28] <ehershey> balboah: I think using the $query operator changes how find() return values work
[15:49:34] <ehershey> http://docs.mongodb.org/v2.2/reference/operator/query/
[15:49:38] <ehershey> Note Do not mix query forms. If you use the $query format, do not append cursor methods to the find(). To modify the query use the meta-query operators, such as $explain.
[15:50:55] <balboah> ehershey: thanks. This causes me to have to know when my code might want to count later on, and breaks adding $comment :(
[16:13:45] <grouch-dev> hi all. I am running a purge query. 5 members of replica-set, 1 of which is arb. On the primary I run a db.collection.remove(...) and it fails after a few minutes because the Primary "can't see a majority".
[16:14:22] <grouch-dev> Nothing else is running during this time
[16:15:18] <grouch-dev> At first I see messages "is down (or slow to respond)"
[16:40:53] <bee_keeper> hi, i'm using the twitter streaming api to get constant info from twitter, some of which i will store. in your opinions, is mongodb a good use case for this scenario?
[17:42:29] <grouch-dev> seems like whenever mongo has a heavy load, it can't keep a primary
[17:44:47] <grouch-dev> In a 3 server (1pri, 1sec, 1arb) environment it works, but with 2 more secondaries, the primary loses majority and the query fails
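[Editor's note: the "can't see a majority" error follows from replica-set election rules: a primary must be able to reach a strict majority of voting members, i.e. floor(n/2)+1. So grouch-dev's 3-member set tolerates one unreachable member, while the 5-member set (arbiter included) needs 3 reachable members; members that are "slow to respond" under heavy load count as unreachable. The arithmetic:

```python
def majority(voting_members):
    # strict majority required to keep (or elect) a primary
    return voting_members // 2 + 1

print(majority(3))  # 2 -> a 3-member set survives 1 unreachable member
print(majority(5))  # 3 -> a 5-member set survives 2 unreachable members
```
]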
[17:45:24] <jaraco> What's the planned schedule for 2.4.5? We're holding off an upgrade for a fix included in that build.
[17:51:09] <orngchkn> Is anyone around that could help me figure out how to get Mongo to stop filling up logs with "warning: ClientCursor::yield can't unlock b/c of recursive lock ns" (hundreds of megs of logs per minute) while doing a findAndModify query?
[18:06:02] <thismax> Anyone know how to query for a document with an array that does NOT contain a certain element?
[18:07:39] <thismax> like if I had two documents: { features: ['a', 'b'] } and { features: ['c', 'b'] }, and I wanted to find any document that didn't have feature 'a'?
[18:13:10] <eagen> thismax: http://docs.mongodb.org/manual/reference/operator/not/
[18:14:17] <thismax> eagen: does this look right? collection.find({features: { $not: 'a' }})
[18:21:06] <eagen> thismax: Just did some testing. I think this is better: find({"features": { $exists: false}})
[18:22:12] <thismax> eagen: wouldn't that give me documents without a features array at all?
[18:22:51] <eagen> Bah sorry yes.
[18:23:16] <eagen> thismax: Ok then. I think you want $nin: http://docs.mongodb.org/manual/reference/operator/nin/#op._S_nin
[18:24:02] <eagen> so find({ features: {$nin: 'a'}});
[18:25:24] <thismax> ooh, I was able to make it work with find({features: {$ne: 'a'}})
[18:26:08] <thismax> $nin gives me uncaught exception: count failed: { "errmsg" : "exception: $nin needs an array", "code" : 13277, "ok" : 0 }
[18:26:18] <thismax> the docs are confusing :S
[18:26:33] <eagen> Does find( {features: {$nin: ['a']}}) work?
[18:26:47] <thismax> yep!
[18:27:23] <thismax> so it looks like find({features: { $ne: 'a' }}) and find({features: { $nin: ['a'] } }) are roughly equivalent
[18:27:24] <eagen> Ah ok.
[18:27:51] <thismax> thanks eagen!
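[Editor's note: the semantics thismax and eagen arrived at can be checked without a server. For array fields, both `{$ne: v}` and `{$nin: [v]}` match documents where no element equals `v`, and `$nin` insists its argument be an array, which explains the error above. A simulation:

```python
docs = [{"features": ["a", "b"]}, {"features": ["c", "b"]}]

def matches_ne(document, field, value):
    # {field: {$ne: value}}: matches if NO array element equals value
    stored = document.get(field)
    values = stored if isinstance(stored, list) else [stored]
    return value not in values

def matches_nin(document, field, excluded):
    # {field: {$nin: excluded}}: the argument must be an array
    if not isinstance(excluded, list):
        raise TypeError("$nin needs an array")
    stored = document.get(field)
    values = stored if isinstance(stored, list) else [stored]
    return all(v not in excluded for v in values)

print([d for d in docs if matches_ne(d, "features", "a")])
print([d for d in docs if matches_nin(d, "features", ["a"])])
# both print [{'features': ['c', 'b']}]
```
]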
[18:28:25] <orngchkn> Can anyone explain how a query with $explain: true can be running for 30 seconds or more?
[22:23:56] <awpti> Howdy folks. I used this block ( http://pastie.org/8104889 ) to create a boatload of records to fiddle with mongodb, yet test_collection appears empty -- db.test_collection.find() returns nothing and 'it' says there's no cursor. What am I missing here?
[22:24:47] <leku> use test_collection
[22:24:48] <leku> show collections
[22:24:51] <leku> what does that return?
[22:25:25] <awpti> Returns a blank line.
[22:26:08] <awpti> There's about 600MB of data in /var/lib/mongo
[22:26:30] <leku> is the save() the appropriate way to do that?
[22:26:33] <leku> i can't remember
[22:26:49] <leku> i'm new to mongo myself, taking the 10gen class online atm
[22:26:55] <awpti> That block of JS was right off their website.
[22:26:58] <leku> ah
[22:27:12] <leku> ok i have another theory then
[22:27:24] <leku> try dropping that stuff and then running it again?
[22:27:33] <awpti> Can't hurt :)
[22:27:42] <leku> this may or may not have happened to me last night
[22:27:55] <leku> but I upgraded an old database to mongodb 2.4.4 i think
[22:28:11] <leku> all my stuff disappeared so I just dropped/tried again
[22:29:47] <awpti> hmm.. should I expect the datadir to remain the same size?
[22:30:49] <leku> that seems awfully big
[22:31:14] <leku> i guess it is 1000000 records tho
[22:32:33] <leku> gotta run, good luck
[22:32:55] <leku> If you're interested in learning a lot more about mongodb, python, js, and json, check out the free online classes on the 10gen website
[22:33:05] <leku> https://education.10gen.com/dashboard
[22:36:56] <awpti> I'll check that out. I'm pretty familiar with Python.
[22:37:09] <awpti> Also, thanks leku. Re-running it worked.
[23:57:07] <kurtis> Hey guys, I'm trying to come up with a good solution for a "big data" problem. On my web-end, I use several million documents to perform various computations. These same set of documents are used for all of the computations. Is there a smart way to cache those documents for a short period of time? I thought about storing all of the ObjectIDs in Redis or something similar (for 15 minutes) but I'm not sure if that will help much
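[Editor's note: one simple shape for kurtis's idea, independent of Redis: cache the id list with a short TTL, so repeated computations in the window reuse it instead of re-running the query. A minimal in-process sketch; the fetch function is a hypothetical stand-in for the real MongoDB query, and Redis (with an expiring key) would play the same role across processes:

```python
import time

class TTLCache:
    """Cache a value for ttl_seconds, recomputing it on expiry."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._value = None
        self._expires_at = 0.0

    def get(self, compute):
        now = time.monotonic()
        if now >= self._expires_at:
            self._value = compute()        # e.g. fetch ObjectIDs from MongoDB
            self._expires_at = now + self.ttl
        return self._value

calls = 0
def fetch_ids():                           # stand-in for the real query
    global calls
    calls += 1
    return [1, 2, 3]

cache = TTLCache(ttl_seconds=900)          # 15 minutes, as suggested above
ids1 = cache.get(fetch_ids)
ids2 = cache.get(fetch_ids)
assert ids1 == ids2 and calls == 1         # second call served from cache
```
]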