PMXBOT Log file Viewer


#mongodb logs for Friday the 19th of October, 2012

[00:16:27] <Bilge> Can you apply negative increment to an increment type field?
[00:17:26] <Bilge> OK I guess so according to docs
[01:08:36] <opus_> oh I found my problem... I was running 2.0, that's why $elemMatch didn't work
[04:09:46] <dblado> how come when I call a custom function in reduce(), the code of the function is returned to the new collection?
[05:53:53] <SomeGuy678> Hi. I'm trying to remove an element from an array contained within a document if it has a field that matches my query, but only if it's the first element in the array. Here's what I've got so far: http://pastie.org/private/3jk3av9klrsgtmxvill7q
[05:56:10] <SomeGuy678> I basically need to know how to write a query for $pull that only matches the first element in the array
[05:56:23] <SomeGuy678> I think :)
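A hedged sketch of what SomeGuy678 seems to be after (collection and field names here are invented): $pull removes every matching element, but matching positionally on the first element in the query and then using $pop removes only the head of the array, and only when it matches.

```javascript
// Hypothetical shape: { _id: ..., items: [ { status: "done" }, ... ] }
// "items.0.status" tests the FIRST array element only, so the $pop
// (which always removes from one end) fires only when that element matches.
db.things.update(
    { "items.0.status": "done" },
    { $pop: { items: -1 } }       // -1 pops from the front of the array
)
```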
[06:53:31] <samurai2> hi, I'm trying to use mongodump and mongorestore to back up my sharded and replicated database, but now one of the mongod config servers always fails to start
[06:55:07] <kali> what does it say?
[07:07:37] <opus_> Can I use mongoose on an existing mongodb database? does the Schema have to be exactly the same?
[07:07:56] <opus_> I mean, can I use mongoose on an existing collection
[07:10:28] <kali> opus_: i can't help specifically with mongoose, but usually ODMs are able to work with pre-existing data. sometimes a few rare patterns or corner cases will be hard to manage though
[07:10:37] <kali> opus_: so my educated guess is "yes"
[07:13:19] <opus_> Is that what the Sluggable module is for?
[07:33:37] <[AD]Turbo> hola
[07:55:36] <fotoflo> Hello all. We have a question: we have two collections, A and B. A is a collection which contains links to objects from B. Is there a nice way to get A in one query (with objects from B embedded instead of returned as Object_ID)?
[07:56:44] <kali> fotoflo: this is the equivalent of a join. there are no pleasant ways
[07:57:37] <fotoflo> ok
[07:58:59] <fotoflo> so for example, we have albums which contain songs. we want to be able to access songs by themselves as entities and albums as entities, but every time we query an album we will likely want all songs… what's the optimal way to structure this?
[08:00:14] <Ephexeve_laptop> Hey guys, I did var x = db.mydb.find(); in python, you can slice everything by doing x[:] <- how is this done in mongo?
[08:01:18] <kali> fotoflo: you can probably embed songs in the albums as there will be a sensible number per album. add an index on the id of song in the nested array to be able to lookup a song
[08:02:34] <fotoflo> hmm ok
[08:03:05] <fotoflo> the a song lookup would be db.albums.songs.find() ?
[08:03:13] <fotoflo> s/the/then
[08:03:30] <kali> db.albums.find({ "song.id" : <...> })
[08:03:35] <kali> or songs.id
[08:04:30] <kali> that would return the whole album, with all the song
[08:04:32] <kali> s
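The embedding kali describes might look like this in the shell (the album title, ids, and `albums` collection name are invented for illustration):

```javascript
// Album with embedded songs; an index on "songs.id" is multikey, so it
// covers lookups against any element of the nested array.
db.albums.insert({
    title: "Are You Experienced",
    songs: [ { id: 1, name: "Hey Joe" }, { id: 2, name: "Purple Haze" } ]
})
db.albums.ensureIndex({ "songs.id": 1 })
db.albums.find({ "songs.id": 1 })   // returns the whole album document
```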
[08:04:41] <fotoflo> hmm, what if i just want the song?
[08:04:59] <fotoflo> and another question, lets say two albums have the same song
[08:05:07] <kali> ha ! n to n
[08:05:12] <NodeX> lol
[08:05:12] <kali> that's another beast
[08:06:10] <fotoflo> for example: the best of jimi hendrix and the Jimi Hendrix Experience both have the same cut of Hey Joe
[08:06:12] <kali> again, no pleasant ways to do that. either you denormalize (storing the song twice) or have two collections and do join by hand
[08:06:55] <fotoflo> lets say I'm ok joining by hand, but how do i find them both?
[08:07:17] <fotoflo> db.albums.find({ "song.name" : "hey joe" })
[08:07:18] <fotoflo> ?
[08:07:40] <fotoflo> that would return two albums right?
[08:07:54] <kali> if we assume the case is right, yes
[08:09:16] <kali> if you want to "search" (by text, not by id) you'll need to add something else to the mix, like elasticsearch or solr
[08:09:50] <fotoflo> ? regular expressions works right?
[08:10:26] <akamensky> @fotoflo: Yes, they work
[08:10:50] <kali> fotoflo: they do, but they will only use the index if there is a fixed prefix in the expression
[08:11:11] <kali> fotoflo: so something like "/hey joe/i" will perform a full table scan
[08:11:36] <kali> fotoflo: whereas "/^Hey/" will use the index
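kali's point about prefixes, as a shell sketch (the index and field names are assumptions):

```javascript
db.albums.ensureIndex({ "songs.name": 1 })
db.albums.find({ "songs.name": /^Hey/ })      // anchored prefix: can use the index
db.albums.find({ "songs.name": /hey joe/i })  // unanchored + case-insensitive: full scan
```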
[08:12:49] <fotoflo> ok
[08:13:23] <NodeX> a common pattern is to store the words you want to search as an array split on :space: and regex that
[08:13:51] <fotoflo> great. thanks
[08:13:55] <NodeX> and/or the reverse of them for pseudo wildcarding
[08:14:34] <NodeX> performance wise it's really not a good idea to do lots of regexing
[08:15:55] <fotoflo> NodeX: doesnt sound like it.
[08:17:03] <NodeX> doesn't sound like it?
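NodeX's pattern, sketched with invented data: keep a lowercased word array (and optionally reversed copies of each word) next to the title, so that anchored regexes against the array stay index-friendly.

```javascript
db.albums.insert({
    title: "Hey Joe",
    words:  ["hey", "joe"],   // title split on whitespace, lowercased
    rwords: ["yeh", "eoj"]    // each word reversed, for pseudo suffix matching
})
db.albums.ensureIndex({ words: 1 })
db.albums.find({ words: /^jo/ })    // prefix search against any word
db.albums.find({ rwords: /^eo/ })   // roughly "ends with 'oe'"
```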
[08:53:14] <naquad> if i need to run bunches of map/reduce commands on a pretty loaded database, then as i understand i should create a slave instance synced with the master and run them there to avoid efficiency loss on the master. right?
[08:53:36] <samurai2> kali : it's said something like this : bin/mongorestore --host cluster2 --port 20003 --db config /mongodb/mongodb-linux-x86_64-2.2.0/dump/config
[08:53:36] <samurai2> terminate called after throwing an instance of 'std::bad_alloc'
[08:53:36] <samurai2> what(): std::bad_alloc
[08:55:15] <samurai2> Even after doing the mongodump, now I try to printShardingStatus(), but the information about how many chunks on cluster1 and cluster2 doesn't print out anymore
[09:28:14] <remonvv> I prefer axiom
[09:28:37] <majoh> somebody beat me to it though... ;(
[09:32:25] <samurai2> I prefer hypothesis :)
[09:36:44] <NodeX> I prefer webuser-230948320948243
[09:50:50] <opus_> is there a way to force _all_ connections to mongodb to be safe and sync
[09:51:58] <opus_> i'm getting the strangest errors
[09:54:46] <remonvv> There is. What language?
[09:54:50] <remonvv> And what errors?
[10:01:09] <nacer> hi there
[10:01:27] <nacer> how can I delete a collection with a bad name "my-collection"
[10:01:36] <nacer> ?
[10:01:53] <nacer> the mongo client doesn't like this name
[10:08:19] <sandstrom> My replication has stalled. Anyone know what may have caused this and how the problem can be remedied? http://pastie.org/private/xhrumdkzplne9n3nfm1qg
[10:10:55] <remonvv> nacer, db.getCollection("my-collection").drop()
[10:11:06] <nacer> remonvv: Thanks \o/
[10:11:14] <remonvv> yw
[10:11:50] <remonvv> sandstrom, have relevant bits of log from the secondary?
[10:12:03] <remonvv> i'm assuming you mean the secondary stopped syncing
[10:12:20] <sandstrom> remonvv: yes, it's some 30 minutes behind the master
[10:12:35] <sandstrom> I'm also at 90% disk space usage, don't know if that matters.
[10:12:51] <sandstrom> I'll get the log output, 1 sec
[10:13:02] <remonvv> is the secondary a similar machine to the primary?
[10:14:22] <sandstrom> remonvv: they're not that similar, one is an AWS EC2 instance, the other is a VPS from a different provider, different ubuntu version
[10:14:32] <sandstrom> remonvv: "error failed to allocate new file: /dbdata/mongodb/production.6 size: 2146435072 errno:28 No space left on device" I think you found it :D
[10:15:37] <sandstrom> remonvv: Does it sound like a good idea to lock the database (db.runCommand({fsync:1,lock:1});), increase the storage and then unlock again? Or do I need to shutdown the database instance?
[10:20:01] <remonvv> sandstrom. Okay, so the secondary cannot preallocate a new database file.
[10:20:44] <remonvv> You can increase storage volume hot on EC2. Or is the secondary the one on the different provider?
[10:22:20] <remonvv> Increasing storage is the solution to your problem. The reason I asked about the hardware is that as a general rule you want all members in your set to have roughly similar performance so secondaries don't start lagging just because they cannot process the writes at the rate they're coming in from the primary.
[10:22:23] <remonvv> That's not your problem here though.
[10:45:09] <sandstrom> remonvv: The secondary is on AWS. With "hot", is there a shortcut provided by AWS for this, or are you thinking of unmount->snapshot->new volume from snapshot->resize filesystem->mount?
[10:54:36] <sandstrom> "Can't take a write lock while out of disk space", guess I have to shut down the server, right?
[11:18:17] <wereHamster> or add more disk space
[11:21:32] <remonvv> sandstrom, ah, yeah, too late then. And yes, you'd have to remount
[11:21:43] <remonvv> Hot I meant just take secondary down for a bit and resync
[11:21:47] <remonvv> Sorry, was afk
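The sequence sandstrom proposed, for reference. As the thread shows, it has to happen before the disk is completely full, since taking the lock itself requires a write lock:

```javascript
db.runCommand({ fsync: 1, lock: 1 })  // flush to disk and block writes
// ...grow the volume / filesystem here...
db.fsyncUnlock()                      // release the lock
```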
[13:18:04] <[AD]Turbo> is collection.update({ '_id' : { $exists : true } }, ...) equivalent to collection.update({}, ...) ?
[13:19:09] <NodeX> yeh
[13:19:28] <[AD]Turbo> thx
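For the record, both forms below touch the same documents, since every document has an _id; the multi flag (assumed here) is what makes update hit all matches rather than only the first:

```javascript
db.coll.update({ "_id": { $exists: true } }, { $set: { x: 1 } }, false, true)
db.coll.update({},                           { $set: { x: 1 } }, false, true)
```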
[13:29:08] <Jt_-> hello guys, i have a doubt: i need to cluster mongodb, and i'm trying to use replica sets with 2 nodes and one arbiter
[13:29:23] <Jt_-> my question is: in case of a split brain, is this configuration safe?
[13:32:42] <Vile> Guys, a question. Is it possible to replace all the config servers on a shard without taking the whole thing down?
[13:38:37] <wereHamster> one by one
[13:39:38] <Vile> Ok. changing a question slightly. Can it be done without physically copying the database? i.e. no rsync/cp/etc?
[13:40:34] <Vile> Like: add one more server, attach it to the config replica, wait until up to date, detach one of the existing ones, repeat
[13:40:48] <wereHamster> yes, that's how mongodb replication works.
[13:41:08] <Vile> but not how config DB works, it seems
[13:41:35] <Vile> http://docs.mongodb.org/manual/administration/sharding/#migrate-config-servers-with-different-hostnames
[15:06:34] <anthroprose> How would I check to see if an iterative array has a member that contains a matching value from an associative array? i.e. 'container' => array(0 => array('name' => 'value1'), 1 => array('name' => 'value2')). I'd like to search for any 'container' that has a 'name' == 'value2'
[15:07:19] <anthroprose> its like a 'name.value' $in 'container' or something...
[15:28:18] <wereHamster> find({ 'container.name': 'value2' })
[15:32:21] <anthroprose> were: actually just dug that out... I didn't realize it would handle the iteration...
[15:32:25] <anthroprose> thanks
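wereHamster's query works because dot notation on an array of subdocuments matches when any element matches; a sketch using the names from the question:

```javascript
db.coll.find({ "container.name": "value2" })
// If several conditions must hold on the SAME array element,
// use $elemMatch instead of plain dot notation:
db.coll.find({ container: { $elemMatch: { name: "value2" } } })
```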
[16:26:25] <jordanorelli> hmm, anyone know if it's possible to check if an item is in an array within a $project block in aggregation framework? that is, i have a document that has a field that is an array, and i want to $project true if this array contains some value, and false if it doesn't.
[16:28:48] <NodeX> jordanorelli : I dont think it is
[16:46:38] <jordanorelli> hmm, the $eq operator in aggregation framework: http://docs.mongodb.org/manual/reference/aggregation/eq/
[16:46:50] <jordanorelli> I can't understand that documentation. I don't know how to use that operator.
[16:47:46] <jordanorelli> oh christ it's really {"$eq": [first, second]}
[16:48:00] <_m> Yep
[16:48:04] <_m> http://docs.mongodb.org/manual/reference/aggregation/#expressions
[16:48:28] <_m> While there's no sample usage of $eq, the other samples give a decent reference point.
[16:53:50] <jordanorelli> unfortunately you can't use $eq to test an array field for a particular value.
[16:53:53] <jordanorelli> which is the one thing i need.
[16:54:03] <jordanorelli> and there's no $in or $contains that i can see.
[16:54:08] <jordanorelli> for agg framework, that is.
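One hedged 2.2-era workaround for jordanorelli's problem (collection and field names invented): instead of testing membership inside $project, $unwind the array and $match on the value, then $group back to one row per document.

```javascript
// _ids of documents whose "tags" array contains "wanted-value"
db.coll.aggregate(
    { $unwind: "$tags" },
    { $match: { tags: "wanted-value" } },
    { $group: { _id: "$_id" } }
)
```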
[17:51:07] <eka> hi all... to make a shard, do all mongod instances have to be replica sets?
[17:51:45] <eka> can't they be standalone mongod?
[17:54:23] <carli2> hi
[17:54:48] <carli2> are there any development tools for mongodb database-side code?
[18:02:56] <_m> carli2: Development tools?
[18:03:36] <carli2> _m: yep. sth like the firebug in firefox to analyze data structures, edit stored procedures, step through stored procedures or such
[18:06:20] <carli2> (sth:=something)
[18:45:04] <kenyabob_> if I am doing an update, and adding a new field, how would I reference an existing value in another field on the record
[19:11:12] <boll> Has anyone experimented with constructing a "mongo light" embeddable database, to directly handle bson-data retrieved from mongo in a desktop applicationn?
[19:56:31] <jmpf> after you unset a field across your collection - shouldn't that reduce the datasize of the collection? do you have to do something else afterwards?
[19:58:25] <ismarc> mongodb allocates space in files
[19:58:39] <ismarc> deleting fields (or even documents) won't reduce the size of the datastore
[19:58:52] <ismarc> you need to run a compaction (move all documents to the front of the files and empty space to the end)
[19:58:59] <ismarc> and then run a database repair
[19:58:59] <mids> http://docs.mongodb.org/manual/reference/command/compact/
[20:01:53] <ismarc> an important note… if your data files take up more than 50% of the disk (plus or minus a small amount), you won't be able to run the repair. You'll need to pass it a repairpath to use as the temp space for the repair on startup, and you cannot use repairpath if journalling is turned on
[20:02:28] <ismarc> (you can temporarily disable journalling in the config, etc., but was a completely unexpected thing that bit us)
[20:03:50] <jmpf> I'm looking at a collection that is 90G on close to a terabyte partition - was hoping removing the field would reclaim most of that
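The reclamation steps ismarc and mids describe, in shell form (the collection name is a placeholder; both commands block the database while they run):

```javascript
db.runCommand({ compact: "mycollection" })  // rewrite one collection's extents
db.repairDatabase()                         // rebuild the data files; needs free temp space
```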
[20:20:04] <thomas_> Hi I'm trying to connect to my mongo server from my local computer https://gist.github.com/3920451 any help would be greatly appreciated!
[20:24:28] <mids> thomas_: you can't connect to mongodb that way; https://jira.mongodb.org/browse/SERVER-3300
[20:25:46] <mids> instead use something like: mongo -u user -p pass domain.com:27017/database
[20:28:04] <thomas_> mids: I still get an error https://gist.github.com/3920451
[20:28:07] <_m> See also: man 1 mongo
[20:28:16] <thomas_> mids: I updated it
[20:28:52] <_m> thomas_ You need to use *your* username and password, not the words "username" and "password"
[20:29:05] <mids> \o/
[20:29:20] <_m> And by "your username and password" I mean the username and password for your mongo user, not your shell login.
[20:29:22] <thomas_> _m : hah I am changing it
[20:29:32] <thomas_> _m : yes I know
[20:29:38] <_m> Obviously.
[20:30:38] <Vile> _m: how do you know, maybe username and password are his username and password?
[20:30:59] <thomas_> Vile: haha and I have access to domain.com
[20:31:45] <Vile> domain.com is actually really easy. all you need is to run your own dns
[20:32:22] <_m> thomas_: Good luck.
[21:05:00] <thomas_> I'm still having issues connecting to my server https://gist.github.com/3920451
[21:05:18] <thomas_> Any assistance would be awesome!!!
[21:06:21] <Gargoyle> thomas_: o_O
[21:07:09] <Gargoyle> If that is genuinely what you are trying to do, I am amazed that you have managed to install mongo at all. Have you actually read what you are typing?
[21:07:19] <thomas_> @Gargoyle: Why you lookin at me funny?
[21:07:36] <Gargoyle> thomas_: Do you own "domain.com" ?
[21:08:29] <thomas_> @Gargoyle: I have swapped all the "variables" in the examples to not publicly share my creds.
[21:10:12] <Gargoyle> thomas_: Pretend for 1 min that you are me, and you have just looked at that gist for the first time. Think of the first 5 questions you are going to ask, then answer them and post a new gist.
[21:11:19] <thomas_> Gargoyle: ok
[21:29:45] <thomas_> Gargoyle: Added a comment to the gist with some details https://gist.github.com/3920451
[21:32:15] <Gargoyle> thomas_: Have you checked the docs to see if connecting from a 2.0.1 client is possible? Also, your local connection test does not use auth, so should you be using auth on your remote connection?
[21:33:14] <Gargoyle> Sorry, 2.0.6 client. But same thing to check?
[21:36:37] <Gargoyle> thomas__: Did you get those messages?
[23:41:03] <progolferyo> I have a shard setup, the first shard is not a replica set. Does anyone know the best way to convert the shard to a replica set and have the mongo config server pick up the change?
[23:49:48] <progolferyo> any suggestions? when i make the shard a replica set, the config server just does not pick up the change