PMXBOT Log file Viewer


#mongodb logs for Thursday the 3rd of January, 2013

[00:05:59] <SpNg> I'm working on a node.js app, and I have been trying to decide if it's better to open a new connection for every Mongo query, or is it better to maintain a single open connection and run all request through it?
[00:09:58] <timeturner> it's always the latter
[00:10:30] <timeturner> there's no reason to open multiple connections from the same app unless you need separation of concerns or something
[00:11:25] <SpNg> That's what I thought, some articles online were suggesting opening up several connections and maintaining a connection pool
[00:11:30] <SpNg> I always try to maintain one
[00:13:55] <SpNg> timeturner: this is the thread I'm talking about
[00:13:56] <SpNg> http://stackoverflow.com/questions/10656574/how-to-manage-mongodb-connections-in-a-nodejs-webapp
[00:14:21] <SpNg> it suggests that the mongo connection can only handle 1 query at a time
[00:14:30] <timeturner> that's not true
[00:14:41] <timeturner> the connection is an async worker basically
[00:15:29] <timeturner> so even if you open more than one connection and shell out multiple commands at one time the mongod will still process at the same rate
[00:17:23] <SpNg> Ok. Well the good news is that's how it has been engineered right now
[00:17:33] <SpNg> so we are in good shape
[00:18:15] <SpNg> this all came up after the app lost connection with the mongod server after running for a long time
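
For reference, the single-shared-connection pattern discussed above looks roughly like this with the node-mongodb-native driver (a minimal sketch; the URL, database name, and the findUser helper are illustrative, not from the discussion):

    var MongoClient = require('mongodb').MongoClient;

    var db = null; // one shared handle for the whole app

    // 'myapp' and 'users' are illustrative names
    MongoClient.connect('mongodb://localhost:27017/myapp', function (err, database) {
      if (err) throw err;
      db = database; // reuse this for every query instead of reconnecting
    });

    // every request runs through the same open connection
    function findUser(id, callback) {
      db.collection('users').findOne({ _id: id }, callback);
    }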
[00:46:22] <sander__> Does anyone have a geospatial guide to mongodb?
[03:03:02] <bmercer> how can I change the formatting of all the strings in my table?
[03:03:18] <bmercer> I've got a column price that I want to change from 1234 to 12.34
[03:04:23] <TkTech> A script and a whole lot of $set's
[03:07:27] <bmercer> I just changed them from an int to a string :)
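
The "script and a whole lot of $set's" TkTech suggests could look something like this in the mongo shell (a sketch, assuming prices are stored as integer cents; the collection name 'items' is illustrative):

    // assumes price is an integer count of cents; 'items' is illustrative
    db.items.find().forEach(function (doc) {
      if (typeof doc.price === 'number') {
        // 1234 -> "12.34"
        db.items.update({ _id: doc._id },
                        { $set: { price: (doc.price / 100).toFixed(2) } });
      }
    });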
[03:35:32] <mikesm> a lot of examples I'm seeing are using db.collection('name', {safe:true}) .. I see that the option 'safe' is now deprecated. what do we use instead?
[03:41:59] <TkTech> "w"
[03:42:36] <TkTech> mikesm: Say you're using pymongo, http://api.mongodb.org/python/current/api/pymongo/collection.html#pymongo.collection.Collection.save
[03:42:48] <TkTech> mikesm: Note it recommends using "w" in the safe deprecation note.
[03:47:36] <mikesm> TkTech: thank you
[03:47:45] <mikesm> very much
[03:49:09] <TkTech> No problem.
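
In other words, where the older node driver code above passed {safe: true}, the "w" write concern option takes its place (a hedged sketch, assuming the node-mongodb-native driver that mikesm's syntax suggests):

    // old: db.collection('name', {safe: true}); w: 1 = acknowledged writes
    var coll = db.collection('name', { w: 1 });

    // or per operation:
    coll.insert({ a: 1 }, { w: 1 }, function (err, result) {
      if (err) console.error(err);
    });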
[04:40:04] <MacWinner> facebook returns a multidimensional array for a user.. i'm totally new to mongodb, but is it totally bad if i just convert that array into a JSON string and store it into mongodb?
[04:40:39] <MacWinner> like with a php json_encode function.. just shove the return value into mongodb if it's not null
[06:54:18] <key2> hello
[06:55:07] <key2> does anyone know if the tutorial on mongodb.org is broken?
[06:55:46] <key2> I'm at the step that asks you to save a document, and then find it
[06:55:48] <key2> specifically
[06:55:50] <key2> db.scores.save({a: 99});
[06:55:53] <key2> then
[06:55:58] <key2> db.scores.find();
[06:56:05] <key2> but the return on the find is just [ ]
[06:57:52] <ron> are you sure that's what the tutorial says? doesn't it say db.scores.save({"a":99}); ?
[07:00:50] <key2> db.scores.save({a: 99});
[07:00:58] <key2> copied and pasted straight from the shell
[07:01:12] <ron> where does the tutorial say that?
[07:01:46] <key2> step 3, "Saving"
[07:02:00] <key2> 3. Saving Here's how you save a document to MongoDB: db.scores.save({a: 99}); This says, "save the document '{a: 99}' to the 'scores' collection." Go ahead and try it. Then, to see if the document was saved, try db.scores.find(); Once you've tried this, type 'next'.
[07:14:47] <key2> btw I added quotes like in your example and I'm still getting the same
[07:48:09] <joshua> Hey are there any smart people awake
[07:48:54] <joshua> I am just wondering: while the balancer is still balancing, will I keep seeing "too many chunks to print, use verbose if you want to force print", or will it display like that forever
[07:58:22] <joshua> Ah, figured out verbose mode sh.status({verbose:true})
[08:40:23] <Zelest> I have a replicaset and 3 servers running 2.0.6 .. is it safe/supported to simply upgrade from 2.0.6 to 2.2.2, one node at a time?
[08:40:34] <Zelest> or what is the best practice way of upgrading a production server?
[08:48:56] <NodeX> upgrade is fine
[08:49:02] <NodeX> just make sure you have a backup
[08:51:09] <ron> on second thought... let him burn.
[08:53:45] <NodeX> bleh
[08:53:57] <ron> you love me.
[08:55:23] <NodeX> 3 users, load average: 42.77, 45.53, 45.27
[08:55:25] <NodeX> eeek
[08:56:38] <ron> you're so cool
[08:58:38] <NodeX> alot cooler than you but not so cool
[09:00:54] <ron> learn english.
[09:03:56] <NodeX> I am not cool enough to elarn english
[09:03:59] <NodeX> learn*
[09:08:13] <NodeX> go get a job you bum
[09:08:39] <Zelest> http://php.net/manual/en/mongocollection.save.php
[09:08:47] <Zelest> is that example really correct?
[09:08:57] <Zelest> nvm, it is.
[09:09:06] <ron> NodeX: I have a job. I'm a meeting. so NYA.
[09:11:06] <NodeX> you're a meeting?
[09:11:12] <NodeX> LOL, learn English
[09:11:24] <joshua> Zelest: I upgraded our cluster from 2.0 to 2.2 without any issues.
[09:12:17] <ron> NodeX: dude, I have an excuse. I'm not a native speaker ;)
[09:13:11] <joshua> I converted ext3 to ext4 without reformatting and didn't lose any data. This is good news for our production environment when I get around to doing it on all the rest of the machines
[09:13:50] <NodeX> nor am i
[09:14:28] <ron> rrright.
[09:19:40] <dawra> excuse me
[09:19:50] <dawra> i did some update() and the records are gone/deleted :(
[09:20:50] <ron> OMG!
[09:20:54] <ron> WHAT DID YOU DO?!?!
[09:20:59] <ron> seriously, what did you do?
[09:21:14] <joshua> Do you have a backup
[09:21:59] <dawra> even I am not getting it, i don't think anything's wrong with the code, do you understand PHP?
[09:22:27] <ron> NodeX does.
[09:23:02] <dawra> i have a 1 day old backup :(
[09:24:48] <dawra> oh jesus
[09:25:08] <NodeX> can you pastebin your update code?
[09:25:10] <dawra> i passed a string "123" not (int) 123
[09:25:14] <dawra> hell
[09:25:28] <NodeX> #1 rule - cast everything ;)
[09:26:07] <dawra> i see, you may laugh at me now /me runs away
[09:27:22] <dawra> well, i wonder is there an easy way to choose all records whose version_id field is type string and convert to int ?
[09:27:27] <dawra> s/records/documents
[09:27:41] <dawra> i was actually doing some migrating, porting stuff
[09:27:54] <royh> ohai there.
[09:28:11] <NodeX> dawra : no, you will have to loop it
[09:28:32] <royh> is there a way to limit the resources a query can use? as in cpu time. I'm going to set up a replicaset for a number of different services and I don't want one query to take down the whole thing.
[09:28:34] <dawra> i see, ok thanks.
[09:28:35] <NodeX> (unless they're all the same integer)
[09:28:42] <dawra> no all different
[09:28:46] <NodeX> royh : no
[09:28:58] <royh> NodeX: ah, that's too bad. :(
[09:29:01] <dawra> at least i haven't lost data, which would have been a serious pain to extract from a 1 day old backup :D
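
The loop NodeX mentions for dawra's string-to-int migration could look like this in the shell (a sketch; 'mycoll' is an illustrative collection name, and $type 2 is the BSON string type):

    // 'mycoll' and 'version_id' follow dawra's description; $type 2 = string
    db.mycoll.find({ version_id: { $type: 2 } }).forEach(function (doc) {
      // note: the shell stores plain numbers as doubles; wrap the value in
      // NumberInt(...) if a true 32-bit int is required and your shell has it
      db.mycoll.update({ _id: doc._id },
                       { $set: { version_id: parseInt(doc.version_id, 10) } });
    });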
[09:29:45] <royh> another question. with replicasets, is it considered OK to have an arbiter and a mongod running on the same box?
[09:43:35] <joshua> royh: It will work but it kinda defeats the purpose
[09:43:45] <joshua> If that one box dies you are just left with a single node
[09:54:59] <royh> joshua: yeah, but in a cluster with only two nodes you need an arbiter as well right?
[09:55:48] <joshua> Yeah but an arbiter that goes down with a node isn't really functional
[09:55:56] <joshua> might as well just have 2 nodes
[09:56:28] <royh> won't that generate issues during the vote?
[09:56:38] <royh> having an even number of nodes i mean
[09:57:04] <Gargoyle> royh: Just follow the guidelines and have 3 real nodes.
[09:57:30] <royh> Gargoyle: that's what I said as well, but i lacked good arguments for my boss...
[09:57:59] <Gargoyle> royh: It's not an argument. RS req 3 nodes min.
[09:58:17] <Gargoyle> Or think of another backup / DR plan.
[09:58:24] <joshua> Arbiter doesn't need many resources, you could throw it on a VM or wherever
[09:58:40] <joshua> It doesn't store any actual db data
[09:58:52] <royh> most of our servers are vms
[09:59:13] <joshua> you could give it less ram and disk than your primary/secondary
[09:59:25] <royh> but yeah, I'll tell him some very experienced people in the mongodb channel told me so ;)
[09:59:59] <royh> yeah, it's not really an issue about resources. the problem is having to justify the maintenance of another box i guess
[10:00:17] <royh> we use less than 50% of our capacity
[10:00:22] <Gargoyle> royh: Disaster recovery!
[10:00:40] <joshua> If I had any say I would handle our own setup differently
[10:00:51] <joshua> I would put 2 nodes in one location and 2 nodes in another
[10:01:13] <Gargoyle> joshua: And when the link between the two locations dies?
[10:01:36] <Gargoyle> joshua: All your servers go down
[10:01:41] <joshua> Make that 3 then. heh
[10:01:50] <joshua> stick an arbiter in a 3rd location
[10:02:02] <Gargoyle> ^^ That's the trick!
[10:02:11] <joshua> All our stuff is single homed right now. It's kinda lame, but
[10:02:15] <joshua> I'm not the architect
[10:02:16] <royh> you can have more than one arbiter right?
[10:02:35] <joshua> royh: You can have more than one secondary
[10:02:42] <Gargoyle> royh: You can. But its name suggests you shouldn't
[10:02:44] <royh> ok
[10:03:08] <royh> what if the arbiter goes down and you're left with an even number of boxes?
[10:03:58] <joshua> I kinda like the idea of having 2 secondaries with one of them on delay
[10:03:59] <Gargoyle> royh: Doesn't matter. It's not a problem having an even number. What you need is a majority of the original config
[10:05:24] <royh> i guess i need to read up on how replicasets work
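
The two-data-nodes-plus-arbiter layout being discussed is set up from the primary's shell roughly like this (the hostname is illustrative):

    // hostname is illustrative; run on the current primary
    rs.addArb("arbiter.example.com:27017")  // votes in elections, stores no data
    rs.status()                             // the new member reports ARBITER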
[10:06:51] <joshua> royh: Is your company going with a support contract or just going commando?
[10:07:05] <royh> joshua: commando :P
[10:07:58] <joshua> Just wondering cause if you have support they run through a health check and help advise you on how to set it up etc. for best practice.
[10:08:17] <royh> joshua: got a link? might be interested in that :)
[10:08:45] <royh> 10gen.com?
[10:11:09] <joshua> http://www.10gen.com/products/mongodb-subscriptions
[10:11:18] <royh> cool. thanks :)
[10:12:47] <joshua> There's also the online course coming up soon: https://education.10gen.com
[10:13:45] <joshua> Doesn't cost anything
[10:52:06] <wayland> Hi! Is there a good guide on how to install c++ drivers on freebsd?
[10:54:40] <algernon> I would assume that databases/mongodb-devel in ports does just that
[10:54:57] <ron> wayland: sure. download ubuntu. install it. then install the drivers. done.
[10:54:59] <ron> \o/
[10:55:12] <wayland> wow
[10:55:27] <algernon> or, since that got moved to Attic, databases/mongodb
[10:55:43] <scoutz> hi
[10:56:03] <chrisq> is there a document anyone can point me to that explains when to use collections and when to use databases in mongodb?
[10:56:42] <joshua> collections exist inside a database
[10:57:02] <scoutz> i have a replica setup with a primary, secondary and an arbiter, however when i take down one of the nodes it doesn't elect a new primary, do i need a mongos to do this?
[10:57:10] <chrisq> joshua: yes, that much i figured out, but is there a reason not to just put it all in one database?
[10:57:16] <chrisq> all collections that is
[10:57:56] <joshua> I think the reason they suggested against it was if you plan on sharding
[10:58:00] <chrisq> like can you do joins between collection in the same database, but not between collection in different databases?
[10:58:41] <kali> scoutz: nope, it should work with these 3
[10:58:55] <kali> scoutz: mongos is only required if you need to shard
[10:59:27] <wayland> algernon: databases/mongodb-devel has been removed from the ports collection, i was trying to install it from source using "scons mongoclient" but i'm not sure how to install only the client libraries without building the whole thing
[10:59:35] <joshua> chrisq: I just had it explained to me like a week ago and I forget already the logic behind it
[11:00:09] <ron> wayland: I'd suggest asking in the mailing list/forums.
[11:00:45] <algernon> wayland: try databases/mongodb
[11:01:14] <algernon> wayland: granted, that also installs the whole thing..
[11:01:41] <chrisq> joshua: ah well, i'll keep on googling
[11:02:27] <scoutz> kali: So that means in my application I would need to specify the address of all 3 mongod nodes when connecting
[11:02:38] <kali> scoutz: it's better, but not required
[11:02:51] <NodeX> chrisq : you cannot join in mongo - period
[11:03:23] <kali> scoutz: the client will grab the whole replica set's list of nodes once it manages to connect to one node
[11:03:50] <kali> scoutz: but if you specify one single node in your app, and this node is down, the connection will fail to open
[11:04:55] <chrisq> NodeX: not even mongo _ids?
[11:05:12] <NodeX> nothing
[11:05:15] <scoutz> kali: ok but if I wanted it to use the secondary/failover I would need to specify that as well right?
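
What kali describes, listing several members as seeds so the initial connect survives one node being down, looks like this in a driver connection string (a sketch; the hostnames, port, database name, and set name are all illustrative):

    var MongoClient = require('mongodb').MongoClient;
    // hosts, 'mydb', and 'rs0' are illustrative
    MongoClient.connect(
      'mongodb://db1.example.com:27017,db2.example.com:27017,db3.example.com:27017/mydb?replicaSet=rs0',
      function (err, db) {
        // once connected, the driver learns the full member list from the set itself
      });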
[11:05:38] <duraid> does anyone know if mongo supports "procedural documents" (sorry if that's a bad term)
[11:05:43] <NodeX> some drivers attempt to mimic the behaviour by doing 2 queries but it will always be 2+ queries
[11:05:53] <NodeX> duraid : transactions?
[11:05:53] <wayland> algernon: ok, thanks… i'm going to try that
[11:06:04] <chrisq> NodeX: ok, so say i have 15 million songs, and i also have 1.5 million albums, in this scenario mongodb is useless?
[11:06:16] <NodeX> useless for what?
[11:07:07] <duraid> nodex: what I mean by 'procedural document' is a way to have some code act as if it is a large number of documents
[11:07:19] <duraid> without actually *adding* those documents to the DB
[11:07:32] <NodeX> duraid : I dont know what that means sorry
[11:07:48] <chrisq> NodeX: http://docs.mongodb.org/manual/applications/database-references/ this explains it better, and seems to indicate that you can in fact do references between documents
[11:07:59] <duraid> nodex: *ding* i know, an example would help
[11:08:13] <duraid> nodex: this is a pointless simple example, but should explain what I mean
[11:08:28] <NodeX> chrisq : in your scenario an rdbms with joins is a bad idea.... an album NEVER changes once sold so I would store the album and the songs in the album
[11:08:49] <NodeX> chrisq : You -can- reference but as I said it's 2 or more queries
[11:08:51] <duraid> nodex: imagine you have a mongoDB with a huge number of "people" documents that have a name and an age field
[11:09:20] <duraid> nodex: now suppose I have some code which can "parse names", e.g. given a name, work out a first and a last name
[11:09:22] <joshua> mongo has nested documents so you could have the songs inside the album in the same collection
[11:09:34] <NodeX> which is what I just said ;)
[11:09:42] <duraid> nodex: what i'm wondering is, what is the best way to take that database, and get a new one which has first and last name fields
[11:10:12] <duraid> eek bbi30
[11:10:15] <NodeX> a new database?
[11:10:42] <chrisq> NodeX: ok, thanks, in the music business all songs are tied to "releases" which might have 1 or more songs, in this case you'd have only one collection "releases" with all songs added to them?
[11:11:00] <NodeX> it would be an albums collection per your example
[11:11:05] <chrisq> all songs are probably included in quite a few releases though
[11:11:21] <chrisq> still thats an overhead we could deal with
[11:11:24] <NodeX> what is queried more ... albums or songs?
[11:11:30] <chrisq> NodeX: songs
[11:11:35] <NodeX> for speed you're better off duping data than querying more
[11:12:05] <NodeX> fact : a song name doesn't change = no updates, fact: an album listing doesn't often (if ever) change = no updates
[11:12:15] <chrisq> NodeX: both true
[11:12:22] <NodeX> which boils down to no need to join ever
[11:12:33] <NodeX> 1 collection of albums with nested songs
[11:12:52] <joshua> db.releases.find({"release.song":"Shake your booty"}).pretty()
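
The embedded layout NodeX and joshua are describing, one document per album with the songs nested inside it, might look like this (all field names are illustrative):

    // 'albums' and the field names are illustrative, not from the discussion
    db.albums.insert({
      title: "Greatest Hits",
      artist: "Some Band",
      songs: [
        { name: "Shake your booty", length: 214 },
        { name: "Another Song",     length: 187 }
      ]
    });
    // one query, no join, to find the album containing a song:
    db.albums.find({ "songs.name": "Shake your booty" });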
[11:13:06] <chrisq> NodeX: well, come to think of it, releases do change a bit, mostly rights or owners
[11:13:23] <NodeX> the updates are infrequent if ever
[11:14:14] <chrisq> NodeX: great thanks, i'll keep that in mind when we look at moving from the mssql server we use now :(
[11:14:32] <NodeX> if you track chart placements then you'll need a second collection
[11:14:53] <NodeX> and I would suggest an _id on each track in an album to track it
[11:14:58] <NodeX> track-> song
[11:14:59] <chrisq> there are in fact a few thousand updates a week, but i'm thinking thats not a huge number
[11:15:11] <NodeX> I do that an hour, it's no biggy
[11:15:16] <joshua> Yeah a unique identifier would be important if track names could have dupes that aren't the same song
[11:16:14] <chrisq> joshua: thanks for your input
[11:16:39] <chrisq> NodeX: i'm off to a good start, thanks again
[11:17:15] <NodeX> good luck, check back if you need more
[11:17:18] <joshua> chrisq: im kinda new to this and not a developer so don't take my word as gospel :)
[11:17:22] <chrisq> at first i'll be using mongodb for logging, but it seems to be a no-brainer, as it is used by so many logging tools already
[11:17:24] <joshua> http://docs.mongodb.org/manual/core/data-modeling/
[11:17:33] <NodeX> I use it for everything
[11:19:12] <joshua> The page has an example with book publishers, which is sort of similar to the album/song thing
[11:21:17] <NodeX> to be honest apart from some obscure grouping I have not found anything I really -need- SQL for in the last 2 years of using mongo
[11:25:16] <chrisq> joshua: sounds just like what i need, i'll take a look at it
[11:42:10] <duraid> nodex: sorry, back
[11:43:37] <duraid> nodex: i think i'd better just sit down with mongodb for a while and see if I can poke it enough to do what I want ;)
[12:05:21] <aroj> hi
[12:05:32] <aroj> a question about mongodb replication
[12:05:54] <aroj> is there a max latency requirement for mongodb replication in a replica set
[12:06:25] <aroj> in case the replication is happening across two data centres
[12:07:05] <aroj> like, a max latency of say 15ms or something?
[12:08:13] <NodeX> no
[12:08:15] <kali> i'm not aware of such an issue, and given the scenarios used as examples (transcontinental RS) i would assume there is no such limitation
[12:08:40] <kali> what i do know is, the machines' clocks need to be synced to an ntp server
[12:08:55] <kali> or Bad Things happen
[12:09:54] <_garbage_> kali: for example? Any link to read more about it?
[12:10:30] <kali> http://www.mongodb.org/display/DOCS/Data+Center+Awareness
[12:10:41] <kali> sf1, ny1... this suggests latency > 15ms
[12:10:58] <kali> uk1
[12:11:25] <kali> Asia, Africa, later on
[12:11:37] <aroj> thanks kali, let me read up this link.. seems useful
[12:18:02] <aroj> so seems like mongodb replication will work fine even with latency > 15 ms
[12:18:25] <aroj> is there an upper limit? say replication will fail if latency is greater than X ms?
[12:19:04] <Derick> no
[12:19:16] <Derick> it will only fail when you run out of recoverable space in your oplog
[12:19:57] <Derick> sorry, didn't read properly
[12:20:08] <Derick> but, network lag is fine... but your clocks still should be in sync
[14:15:48] <ehershey> is there something like a mongodb reference application?
[14:16:45] <ehershey> I want a good client tier for playing with db setups
[14:36:24] <NodeX> dunno what that is dude
[14:50:55] <kali> i think he means something similar to the "pet clinic" in the spring world
[14:55:01] <ehershey> I think so
[14:55:17] <ehershey> a sample application
[14:55:33] <ehershey> but ideally a little more comprehensive than 'sample' implies to me
[14:55:50] <ehershey> but probably yeah
[14:57:25] <ehershey> I will google with "sample" instead of "reference"
[14:59:54] <kali> ehershey: thing is... this is usually meant to demonstrate the ORM/ODM and upper layers
[15:24:39] <ehershey> I found a couple interesting apps that try to do that
[15:24:49] <ehershey> which would work for me if there was a good simple one
[16:25:25] <JakePee> What's the best way to approach indexing a large table that has a common query param
[16:25:56] <JakePee> That is, every query against a collection will have 'user_id'
[16:26:31] <NodeX> table?
[16:26:37] <NodeX> do you mean a collection?
[16:26:37] <JakePee> sorry, collection
[16:26:54] <NodeX> ensureIndex({user_id:1});
[16:27:05] <NodeX> db.collection.ensureIndex({user_id:1});
[16:27:21] <kali> and include user_id in all the other index you add
[16:27:32] <JakePee> ah, okey
[16:28:03] <kali> depending on the cardinalities, the user_id may actually be useless
[16:28:04] <JakePee> it's a very large table
[16:28:32] <NodeX> define very large?
[16:28:33] <JakePee> and i wasn't sure making 'userid' part of multiple indexes would be efficient
[16:29:05] <JakePee> 17 million rows
[16:29:20] <NodeX> there is a write up on compound indexes and how they can be re-used
[16:32:34] <JakePee> http://docs.mongodb.org/manual/applications/indexes/#use-compound-indexes-to-support-several-different-queries
[16:33:52] <JakePee> i'm looking to support queries { 'user_id' : val, 'fields.name': val}
[16:34:03] <JakePee> and { 'user_id' : val, 'fields.age': val}
[16:34:24] <JakePee> as well as a few others in the 'fields' subdocument
[16:38:50] <JakePee> the issue is that if i try doing db.coll.ensureIndex({'user_id' : 1, 'fields.name':1, 'fields.age': 1}), the query db.coll.find({'user_id': val, 'fields.age': val}) doesn't get caught
[16:41:46] <JakePee> and if i do db.coll.ensureIndex({ 'user_id' : 1, 'fields.name' : 1}) and db.coll.ensureIndex({ 'user_id' : 1, 'fields.age': 1}), the 'user_id' field is replicated for each index
[16:42:34] <elux> hey guys
[16:43:00] <elux> ive been running mongodb for a little while, and its been fine.. but for whatever reason i can no longer run the mongo shell.. this is what happens: https://gist.github.com/cd1b81db44d4da0bf332
[16:43:10] <elux> any ideas..? did some dependent library get overridden..?
[16:49:01] <elux> pretty brutal i cant even open the mongo shell lol
[16:49:10] <elux> 2.2.2
[16:55:53] <elux> .........
[16:57:18] <NodeX> that appears on your screen?
[16:57:24] <elux> yes...
[16:57:35] <elux> ive tried reinstalling mongodb completely.. everything associated with it..
[16:57:56] <NodeX> what operating system?
[16:58:10] <elux> amazon linux .. which is a fork off centos i believe
[16:58:42] <elux> mongo -h runs at least..
[16:58:59] <NodeX> looks like a javascript interpreter error to me, are you running V8 or spidermonkey?
[16:59:18] <elux> looks like v8
[16:59:24] <elux> i am assuming thats more unstable..
[16:59:44] <elux> ill try reinstall v8
[16:59:59] <NodeX> pretty sure you have to compile V8 support in
[17:00:24] <NodeX> try the latest spidermonkey libs
[17:00:25] <elux> i just installed from the package manager (yum) .. the server has been working fine... and im pretty sure the client did at one point too
[17:02:53] <elux> im going to compile mongo myself..
[17:03:09] <elux> thanks for the help
[17:03:16] <NodeX> ;)
[17:08:51] <JakePee> https://gist.github.com/3d24739c3f3e0742eab0
[17:09:00] <JakePee> sorry, wasn't articulating well earlier
[17:15:39] <elux> hrmm lots of mongodb-src-r2.2.2/src/third_party/boost/boost/date_time/gregorian/greg_day.hpp:20: undefined reference to `std::out_of_range::~out_of_range()'
[17:15:40] <elux> build/linux2/normal/third_party/boost/libs/thread/src/pthread/thread.o:/opt/mongodb-src-r2.2.2/src/third_party/boost/boost/date_time/gregorian/greg_day.hpp:20: more undefined references to `std::out_of_range::~out_of_range()' follow
[17:15:44] <elux> and then build failure.. any suggestions?
[17:17:16] <kali> elux: mismatching boost versions?
[17:17:39] <elux> cool.. ill try to remove any boost packages i have on my system
[19:20:00] <ekristen> anyone familiar with cloud foundry and how it implements mongodb?
[19:20:40] <NodeX> hopefully the engineers @ cloud foundry :P
[19:28:59] <ehershey> want to play with cloud foundry
[19:29:01] <ehershey> but haven't
[19:37:17] <defunctzombie> can I do a findAndModify using tailing: true and awaitdata: true ?
[19:38:07] <zastern> Is it normal for rs.initiate to take a while? My db is empty.
[19:41:26] <ekristen> zastern: not really
[19:41:31] <ekristen> usually happens instantly
[19:41:40] <ekristen> at least in my experience
[19:42:23] <zastern> yeah in my testing in virtualbox it happened instantly
[19:42:32] <zastern> on rackspace its not really working
[19:42:32] <zastern> hmm
[19:45:10] <zastern> I have no clue why it's taking so long
[19:45:54] <zastern> and load is 2, all from mongo. this makes no sense
[19:46:41] <JakePee> can you make sparse multikey indexes on an array
[19:47:43] <zastern> hmm, worked on three other servers
[19:48:03] <JakePee> if so, how does it act (do documents with empty arrays not create index entries on the sparse index)
[19:52:27] <NodeX> what are you trying to achieve?
[19:53:12] <JakePee> as small of a multi key index as possible on an integer array
[19:53:27] <NodeX> just index the array
[19:53:45] <NodeX> it's treated like any other field
[19:53:55] <zastern> i figured out my problem. im hitting the secondary nodes by fqdn but the primary is just putting its hostname into the config
[19:54:32] <JakePee> k, the field is set on every document, but it's an empty array for the majority of them
[19:54:50] <NodeX> it's fine, it will be treated as such
[19:55:09] <JakePee> sounds good, thanks
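
For the record, the sparse variant is just an option on ensureIndex (a sketch; 'ints' is an illustrative field name, and whether documents with empty arrays produce index entries is worth verifying on your version rather than assuming):

    // 'ints' is illustrative
    db.coll.ensureIndex({ ints: 1 }, { sparse: true });
    db.coll.stats().indexSizes;   // compare the sparse index size against a regular one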
[19:56:12] <Lucretiel> Hey all question about mongo and sorts
[19:56:29] <Lucretiel> it says in the guide to use limit when you do a sort
[19:57:06] <Lucretiel> to prevent it from using too much memory
[19:57:42] <Lucretiel> How does that work?
[19:57:50] <Lucretiel> since the limit happens AFTER the sort?
[19:58:02] <NodeX> it limits the cursor size
[19:58:22] <Lucretiel> But it still has to run through the whole collection
[19:58:24] <Lucretiel> when sorting
[19:58:48] <NodeX> the index is already sorted on asc/desc
[20:00:02] <Lucretiel> if it isn't indexed
[20:00:18] <Lucretiel> oh wait- it used partial sort
[20:00:23] <Lucretiel> I forgot that that was a thing
[20:00:31] <Lucretiel> cool thanks all
[20:00:34] <Lucretiel> http://www.cplusplus.com/reference/algorithm/partial_sort/
[20:01:41] <NodeX> it won't sort much without an index
[20:03:22] <Lucretiel> I mean, it'll sort without an index
[20:03:24] <Lucretiel> just slowly
[20:03:46] <kali> Lucretiel: it will refuse to do it if there are more than ~1000 docs
[20:03:53] <kali> Lucretiel: and no index
[20:05:54] <Lucretiel> because of the 32MB limit?
[20:05:55] <NodeX> ^^ better explanation of what I was trying to say ;)
[20:05:59] <Lucretiel> gotcha
[20:10:26] <Lucretiel> well, I did it with 10,000 just now. They're just small test docs, though. With bigger docs I'm sure it's the same thing.
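
The pattern under discussion, for reference: with an index on the sort key the server walks the index in order and stops at the limit; without one it keeps only the current top N in memory (a partial sort), subject to the in-memory sort limit kali mentions. A sketch with an illustrative field name:

    // 'x' is illustrative
    db.coll.ensureIndex({ x: 1 });            // makes the sort an index walk
    db.coll.find().sort({ x: 1 }).limit(10);  // top-10 without sorting everything in RAM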
[20:44:21] <starburst> hello :-)
[20:45:31] <starburst> I have a mongo database called mongodb_production and a database called mongodb_development... what's the best way to copy contents from one DB to another? They exist on different servers (and networks). I can only scp/sftp between the two servers
[20:46:29] <starburst> for our mysql db we use mysqldump/mysql < blah.sql so I was wondering if there was an equivalent method.
[20:47:35] <rossdm> http://docs.mongodb.org/manual/reference/mongodump/
[20:52:16] <kali> alternatively: http://docs.mongodb.org/manual/tutorial/copy-databases-between-instances/
[20:52:24] <starburst> that only shows if it has the same db name
[20:53:02] <starburst> the databases are named differently mongodb_development and mongodb_production
[20:54:11] <kali> so what's the problem ?
[20:54:50] <starburst> if I backup mongodb_production (dbname) how would I restore it into mongodb_development
[20:55:07] <starburst> the dump/restore would move it as mongodb_production on my development server
[20:55:40] <kali> look at mongorestore arguments... <path> and -d should help you
[20:56:05] <starburst> ah that's what I was looking for
[20:56:22] <starburst> didn't see the --db option for restore... I need new glasses.. thanks
[20:58:59] <zastern> ekristen: it takes a while after doing rs.init() because it's doing this - https://gist.github.com/4447183
[20:59:20] <kali> you need a better file system
[20:59:29] <kali> or disable preallocation if this is a development setup
[21:00:06] <kali> http://www.mongodb.org/display/DOCS/Excessive+Disk+Space#ExcessiveDiskSpace-DatafilePreallocation
[21:12:03] <Lucretiel> New question about mongo sorting
[21:12:18] <Lucretiel> if only some of the documents have the field I'm sorting
[21:12:39] <Lucretiel> they appear first (if ascending)
[21:12:44] <Lucretiel> is there a way to make them not appear
[21:12:46] <Lucretiel> or even better
[21:12:53] <Lucretiel> make them appear at the end?
[21:50:42] <ephesius> is there an easy way to find documents with non null values?
[21:58:08] <Lucretiel> null as in json nil, or null as in doesn't exist?
[22:08:47] <Lucretiel> is there a soft set operator in mongo update?
[22:09:02] <Lucretiel> sets only if the field doesn't exist?
[22:09:16] <lilred> hey guys, I'm new to MongoDB, I'm wondering what driver/layer/wrapper I should use to connect from Node.js
[22:09:34] <lilred> I looked at Mongoose but I can't figure out how to normalize data in Mongoose
[22:34:03] <rekibnikufesin> Lucretiel: use $exists
[22:34:50] <rekibnikufesin> as in> db.collection.find( { "fieldIwant": { $exists: true } } )
[22:51:07] <Lucretiel> I need to update it, though
[22:51:12] <Lucretiel> so like
[22:51:25] <Lucretiel> if I have a document {'x': 1}
[22:51:52] <Lucretiel> and I do update{..., {'$softset',
[22:52:20] <Lucretiel> update{..., {'$softset': {'x':2, 'y':3}})
[22:52:39] <Lucretiel> it should end up as {'x': 1, 'y': 3}
[23:19:02] <_aegis_> Lucretiel: how atomic do you need to be?
[23:28:57] <Lucretiel> I'd like it to be one operation
[23:29:02] <Lucretiel> I can see how to do it in two
[23:29:22] <Lucretiel> I'm in python, so I can read-update-write
[23:31:02] <rekibnikufesin> something like this will update all documents that have an 'x'
[23:31:03] <rekibnikufesin> > db.test.update( { x: { $exists: true } }, { $set: { x: 2, y: 3} }, {upsert: true, multi: true } )
[23:31:23] <rekibnikufesin> is that what you're looking for?
[23:31:43] <rekibnikufesin> or only update if x exists and is = 1?
[23:51:38] <Lucretiel> it's the opposite
[23:51:47] <Lucretiel> only update fields that don't exist