[02:36:34] <jaccob> when you use .find in the shell, there is no mention of bson, but when you use mgo, the golang driver, it explicitly marshals/unmarshals bson so I was confused
[02:37:18] <sflint> desmondHume : it is horrible to use skip and limit...unless you have to
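For context, the usual alternative to deep skip()/limit() is range-based paging on an indexed field, since skip() still has to walk every skipped document. A rough shell sketch, assuming a hypothetical "items" collection and a lastSeenId remembered from the previous page:

    // first page
    var page = db.items.find().sort({ _id: 1 }).limit(20).toArray();
    var lastSeenId = page[page.length - 1]._id;   // assumes the page isn't empty
    // next page: start after the last _id seen instead of skipping documents
    db.items.find({ _id: { $gt: lastSeenId } }).sort({ _id: 1 }).limit(20);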
[02:44:10] <jaccob> you just pass your JSON as a bson.M map
[07:12:39] <salty-horse> hey. I have a dump file I created with "mongodump -o -". How do I feed it into mongorestore? mongorestore seems to work on directories, but I only have a single file
[07:19:01] <salty-horse> joannac, nope. seems like I had to rename it to end with ".bson". this really isn't documented. had to look at the source for restore.cpp
[08:25:18] <remonvv> Trying to replicate some 2.4 vs 2.6 performance issues.
[08:26:05] <remonvv> "not work" = no performance difference
[08:26:17] <remonvv> Which might be expected given the recent changes actually.
[09:08:04] <jaccob> anyone using go? anyone use mgo? I tried: query := c.Find(bson.M{}, bson.M{"_id":0}); but I got error: too many arguments in call to c.Find
[09:25:59] <d0x> Hi, my document has a field called "status". Is there a way to aggregate the longest run of consecutive status occurrences when the collection is ordered by date?
[09:27:23] <d0x> The result should look like this: { status : "ERR", maxConsequentCount : 1337 }, { status:"SUC", maxConsequentCount : 1000000}, ...
[09:37:27] <remonvv> d0x: I cannot think of a practical way to do that within AF. Not sure if it's possible.
[09:38:43] <remonvv> Perhaps with a $cond operator you can reset a counter in a first stage, then do a second stage to $max it
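If the aggregation-framework route turns out to be impractical, one fallback is to compute the runs client-side. A rough shell sketch, assuming a hypothetical "events" collection with "status" and "date" fields:

    var maxRuns = {};                  // status -> longest consecutive run seen so far
    var current = null, run = 0;
    db.events.find({}, { status: 1 }).sort({ date: 1 }).forEach(function (doc) {
        if (doc.status === current) { run += 1; } else { current = doc.status; run = 1; }
        if (!maxRuns[current] || run > maxRuns[current]) { maxRuns[current] = run; }
    });
    printjson(maxRuns);                // e.g. { "ERR" : 1337, "SUC" : 1000000 }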
[09:51:43] <piotrkowalczuk> hi guys, i read this article: http://blog.mongodb.org/post/87200945828/6-rules-of-thumb-for-mongodb-schema-design-part-1
[09:52:07] <piotrkowalczuk> and my question is which has better performance
[09:52:26] <piotrkowalczuk> a query with $in, from the one-to-many example
[09:53:11] <piotrkowalczuk> or find({host:host._id}... from one-to-squillions
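For reference, the two patterns from that article look roughly like this in the shell (collection names and values follow the post's parts/products and hosts/logmsg examples, so treat them as placeholders):

    // one-to-many: the parent embeds an array of child _ids, fetched with $in
    var product = db.products.findOne({ catalog_number: 1234 });
    db.parts.find({ _id: { $in: product.parts } });

    // one-to-squillions: each child stores a reference back to its parent
    var host = db.hosts.findOne();
    db.logmsg.find({ host: host._id });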
[10:47:37] <d0x> Having something like this: { id : 1 },{ id : 2 },{ id : 3 },{ id : 4 },{ id : 5 },… Is it possible to group them into overlapping groups like 1-3, 2-4, 3-5, 4-n?
[10:48:49] <d0x> It's always from "$id" to "$id + 2"
[10:53:59] <blitzm> Hi, i have a problem with mongoDB connections. We are 6 developers that use the same configuration in a vagrantbox for a php project with mongoDB. Only one of our developers has the problem that he instantly gets the error "Connection terminated by foreign host". On refresh everything may work fine, or he gets the error over and over again. In the mongoDB logs I only see an entry when he got connected properly. Can someone give me a hint please how to
[10:53:59] <blitzm> debug? (The other 5 devs, with exactly the same drivers and the same connection speed on the same network, have no problems at all.)
[12:09:51] <jaccob> when I do a find() in the shell it limits the results, how can I increase it?
[12:10:26] <sorribas> jaccob you can type 'it' for more results afterwards.
[12:10:40] <rasputnik> anyone have a procedure to clone a replica set? need to back port a prod. db to a dev. environment. doesn't have to be realtime, just consistent - so i'm wondering if i can get the data from one of the secondaries and ship that somehow
[12:11:54] <rasputnik> jaccob: DBQuery.shellBatchSize = N before the query
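Both suggestions, as a quick interactive-shell sketch (collection name is a placeholder):

    db.mycoll.find();               // prints the first batch, 20 documents by default
    it                              // type "it" to page through the next batch
    DBQuery.shellBatchSize = 100;   // or raise the batch size before the query
    db.mycoll.find();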
[13:06:34] <tadeboro> Looks like some client-side magic to the rescue.
[13:06:42] <Derick> at our latest hackathon I hacked it in
[13:07:53] <tadeboro> Derick, nice. But unfortunately I'm stuck with 2.4 for now.
[13:08:35] <rasputnik> where does a replica set's configuration live? i want to copy databases only from one rs -> another, but don't want the new rs to think it's part of the old one.
[13:12:28] <rspijker> rasputnik: The configuration for a replica set is stored as a document in the system.replset collection in the local database.
[13:13:19] <rspijker> since replication isn’t on a DB level but on an instance level, you can copy over a DB without having to worry about RS confusion
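To look at that config yourself, a quick shell sketch:

    use local
    db.system.replset.findOne();   // the stored replica set config document
    rs.conf();                     // the same config via the shell helper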
[13:15:12] <rspijker> _boot: pretty sure you can’t...
[13:15:22] <rasputnik> rspijker: cool, so all three members essentially have a copy - i.e. if i took data files from one production replica member, i could in theory load all preprod replica set members with those files?
[13:16:03] <rspijker> be aware of copying data from an active system though…
[13:16:21] <rspijker> best to bring a secondary down, or lock it, before you start the copy and then bring it back in after you’re done
[13:16:40] <rspijker> otherwise there are no guarantees about the files being consistent or even valid
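One way to do that lock from the shell (sketch, run against the secondary you're about to copy from):

    db.fsyncLock();     // flush pending writes and block new ones
    // ... copy the dbpath files at the OS level ...
    db.fsyncUnlock();   // let the member resume normal operation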
[13:17:06] <_boot> is there any reason I'd see "estimated data per chunk : 91.53MiB" on a shard with chunksize set to 64?
[13:17:53] <rasputnik> rspijker: no that's fine, we'd shut down the detached replica member before shipping the files
[13:18:13] <rasputnik> basically we've split one huge db into smaller ones, just need to back port that to preprod
[13:19:07] <rasputnik> does mongo scan its dbdir for datafiles on startup, or would i somehow have to tell it about the new databases?
[13:19:28] <rspijker> _boot: chunksize isn’t an exact science. There are loads of rules that govern splitting. Could be your shardkey choice. What if there are only 2 values for the shardkey? It can’t ever split it into more than 2
[13:20:13] <rspijker> rasputnik: fairly sure it just checks the dbpath
[13:20:23] <rspijker> specifically the .ns file will be checked
[13:20:34] <rspijker> but that’s separate per db, so should be no problem if you just copy everything
[13:21:35] <rasputnik> rspijker: cool - my other plan was to create 'stub' dbs in preprod to be sure mongo knew about them, then shut it down and overwrite the dbname.N and dbname.ns files for that DB with the prod. data
[13:22:25] <_boot> hmm, the shard key is based on a date (it's the only thing we can guarantee to be in the queries with the application)
[13:22:41] <rasputnik> just trying to work out what order to bring the replicas up that way. guess stop all 3, then start up primary, then a secondary at a time.
[13:22:42] <_boot> there should be lots of different values
[13:23:25] <rspijker> _boot: was just an example. There can be many reasons, like I said. Splitting is governed by a bunch of rules
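For what it's worth, the configured chunk size lives in the config database, and splitting is best-effort, so per-chunk estimates can temporarily exceed it. A quick shell sketch:

    use config
    db.settings.findOne({ _id: "chunksize" });   // value in MB, 64 by default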
[13:23:44] <rasputnik> thanks for the second pair of eyes anyway, rspijker
[13:24:10] <rspijker> rasputnik: on preprod? Shouldn’t really matter… I would stop all 3. Then copy over the data to all 3. Once that’s done, just bring em up one at a time...
[13:24:19] <rspijker> ‘primary first’ makes little sense
[13:24:33] <rspijker> since the primary node is ‘dynamic’ anyway ;)
[13:24:33] <rasputnik> rspijker: yeah i guess first guy awake is the primary :)
[14:24:22] <klevison> nowadays, we have a SQL Server database..
[14:24:51] <klevison> and we intend to migrate to mongoDB or another noSQL database
[14:25:10] <rspijker> whether something is the ‘best choice’ for you is really not something that can be answered based on so little information. Nor on any information, really… If you really want to know, you’ll have to test several alternatives with your specific use-case and load
[14:25:13] <klevison> we need performance and easy polygon-based queries
[14:41:37] <saml> good thing is that you can implement more advanced and tricky data integration logic, not just in terms of relational database schema
[14:41:38] <klevison> saml, what is the biggest advantage of mongodb? what is its purpose?
[14:42:36] <saml> of course in the end, I had to implement structural integrity stuff inside the REST API
[14:42:47] <saml> so some might argue that it's not really a strong point
[14:43:00] <Derick> hmm, you do need to think about your data structure, but it's a lot more flexible to change later - so great for prototyping.
[14:43:15] <saml> but it's really easy to get started. and.. though i never dealt with big data, i heard mongodb doesn't even sweat with terabytes of data
[14:43:25] <remonvv2> flexibility is nice after the prototyping phase as well
[14:51:11] <remonvv> You know what's interesting: even at the most recent Amazon AWS Summit people are still claiming that SQL is the best starting point for beginning database engineers/developers.
[14:51:40] <remonvv> I struggle to find motivation for that viewpoint other than perhaps "It's the most mature database tech"
[14:57:57] <saml> in my experience, when it comes to collaboration with other developers, database schema and programming language type system are miles better than no schema and dynamic language
[14:58:24] <saml> "better" as in subjective feeling
[14:58:25] <baegle> Does anyone have any idea how this might happen and how to remove it: {"grade", "level" : { }}
[17:01:19] <jaccob> I have, here a sample of db.stoptimes.find(); { "stopid" : 2425 }, then I try: db.stoptimes.find({ "stopid": { $gt: "2400", $lt: "2500" } } ); but I don't get any results
[17:01:45] <jaccob> oh, I passed a string, instead of int
[17:05:54] <jaccob> saml, nevermind, it works, thx again
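For the record, the fix is just numeric bounds, since $gt/$lt compare strings only against other strings, never against numbers (sketch):

    db.stoptimes.find({ stopid: { $gt: 2400, $lt: 2500 } });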
[17:08:37] <Zelest> http://pastie.org/pastes/9444840/text?key=dpf8gxh93qce9niultoiiw .. that's 1 second of my mongod.log and I'm using php-fpm, which I assumed used persistent connections.. How come it does all the connect/disconnects?
[17:31:31] <Zelest> I don't mind tiny adjustments that might blow up and force me to fix it real quick.. but doing heavy debugging on a machine that serves thousands of requests each sec.. sounds nasty :P
[18:01:28] <Zelest> off-topic as hell, but yesterday I learned that nginx is rather dodgy about If-Modified-Since.. the date must exactly match Last-Modified for it to reply 304... :o
[18:10:09] <klevison> can anyone recommend me a GUI to access a local mongo database?
[18:10:15] <Zelest> Derick, is it normal for the replicaset members to connect to each other every now and then even though they're already "linked" ?
[18:31:03] <Zelest> Derick, added a file (jpeg) using the MongoBinData class.. how do I read it back? Tried echoing it "as it is" and __toString(), still no luck.. :o
[18:34:21] <Zelest> Derick, nvm... $f['content']->bin did the trick :)
[19:19:23] <mango_> How do I delete an empty database from MongoDB?
[19:19:40] <mango_> I tried db.dropDatabase(databasename) but it just emptied it,
[19:19:50] <mango_> the database name still exists when I type show dbs
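db.dropDatabase() takes no arguments and drops whichever database the shell is currently using, so the usual sequence is (shell sketch, with a placeholder name):

    use databasename      // switch to the database you want to remove
    db.dropDatabase();    // after this it should no longer appear in "show dbs"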
[21:51:16] <lkannan> Hey guys! I am reading a bunch of blog posts about the oplog in mongo and I am unclear if it is even recommended for apps to consume the oplog directly. I just feel like it is a bad idea to depend on the oplog. Just wanted to check with the experts.
[21:52:03] <stefandxm> i would say its very dumb yeap
[21:52:40] <stefandxm> and at mongodb world there were a lot of talks about the oplog / demos of how to consume it
[21:53:14] <stefandxm> imo the oplog should be an implementation detail, considered "as-is" and subject to change even in minor releases
[22:06:23] <lkannan> Yeah, these blog posts suggest you *can* build your app to depend on oplogs. I just can't reconcile with it. Are there any mongo devs who can help?
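For context, "consuming the oplog" usually means reading the capped local.oplog.rs collection on a replica set member, something like this rough shell sketch:

    use local
    db.oplog.rs.find().sort({ $natural: -1 }).limit(5);   // the five most recent operations
    // real consumers keep a tailable cursor open and resume from the last "ts" value they saw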
[22:09:36] <stefandxm> there were some crazy demos at mongodb world, where they mirrored a mongodb database with sql and what not
[22:09:55] <stefandxm> imo a freakshow but.. they were there on behalf of mongodb
[22:10:05] <stefandxm> so i guess they wont change the oplog in a minor at least
[22:12:29] <lkannan> heh. The perils of trying to understand a new NoSQL datastore at every other job.
[22:14:10] <stefandxm> luckily i am not depending on it in that way
[22:14:16] <stefandxm> but.. i was a bit flabbergasted :D