[03:57:36] <joannac> unless you're rolling your own _id in which case, you need to keep track of insertion time
[03:59:34] <Astraea5> I'm not. Thank you for the answer.
[03:59:38] <Astraea5> I see that this takes advantage of the sequential nature of default _id assignment. If I were inserting a large number of documents with different dates, like, let's say, customer birthdates, what should I read to learn how to do that?
[04:00:48] <Astraea5> I've been reading about indexes, but I don't understand when they are created. Do I create an index when setting up the database, and then continue to use it ever after? Or are they created every time I execute a find()?
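The "sequential nature of default _id" mentioned above comes from the timestamp embedded in every default ObjectId: its first 4 bytes (the first 8 hex characters) are a big-endian Unix timestamp of creation. A minimal sketch of extracting it without driver helpers (the comment about user-supplied dates answers the index question: an index is created once and maintained automatically on every write, not rebuilt per find()):

```javascript
// Extract the creation time embedded in a default MongoDB ObjectId.
// The first 8 hex characters encode seconds since the Unix epoch.
function objectIdTime(oidHex) {
  const seconds = parseInt(oidHex.slice(0, 8), 16);
  return new Date(seconds * 1000);
}

// For user-supplied dates (e.g. birthdates), store a real Date field
// and index it once at setup time; the index persists and is kept up
// to date by the server on each insert/update.
```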
[04:40:16] <ashley_w> how do i do db.fsyncUnlock() in perl or node? i can lock the db just fine from either, but can't figure out how to unlock it
[06:36:27] <svm_invictvs> I am doing a collection.insert with one object. The object is inserted, no error is reported, but it also reports that no rows are affected.
[07:26:10] <svm_invictvs> RaviTezu: There's a bug elsewhere in my code, but it looks like the indicator for number of documents affected does not get set to "1" when inserting a single document.
[07:28:13] <RaviTezu> svm_invictvs: glad that helped
[07:28:26] <svm_invictvs> Honestly, I don't know why I just ignored the obvious approach.
[07:32:37] <RaviTezu> btw, in case you're not aware of: findOne will return you a single doc.
[07:44:35] <Garo_> I'm about to build a new replica set for two big collections in two big databases (one collection per database as it eases locking issues). I currently have one replica set behind mongos with config servers. I have the option to deploy this new replica set into the same cluster, or deploy a new replica set without clustering (mongos and cfg servers), or deploy a new cluster with own mongos processes and own config servers. ...
[07:44:41] <Garo_> ... We haven't used actual sharding capabilities (splitting a collection/database between multiple replica sets) as we've had problems with it in the past and we have done fine without it since then. Any opinions which is the best way?
[09:59:05] <durre> did anyone read this article and have some comments? http://www.sarahmei.com/blog/2013/11/11/why-you-should-never-use-mongodb/ ... so the bottom line was "mongodb sux for social applications"
[10:03:24] <Nodex> it doesn't claim to be a silver bullet, all these hipsters who think it's trendy to bash the DB crack me up because they think it's a drop in replacement for an RDBMS
[10:04:34] <Nodex> [09:24:38] <Nodex> shocking, yet another person who cannot grasp how to model data properly
[10:05:22] <x0f> Nodex, the title is misleading, i think it's a controversy-catcher. she actually has some valid arguments and even points out strong suits for MongoDB, but also its weaknesses.
[10:06:06] <Nodex> as I said, she seems to base it all on the premise that mongodb is a silver bullet drop in replacement for an RDBMS
[10:06:28] <Nodex> it has never claimed this, anyone who thinks they can develop a social network using one type of database is crazy
[10:07:04] <x0f> Nodex, as many did, when it was hyped. so who's to blame? i guess your "local" community idol, then.
[10:14:05] <Nodex> x0f : it's pretty obvious that it's not suitable for all kinds of apps, if people can't see that then they don't deserve to call themselves developers
[10:15:50] <x0f> also, thinking that you can just copy-paste your RDBMS model into MongoDB and be done is so mind-boggling stupid, i don't even know where to start.
[10:16:38] <x0f> yeah, that works for your 1 post, 3 comments a day blog, but that's it.
[10:19:40] <durre> too bad I'm working at a "social platform" with mongodb :)
[10:50:24] <Nodex> durre : You should look at other options for graph like data
[10:57:32] <MaxV> hey, tad confused by something in the docs, wondering if it might be a mistake... http://docs.mongodb.org/manual/core/replica-set-oplog/#oplog-size
[10:58:26] <MaxV> The bullet point about 64-bit Linux etc default sizing of the oplog states "If this amount is smaller than a gigabyte, then MongoDB allocates 1 gigabyte of space."
[11:00:56] <MaxV> so it allocates more than the free disk space available? that sounds very wrong..
[11:02:07] <MaxV> oh wait, nevermind, it makes sense on the 5th reading
[11:08:51] <Mez> I'm looking to do something like db.sessions.find({expires: {$gte: <<timestamp>>}}); - but putting in the timestamp. Is there a way to do this ? (with Mongo, not with something external)
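For a filter like the one Mez describes, the timestamp can be computed inside the shell or driver rather than templated in from outside; in the mongo shell the inline form is db.sessions.find({expires: {$gte: new Date()}}). A driver-side sketch (assuming `expires` is stored as a Date):

```javascript
// Build the "not yet expired" filter with the current time computed
// at query-construction time, no external templating needed.
function unexpiredSessionsQuery(now = new Date()) {
  return { expires: { $gte: now } };
}
```

The returned object can be passed straight to find() in the node driver.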
[11:59:06] <RaviTezu> Hi, use local, db.oplog.rs.find().count() is showing a "768000" and a newly added rs member is giving "1".. and the dbs are having less size than the dbs on the primary.
[11:59:16] <RaviTezu> any help? what went wrong here?
[11:59:40] <RaviTezu> "768000" on primary and old secondaries
[12:00:05] <RaviTezu> the dbs are having less size than the dbs on the primary.
[12:17:04] <kali> if mongodb is to nosql what mysql is to sql... i'm not sure that so bad
[12:17:11] <kali> i wonder if it's not flattering actually
[12:21:13] <dawik> does anyone know if the C driver/client api has changed (if so how much) between 0.6 and 0.8 (current)?
[12:36:52] <RaviTezu> Initial sync is not copying all the db data from primary node. any help? and i see oplog.rs count on primary is a big number and oplog.rs count is 1 on new node.
[12:37:15] <RaviTezu> can i take this as ..oplog has been rotated?
[13:21:57] <dawik> algernon: thanks, yeah i found the older docs to it and had to make my own makeshift Makefile. now it seems to build \o/
[13:24:04] <RaviTezu> kali: thanks :) and just a note, i have old secondary node on this replica set.. and this is having same sizes as the primary node.
[13:24:04] <RaviTezu> This old secondary was added to the replica set using rs.config() along with the current primary.
[13:24:35] <RaviTezu> it's been like 1 month... and now I'm adding this new node to the replica set.
[13:24:47] <kali> RaviTezu: it makes sense, if it is older, it is likely to have a fragmentation ratio similar to the primary
[13:25:11] <kali> RaviTezu: it's like C14 dating :P
[13:25:24] <tomasso> why could that be that when i run this query: db.stops.find( { 'location' : { $near : { $geometry : { "type" : "Point", "coordinates" : [ -58.417016, -34.603585 ] } , $maxDistance : 50000 } } } ) it doesn't return any results, nor give me any error message? The index does exist in the collection, and it contains a location field, with the GeoJSON Point format.
[13:34:25] <taf2_> should i be adding the replica set to the shard data servers or the config servers?
[13:34:44] <kali> tomasso: "location" can not work, that's for sure
[13:35:28] <tomasso> also the index does exist, shouldn't it have returned an error? i will create another, not for location, but for stop.info.location
[13:35:40] <kali> tomasso: also, the index must be on "stop.info.location" not this
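With GeoJSON points, $maxDistance is in meters on a sphere, so the 50000 above is a 50 km radius. A rough client-side sanity check of what the server computes (a spherical-earth haversine sketch, not MongoDB's exact geometry; coordinates taken from the query above):

```javascript
// Great-circle distance in meters between two GeoJSON-style
// [longitude, latitude] pairs, spherical-earth approximation.
function haversineMeters([lon1, lat1], [lon2, lat2]) {
  const R = 6371000; // mean earth radius in meters
  const rad = (d) => (d * Math.PI) / 180;
  const dLat = rad(lat2 - lat1);
  const dLon = rad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(rad(lat1)) * Math.cos(rad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Center point from the query above; a stop matches the $near filter
// when haversineMeters(center, stopCoords) <= 50000.
const center = [-58.417016, -34.603585];
```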
[13:37:04] <dawik> unfortunately im not in a position to change the dependencies, its part of a larger system. but out of curiosity, which one is that?
[14:33:15] <pebble_> Yellow. I have a setup on EC2 with EBS and replica sets on different hosts. I'm thinking of moving the journal and oplog to a different EBS drive on the primary host
[14:36:58] <pebble_> I suppose what I want to ask is: is there going to be overhead with having the db and the journal/oplog on different disks on EBS (which talks over the network)
[14:50:55] <taf2_> just added a replica to an existing sharded server… watching the logs the new replica is in an almost constant state of startup2
[15:10:30] <Derick> you need to check the HTTP headers
[15:11:58] <taf2_> the biggest issue i had was when i reconfigured to include the new replica it had not finished updating the new replica and i made the mistake of restarting the current primary… this caused it to become a secondary while the new replica was in startup2 phase… eventually to fix this i had to reconfigure to remove the startup2 replica and now the original replica is the primary and the app is happy
[15:19:36] <kfb4> I'm trying to avoid resizing issues since i've got a very large dict in the docs
[15:21:35] <kfb4> basically, i'm keeping a top 500 map of {key: count} and I don't know if it's better to overwrite the whole map at once (i.e. {$set: {top_500: <value>}}) or to update it like {$inc: {"top_500.foo": 3}, $unset: {"top_500.bar": 1}, $set: {"top_500.baz": 50}}.
[15:21:48] <kfb4> Is one of those going to be substantially better or worse than the other?
[15:22:23] <kfb4> Cause i don't know how document sizing/resizing will treat constantly dropping/adding fields vs just overwriting as a block.
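For reference, both strategies kfb4 describes are single update documents: multiple operators sit side by side in one object, and dotted paths must be quoted. A sketch of the two shapes (field names from the discussion, values illustrative):

```javascript
// Strategy 1: overwrite the whole map in one $set -- the document is
// rewritten as a block, so its on-disk size changes all at once.
function overwriteUpdate(top500) {
  return { $set: { top_500: top500 } };
}

// Strategy 2: targeted per-key changes -- one update document, with
// $inc/$unset/$set operators combined; individual fields are
// added and dropped in place.
const incrementalUpdate = {
  $inc: { "top_500.foo": 3 },
  $unset: { "top_500.bar": 1 },
  $set: { "top_500.baz": 50 },
};
```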
[15:28:56] <kfb4> anyone have thoughts on it? i've seen disk usage spike and i'm trying to figure out if this is the root cause
[16:39:07] <rafaelhbarros> I'm gonna say something very wrong, please forgive me.
[17:11:33] <Derick> you only need to add the port if it's not the default 27017
[17:16:02] <davi1015> I am looking to scan for chained replicas in my clusters (I have a lot). However, looking at the docs, it doesn't seem to list whether syncFrom would be an added field on chained secondaries or how you would go about determining if you are currently chained.
[18:34:47] <chaotic_good> when slave is forced to disk?
[18:53:05] <gbhatnag> hi all - any reason why saving a new model (using mongoose's async model.save()) in a recursive method would not work? I'm having an issue where it's called thousands of times and nothing gets saved. When used outside of the recursive function with a few nodes, it works fine...
[18:55:57] <gbhatnag> here's the code: http://pastebin.com/L3vFkdZX
[19:42:32] <odin_mighty> would like to know how do i start off with learning mongodb
[19:42:38] <odin_mighty> and this whole noSQL thingy
[20:19:55] <steckerhalter> I always get "connection closed" using the native nodejs driver. I also tried the suggested tip from the FAQ with the timeout but it still happens quite often. ideas?
[20:20:22] <steckerhalter> next step would probably be to try mongoose
[20:29:42] <stu_> Could anyone help me with http://stackoverflow.com/questions/19937499/mongodb-bean-compositedata-retrival ?
[20:48:12] <redondo> hi all! are there important differences between mongodb's document-oriented model and redis's key-value store approach?
[20:51:57] <kali> redondo: actually, either this article has been changed in the last few months, or i am confusing it with something else
[20:52:29] <kali> redondo: it's less good than it was in my mind
[20:53:17] <redondo> I ask again: if we say that "documents" are BSON (binary JSON), doesn't that imply that "documents" are key-value data structures?
[20:54:32] <kali> the point is what the database is aware of, and how the data can be accessed
[20:54:56] <jyee> redondo: i guess it depends on how you define key-value data structure.
[20:56:26] <jyee> i mean, you could argue that for the vast majority of schemas, mysql is a key-value data store on a table level
[20:56:41] <redondo> jyee, my definition does not go beyond what a JSON object is, which I cannot see as really different from python dicts, or any key-value data structure, like mongodb's "document". Am I wrong?
[20:57:40] <jyee> redondo: you're not wrong, but saying that redis and mongo are both key-value is a bit misleading for the question
[20:58:43] <redondo> jyee, could you give me light on the differences between both models?
[20:58:55] <kali> redondo: a document is similar in role to a record in a sql database, and fields are similar to column. a python User class would be mapped to a collection, an instance to one single document
[21:00:16] <kali> redondo: in a k/v store, the granularity is different: your user store would span a number of k/v tables; each k/v atom would be a property value
[21:00:46] <jyee> redondo: i haven't used redis, but as far as i know, it stores string values and variations thereof, so one major difference (again from my very limited exposure to redis) is that it can't do multiple levels of nested objects (or key-value pairs).
[21:01:02] <jyee> but i'd be happy to know if i'm wrong
[21:01:25] <kali> jyee: nope, that's about right. or else it is a blob for the database
[21:01:57] <kali> nothing prevents storing a complex json or json-like structure in a redis value, but for redis it's opaque
[21:02:44] <kali> then of course, redis is a bit more clever than a brute k/v store, with a few specific data structures, so it exceeds the strict k/v store scope
[21:03:20] <kali> and mongodb exceeds a strict document store scope too, but in different planes or directions
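The granularity point kali makes can be made concrete: a document store sees and can query the nested structure, while a plain k/v store needs it flattened into opaque entries. A sketch of that flattening (illustrative only, not any particular Redis mapping):

```javascript
// Flatten a nested document into dotted key/value pairs -- the shape a
// plain k/v store would need. The nesting a document store can query
// directly ({"user.address.city": "x"}) is lost in the flat form.
function flatten(doc, prefix = "") {
  const out = {};
  for (const [k, v] of Object.entries(doc)) {
    const key = prefix ? `${prefix}.${k}` : k;
    if (v !== null && typeof v === "object" && !Array.isArray(v)) {
      Object.assign(out, flatten(v, key));
    } else {
      out[key] = v;
    }
  }
  return out;
}
```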
[21:07:48] <kali> redondo: i'm sorry, i won't get into this kind of discussion on irc. please check out the docs, and look at the mongodb api and redis api. the differences are obvious
[21:10:05] <redondo> Sure the differences are obvious, for those who can see them. I just wanted to have a little background to continue diving into docs and apis.
[21:13:08] <erg> hey, i have a mongoose query with find that i copy/pasted into another file. in the first case, it's returning objects, in the second it's returning a doc and i have to call toObject() on each result. it's confusing me why this could be the case. any idea?
[22:05:53] <padan> rather new to map/reduce stuff. thought something like this would work, but it returns nothing... says there were 0 emits. Can someone take a peek? JS for map/reduce: http://pastebin.com/39NKk6Y3 ... class I am storing: http://pastebin.com/GVfMZbX7 ... I can use normal find methods and find documents that meet the criteria
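"0 emits" usually means the map function never called emit for any document, often because it references a field path that doesn't match what is actually stored (including case, e.g. a C# class serialized with different casing). A tiny in-memory sketch of the emit/reduce flow, with hypothetical field names:

```javascript
// Minimal in-memory map/reduce: map runs with each document as `this`
// (as in MongoDB) and emits (key, value) pairs; reduce combines the
// values collected per key.
function mapReduce(docs, map, reduce) {
  const groups = new Map();
  const emit = (key, value) => {
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key).push(value);
  };
  for (const doc of docs) map.call(doc, emit);
  const out = {};
  for (const [key, values] of groups) out[key] = reduce(key, values);
  return out;
}

// If map checks this.Status but the documents store "status", nothing
// is ever emitted and the result is empty -- the classic "0 emits".
```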
[22:09:35] <chaotic_good> and simple data structures
[22:11:29] <padan> Unless there is a really good reason to not use the simple data structures I've got in place, I'd prefer to not rearchitect the solution.
[22:11:46] <padan> kali - odd, i'm not finding one now. let me look at that. i know it was there not long ago...
[22:11:52] <kali> i think you can safely ignore chaotic_good's comments :)
[22:12:09] <kali> padan: well, that would explain a few things, right
[22:12:36] <padan> i suppose i should be happy that mongo doesn't make things up
[22:28:12] <chaotic_good> use simple data structures and algos
[23:08:28] <fredfred> all potential bugs should be reported in jira?
[23:29:20] <dangayle> Hi. I have a friend who tells me he's having issues with duplicate content in his mongodb. Are there some low-hanging fruit a dev should be on the lookout for?
[23:30:15] <dangayle> I run our local user group, so I get to field all these questions :)
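On the duplicate-content question: without a server-side constraint, application-level retries are a common source of duplicates, and a unique index (one-time setup) makes the server reject them. A sketch, assuming a hypothetical `url` field identifies a document; the index command is shown as a comment, plus a client-side dedupe for data already fetched:

```javascript
// One-time setup in the mongo shell (assumption: "url" is the
// natural key for these documents):
//   db.articles.ensureIndex({url: 1}, {unique: true})
//
// Client-side dedupe of already-fetched documents by that key,
// keeping the first occurrence of each value.
function dedupeBy(docs, key) {
  const seen = new Set();
  return docs.filter((doc) => {
    if (seen.has(doc[key])) return false;
    seen.add(doc[key]);
    return true;
  });
}
```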