PMXBOT Log file Viewer

#mongodb logs for Tuesday the 12th of November, 2013

[03:44:58] <Astraea5> I'm trying to find() the most recent N documents in a collection, but I'm stumped, and my searches are coming up bupkis.
[03:45:13] <Astraea5> Can anyone point me in the right direction?
[03:56:46] <joannac> db.coll.find().sort({_id:=1}).limit(N)
[03:56:58] <joannac> oops, without the =
[03:57:05] <joannac> db.coll.find().sort({_id:-1}).limit(N)
[03:57:36] <joannac> unless you're rolling your own _id in which case, you need to keep track of insertion time
[03:59:34] <Astraea5> I'm not. Thank you for the answer.
[03:59:38] <Astraea5> I see that this takes advantage of the sequential nature of default _id assignment. If I were inserting a large number of documents with different dates, like, let's say, customer birthdates, what should I read to learn how to do that?
[04:00:48] <Astraea5> I've been reading about indexes, but I don't understand when they are created. Do I create an index when setting up the database, and then continue to use it ever after? Or are they created every time I execute a find()?
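For context on Astraea5's question: indexes are created once (for example at setup time) and persist with the collection; every later find() that can use them does so automatically, with no per-query step. A minimal sketch in the 2.4-era mongo shell, using a hypothetical customers collection and birthdate field:

    // Run once, from the shell or a setup script; it's a no-op if the
    // index already exists. The index is then maintained on every write.
    db.customers.ensureIndex({ birthdate: 1 })

    // Later queries pick the index up automatically:
    db.customers.find().sort({ birthdate: -1 }).limit(10)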
[04:40:16] <ashley_w> how do i do db.fsyncUnlock() in perl or node? i can lock the db just fine from either, but can't figure out how to unlock it
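On ashley_w's question: in this era the shell's db.fsyncUnlock() is not a real command but a query against the special $cmd.sys.unlock pseudo-collection on the admin database, so a driver can mimic it with a plain findOne. A sketch for the contemporary node driver (the exact shape of the result document is an assumption):

    // Hypothetical sketch: `db` is an already-open node-driver Db handle.
    db.db('admin').collection('$cmd.sys.unlock').findOne(function (err, doc) {
      if (err) throw err;
      console.log(doc); // expect something like { ok: 1, info: "unlock completed" }
    });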
[06:36:27] <svm_invictvs> I am doing a collection.insert with one object. The object is inserted, no error is reported, but it also reports that no rows are affected.
[06:36:35] <svm_invictvs> er, s/rows/documents
[07:02:03] <RaviTezu> svm_invictvs: do you see the inserted docs in the collection?
[07:12:30] <svm_invictvs> RaviTezu: That would be a good start at diagnosing the problem, eh?
[07:21:20] <svm_invictvs> RaviTezu: I call findOne after I insert the object, but I only get the id back out
[07:25:42] <svm_invictvs> RaviTezu: Thanks, dude.
[07:26:10] <svm_invictvs> RaviTezu: There's a bug elsewhere in my code, but it looks like the indicator for number of documents affected does not get set to "1" when inserting a single document.
[07:28:13] <RaviTezu> svm_invictvs: glad that helped
[07:28:26] <svm_invictvs> Honestly, I don't know why I just ignored the obvious approach.
[07:32:37] <RaviTezu> btw, in case you're not aware: findOne will return you a single doc.
[07:33:59] <svm_invictvs> yeah
[07:34:08] <svm_invictvs> Which it was doing
[07:34:39] <svm_invictvs> I was basically passing it the { "_id" : <the_id_of_the_object_i_just_inserted> }
[07:34:53] <svm_invictvs> But all I was getting back was { "_id" : "the_id"}
[07:35:27] <svm_invictvs> But I just noticed an error in my code is actually wiping all the data out of the document just before inserting.
[07:39:23] <RaviTezu> so, your docs have an "_id" field specified?
[07:39:49] <svm_invictvs> Yeah
[07:39:51] <svm_invictvs> I found the problem
[07:39:57] <RaviTezu> mongodb creates it automatically, if you don't have an "_id" field in the doc.
[07:39:58] <RaviTezu> ok
[07:40:57] <svm_invictvs> Yeah, I know it does
[07:41:05] <svm_invictvs> Is it documented how it generates it?
[07:41:11] <svm_invictvs> Will it always be sequential or something?
[07:42:44] <svm_invictvs> RaviTezu: Let the record show that I probably was high when I wrote this code, man it makes no sense.
[07:43:51] <RaviTezu> it generates it using the timestamp, host name, etc.
[07:44:09] <RaviTezu> and http://docs.mongodb.org/manual/reference/object-id/
[07:44:15] <RaviTezu> may help you
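The linked page describes the default ObjectId layout: a 4-byte timestamp, a 3-byte machine id, a 2-byte process id, and a 3-byte counter. So ids are only roughly increasing: strictly sequential from one mongod, approximate across machines. A quick shell illustration:

    var id = ObjectId();
    id.getTimestamp()   // the ISODate packed into the first 4 bytes
    // two ids generated in the same second by one process differ
    // only in the trailing counter bytes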
[07:44:35] <Garo_> I'm about to build a new replica set for two big collections in two big databases (one collection per database as it eases locking issues). I currently have one replica set behind mongos with config servers. I have the option to deploy this new replica set into the same cluster, or deploy a new replica set without clustering (mongos and cfg servers), or deploy a new cluster with its own mongos processes and config servers. ...
[07:44:41] <Garo_> ... We haven't used actual sharding capabilities (splitting a collection/database between multiple replica sets), as we've had problems with it in the past and have done fine without it since then. Any opinions on which is the best way?
[09:11:31] <themapplz> anybody awake?
[09:23:55] <Nodex> http://www.sarahmei.com/blog/2013/11/11/why-you-should-never-use-mongodb/
[09:25:53] <Nodex> shocking, yet another person who cannot grasp how to model data properly
[09:50:55] <BurtyB> Nodex, lol "don't ever use it because we got it so wrong" ... for my use it's working well :)
[09:51:40] <RaviTezu> +1
[09:59:05] <durre> did anyone read this article and have some comments? http://www.sarahmei.com/blog/2013/11/11/why-you-should-never-use-mongodb/ ... so the bottom line was "mongodb sux for social applications"
[10:03:24] <Nodex> it doesn't claim to be a silver bullet, all these hipsters who think it's trendy to bash the DB crack me up because they think it's a drop in replacement for an RDBMS
[10:04:21] <Nodex> haha
[10:04:30] <Nodex> [09:22:40] <Nodex> http://www.sarahmei.com/blog/2013/11/11/why-you-should-never-use-mongodb/
[10:04:34] <Nodex> [09:24:38] <Nodex> shocking, yet another person who cannot grasp how to model data properly
[10:05:22] <x0f> Nodex, the title is misleading, I think it's a controversy-catcher. She actually has some valid arguments and even points out strong suits for MongoDB, but also its weaknesses.
[10:06:06] <Nodex> as I said, she seems to base it all on the premise that mongodb is a silver bullet drop in replacement for an RDBMS
[10:06:28] <Nodex> it has never claimed this, anyone who thinks they can develop a social network using one type of database is crazy
[10:07:04] <x0f> Nodex, as many did when it hyped. so who's to blame? i guess your "local" community idol then.
[10:14:05] <Nodex> x0f : it's pretty obvious that it's not suitable for all kinds of apps, if people can't see that then they don't deserve to call themselves developers
[10:14:05] <Nodex> BurtyB : ++
[10:14:57] <x0f> Nodex, yeah well, sadly development decisions aren't always developer-driven ;)
[10:15:27] <Nodex> they should be
[10:15:50] <x0f> also, thinking that you can just copy-paste your RDBMS model into MongoDB and be done is so mind-boggling stupid, i don't even know where to start.
[10:16:38] <x0f> yeah, that works for your 1 post, 3 comments a day blog, but that's it.
[10:19:40] <durre> too bad I'm working at a "social platform" with mongodb :)
[10:50:24] <Nodex> durre : You should look at other options for graph like data
[10:57:32] <MaxV> hey, tad confused by something in the docs, wondering if it might be a mistake... http://docs.mongodb.org/manual/core/replica-set-oplog/#oplog-size
[10:58:26] <MaxV> The bullet point about 64-bit Linux etc default sizing of the oplog states "If this amount is smaller than a gigabyte, then MongoDB allocates 1 gigabyte of space."
[11:00:56] <MaxV> so it allocates more than the free disk space available? that sounds very wrong..
[11:02:07] <MaxV> oh wait, nevermind, it makes sense on the 5th reading
[11:08:51] <Mez> I'm looking to do something like db.sessions.find({expires: {$gte: <<timestamp>>}}); - but putting in the timestamp. Is there a way to do this ? (with Mongo, not with something external)
[11:08:57] <Mez> like MySQL's NOW()
[11:09:17] <Nodex> no
[11:19:25] <durre> Mez: db.sessions.find({expires: {$gte: new Date().getTime()}})
[11:24:38] <RaviTezu> Hi, I have added a node to existing replica set(which has 2 normal nodes and 1 arbiter) using rs.add("server_name:port").
[11:24:39] <RaviTezu> But, when I do show dbs on the new rs member.. it is showing a smaller size (7G) for a db.. which is actually 20G on the old members
[11:24:42] <RaviTezu> any help?
[11:27:10] <Mez> durre: that doesn't seem to work :( (I'm looking for something like 1384253597 - aka epoch time)
[11:29:50] <durre> Mez: and if you run only new Date().getTime() in the mongo-shell, what do you get?
[11:29:50] <Nodex> new Date().getTime()/1000
[11:30:08] <Nodex> you will need to set it as a variable I think
[11:30:12] <Mez> yeah - just spotted that :)
[11:30:21] <Mez> > db.sessions.count({expires: {$gte: new Date().getTime()/1000}, data: /s:4:"user";s:4:"user";s:\d:"\d+";/ });
[11:30:27] <Mez> that works fine for what I want :)
[11:30:41] <Mez> (just making a counter of "current users logged in")
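For the record, there is no server-side NOW() in MongoDB: the shell (or driver) evaluates the JavaScript expression client-side, and the server only ever sees the resulting number. A sketch for epoch-seconds fields like Mez's:

    // Computed in the client before the query is sent.
    var nowSecs = Math.floor(new Date().getTime() / 1000);
    db.sessions.count({ expires: { $gte: nowSecs } });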
[11:30:53] <RaviTezu> can someone help?
[11:59:06] <RaviTezu> Hi, use local, db.oplog.rs.find().count() is showing "768000" and a newly added rs member is giving "1".. and the dbs are smaller than the dbs on the primary.
[11:59:16] <RaviTezu> any help? what went wrong here?
[11:59:40] <RaviTezu> "768000" on primary and old secondaries
[12:00:05] <RaviTezu> the dbs are smaller than the dbs on the primary.
[12:01:37] <tiller> Hi!
[12:17:04] <kali> if mongodb is to nosql what mysql is to sql... i'm not sure that's so bad
[12:17:11] <kali> i wonder if it's not flattering actually
[12:21:13] <dawik> does anyone know if the C driver/client api has changed (if so how much) between 0.6 and 0.8 (current)?
[12:36:52] <RaviTezu> Initial sync is not copying all the db data from primary node. any help? and i see oplog.rs count on primary is a big number and oplog.rs count is 1 on new node.
[12:37:15] <RaviTezu> can i take this as.. the oplog has been rotated?
[12:37:26] <RaviTezu> can someone help?
[12:41:14] <kali> RaviTezu: i'm not sure this is not nominal
[12:42:20] <kali> RaviTezu: no, you're right.
[12:42:55] <kali> RaviTezu: well, i'm not sure. are there other indications replication is not working? what does the log say?
[12:49:35] <RaviTezu> kali: logs from the new node : [rsSync] replSet initial sync done
[12:49:59] <RaviTezu> [rsSyncNotifier] replset setting oplog notifier to hostname:27017
[12:50:10] <RaviTezu> replSet SECONDARY
[12:51:20] <RaviTezu> but, I have 3 dbs on the primary, one of which is named "people".. which shows as 20G when i do show dbs.
[12:51:41] <RaviTezu> but on this secondary, it is just 456MB only
[12:51:49] <RaviTezu> kali: ^^
[13:08:18] <RaviTezu> kali: any help?
[13:09:54] <kali> RaviTezu: is there something on the primary that is missing on the secondary? file size can be very deceptive
[13:11:01] <algernon> dawik: yes, the API did change
[13:11:22] <RaviTezu> I see all the dbs and collections got copied to the new one. Only difference i find is... file sizes
[13:11:26] <RaviTezu> in db path
[13:11:54] <kali> RaviTezu: document counts in collections are about right?
[13:11:57] <RaviTezu> when i do a du -sh ... the people db is 20G on primary and 456M on new secondary
[13:12:07] <RaviTezu> kali: one sec checking
[13:14:24] <RaviTezu> kali: awesome! the count is same
[13:14:45] <RaviTezu> how come i have different sizes for the db?
[13:15:30] <kali> RaviTezu: http://docs.mongodb.org/manual/faq/storage/#why-are-the-files-in-my-data-directory-larger-than-the-data-in-my-database
[13:21:57] <dawik> algernon: thanks, yeah i found the older docs for it and had to make my own makeshift Makefile. now it seems to build \o/
[13:24:04] <RaviTezu> kali: thanks :) and just a note, i have an old secondary node on this replica set.. and it has the same sizes as the primary node.
[13:24:04] <RaviTezu> This old secondary was added to the replica set using rs.config() along with the current primary.
[13:24:35] <RaviTezu> it's been like 1 month ... and now I'm adding this new node to the replica set.
[13:24:47] <kali> RaviTezu: it makes sense. if it is older, it is likely to have a fragmentation ratio similar to the primary
[13:25:11] <kali> RaviTezu: it's like C14 dating :P
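The FAQ kali links explains the gap: show dbs and du report allocated file size, which includes preallocation, padding, and fragmentation, while a freshly synced secondary writes its documents back compactly. To confirm the data itself matches, compare logical sizes rather than files, e.g. (a sketch, using the "people" db from the discussion):

    // Run on both members: dataSize should agree closely even when
    // fileSize differs, since fileSize counts preallocated space.
    db.getSiblingDB("people").stats()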
[13:25:24] <tomasso> why could that be that when i run this query: db.stops.find( { 'location' : { $near : { $geometry : { "type" : "Point", "coordinates" : [ -58.417016, -34.603585 ] } , $maxDistance : 50000 } } } ) it doesn't return any results, nor give me any error message? The index does exist in the collection, and it contains a location field, with the GeoJSON Point format.
[13:25:45] <tomasso> im using mongo 2.4.6
[13:26:01] <tomasso> and it supports 2dsphere indexes
[13:26:09] <kali> tomasso: is the index 2d or 2dsphere?
[13:26:10] <kali> ok.
[13:26:19] <tomasso> kali: 2dsphere
[13:26:36] <kali> tomasso: what happens if you ditch the maxDistance?
[13:27:20] <tomasso> kali: the same
[13:27:24] <kali> tomasso: can you show me an example document this time? :)
[13:27:36] <tomasso> sure sure..
[13:28:08] <taf2_> hello… i have a sharded cluster with no replica sets… what's the best way to add a replica set?
[13:28:26] <algernon> dawik: (if you need a C library with a stable API, I can recommend mine ;)
[13:32:17] <taf2_> i currently start my monogdb with --shardsvr
[13:32:28] <tomasso> kali: this is a sample json
[13:32:30] <tomasso> http://pastebin.com/ckVWfK4T
[13:32:47] <tomasso> one single document
[13:33:01] <taf2_> to add a replica i just tried adding --repliSet rs0 and now i see
[13:33:02] <taf2_> replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
[13:33:22] <taf2_> s/--repliSet rs0/--replSet rs0
[13:33:54] <kali> tomasso: try to replace "location" by "stop.info.location" in the query?
[13:34:11] <tomasso> mm i tried that..
[13:34:25] <taf2_> should i be adding the replica set to the shard data servers or the config servers?
[13:34:44] <kali> tomasso: "location" can not work, that's for sure
[13:35:28] <tomasso> also, the index does exist, shouldn't it have returned an error? i will create another, not for location, but for stop.info.location
[13:35:40] <kali> tomasso: also, the index must be on "stop.info.location" not this
[13:37:04] <dawik> unfortunately im not in a position to change the dependencies, it's part of a larger system. but out of curiosity, which one is that?
[13:37:14] <dawik> --> algernon
[13:38:47] <algernon> dawik: https://github.com/algernon/libmongo-client
[13:39:13] <tomasso> db.stops.ensureIndex({'stop.info.location': '2dsphere'})
[13:39:13] <tomasso> ,
[13:39:25] <tomasso> i get "Can't extract geo keys from object, malformed geometry?:{ type: \"Point\", coordinates: [ \"-58.5418391\", \"-34.646114\" ] }"
[13:41:23] <kali> tomasso: coordinates are strings, they must be float
[13:41:34] <kali> tomasso: we're getting somewhere :)
[13:42:31] <RaviTezu> can someone give me a query to get the last doc in a collection?
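(RaviTezu's question is the one joannac answered at the top of the log, relying on the roughly increasing default _id:)

    db.coll.find().sort({ _id: -1 }).limit(1)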
[13:43:46] <tomasso> OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
[13:44:31] <tomasso> they should point that out in the tutorial.. also i had doubts about that location field, since the field is nested
[13:45:00] <kali> tomasso: i think that's the last glitch, i've tested it here
[13:45:17] <tomasso> it should work by removing the quotes, right?
[13:45:32] <kali> tomasso: http://uu.zoy.fr/p/puyusubo#clef=ksdjccybthsvesqn
[13:45:36] <kali> tomasso: yes
[13:45:38] <tomasso> let me check
[13:46:15] <tomasso> i scratched my head so much..
[13:46:18] <kali> tomasso: sorry i didn't pay enough attention the other day
[13:46:26] <tomasso> Np
[13:46:50] <tomasso> thank you, so much
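Putting kali's two fixes together, a working sketch of tomasso's setup: both the index and the query must use the full dotted path, and the GeoJSON coordinates must be numbers, not strings:

    db.stops.ensureIndex({ "stop.info.location": "2dsphere" })
    db.stops.insert({ stop: { info: { location: {
        type: "Point", coordinates: [ -58.5418391, -34.646114 ] } } } })
    db.stops.find({ "stop.info.location": { $near: {
        $geometry: { type: "Point", coordinates: [ -58.417016, -34.603585 ] },
        $maxDistance: 50000 } } })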
[13:51:56] <RaviTezu> I ran "db.setSlaveOk()" on one of the secondary nodes; this allows me to check the docs, doc counts... on the db..
[13:52:21] <RaviTezu> is this a permanent setting? if yes, how can i revert it?
[13:52:28] <RaviTezu> any help? :)
[13:52:44] <kali> RaviTezu: nope, it dies when you close the mongo shell
[13:53:10] <kali> RaviTezu: it's local to your shell
[13:53:50] <RaviTezu> thanks kali
[14:33:15] <pebble_> Yellow. I have a setup on EC2 with EBS and replica sets on different hosts. I'm thinking of moving the journal and oplog to a different EBS drive on the primary host
[14:36:58] <pebble_> I suppose what I want to ask is, is there going to be overhead with having the db and the journal/oplog on different disks on EBS (which talks over the network)
[14:50:55] <taf2_> just added a replica to an existing sharded server… watching the logs the new replica is in an almost constant state of startup2
[14:51:05] <taf2_> printing stuff like:
[14:51:06] <taf2_> Index: (1/3) External Sort Progress: 4200100/10936610 38%
[14:51:16] <taf2_> and then
[14:51:17] <taf2_> Index: (2/3) BTree Bottom Up Progress: 8414500/10936564 76%
[14:51:33] <taf2_> but then it keeps cycling back and sorting everything over again
[14:52:07] <taf2_> and it stays in the startup2 phase, or now the recovering phase because i stopped it after it cycled about 3-4 times
[14:52:09] <taf2_> any ideas?
[14:53:30] <taf2_> is it trying to get in sync and can't?
[14:57:13] <taf2_> back to Index: (1/3) External Sort Progress: 1340000/10936610 12%
[15:03:25] <svector> is it advisable to use mongo with GORM for grails?
[15:03:40] <svector> i've been using it with mysql
[15:05:00] <kevcha> hi guys, i'm having encoding trouble with an app running mongodb 2.4.8 and php 5.4
[15:05:13] <kevcha> all was working well until today
[15:05:26] <Derick> "with encoding" means?
[15:05:44] <kali> regreddit: are you sure it's the same index each time? it iterates over them
[15:05:52] <kali> nope. not regreddit
[15:05:53] <kevcha> i'm trying to insert a string from php, and mongo returns an exception code 12: non utf-8 string
[15:06:00] <kali> taf2_: are you sure it's the same index each time? it iterates over them
[15:06:06] <kevcha> but php's mb_detect_encoding() function returns utf-8
[15:06:20] <Derick> kevcha: mb_detect_encoding isn't always 100% accurate
[15:06:22] <kevcha> i'm on mac os mavericks, and cannot understand why this is happening
[15:06:27] <Nodex> ^^ buggy
[15:06:38] <Derick> you can never reliably detect a string's encoding
[15:07:06] <kevcha> how can i be sure to get a utf-8 string in php then?
[15:07:19] <kevcha> my locale are all set to en_US.UTF-8
[15:07:21] <Derick> you need to make sure you have one....
[15:07:34] <kevcha> i didn't change my php.ini and this is not working anymore today :/
[15:07:35] <Derick> show your script (as a pastebin)?
[15:07:43] <Nodex> I strip everything outside the UTF8 range
[15:07:54] <Derick> locales have nothing to do with encoding of strings in PHP
[15:08:36] <kevcha> it's from lithium framework Derick, that's a simple save() with data coming from a form
[15:08:59] <Derick> and the form is presented in an output that has the Content-Type: text/html; charset=utf-8 header?
[15:09:00] <kevcha> wait, i'm gonna write a pastebin with more low level interactions :)
[15:09:53] <taf2_> kali: you know that is probably what it is thank you
[15:09:58] <kevcha> application/x-www-form-urlencoded
[15:10:21] <taf2_> adding a replica set to a sharded cluster with no replicas… it seems to be working; it was a little bumpy but we're online
[15:10:26] <Derick> that's only the form kevcha
[15:10:30] <Derick> you need to check the HTTP headers
[15:11:58] <taf2_> the biggest issue i had was when i reconfigured to include the new replica: it had not finished updating the new replica, and i made the mistake of restarting the current primary… this caused it to become a secondary while the new replica was in the startup2 phase… eventually, to fix this, i had to reconfigure to remove the startup2 replica, and now the original replica is primary again and the app is happy
[15:13:23] <kevcha> Derick, Content-Type:text/html; charset=UTF-8
[15:17:08] <taf2_> w00t replica set is live
[15:18:52] <kfb4> hey, can someone give me a hand on the best way to update a set of docs?
[15:19:15] <taf2_> write a loop and call update?
[15:19:19] <kfb4> :)
[15:19:36] <kfb4> I'm trying to avoid resizing issues since i've got a very large dict in the docs
[15:21:35] <kfb4> basically, i'm keeping a top 500 map of {key: count} and I don't know if it's better to overwrite the whole map at once (i.e. {$set: {top_500: <value>}}) or to update it like {$inc: {"top_500.foo": 3}, $unset: {"top_500.bar": 1}, $set: {"top_500.baz": 50}}.
[15:21:48] <kfb4> Is one of those going to be substantially better or worse than the other?
[15:22:23] <kfb4> Cause i don't know how document sizing/resizing will treat constantly dropping/adding fields vs just overwriting as a block.
[15:28:56] <kfb4> anyone have thoughts on it? i've seen disk usage spike and i'm trying to figure out if this is the root cause
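For kfb4's question: under 2.4's mmapv1 storage, $set-ing the whole map rewrites the entire subdocument and is the more likely of the two forms to grow the document past its allocated record and force a move (a plausible source of the disk-usage spike), while targeted operators touch only the named keys. A sketch of both forms (collection and variable names are invented):

    // Whole-map overwrite: simple, but rewrites the whole subdocument.
    db.counters.update({ _id: someId }, { $set: { top_500: newMap } })

    // Targeted updates: only the named keys change.
    db.counters.update({ _id: someId }, {
      $inc:   { "top_500.foo": 3 },
      $unset: { "top_500.bar": 1 },
      $set:   { "top_500.baz": 50 }
    })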
[16:39:07] <rafaelhbarros> I'm gonna say something very wrong, please forgive me.
[16:39:33] <Nodex> no
[16:39:34] <rafaelhbarros> I have to work with app engine ndb, does anyone have a reference for someone that knows mongodb?
[16:39:43] <rafaelhbarros> sorry dude, I <3 money.
[16:39:50] <rafaelhbarros> even tho I <3 mongo too...
[16:39:55] <Nodex> money is the root of all evil :P
[16:40:13] <rafaelhbarros> but ndb? anything that I can argue with?
[16:41:26] <taf2> it's a f'n good feeling to have my replicas now for my shards… sleeping at night knowing i can lose a server
[16:41:49] <taf2> once i realized it was not too big of a deal to bounce my shards and add a replica
[16:42:00] <taf2> also of course once we had the $$ to add the extra hardware :D
[16:42:00] <rafaelhbarros> taf2: sleeping 3 nights in a row after replicating is awesome.
[16:42:14] <rafaelhbarros> it goes on.
[16:42:25] <rafaelhbarros> every time you think about that, every morning...
[16:44:06] <Nodex> rafaelhbarros : I don't have a clue what ndb is
[16:44:27] <rafaelhbarros> app engine db written by Guido van Rossum
[16:44:42] <rafaelhbarros> doesn't matter who wrote it tho, it can suck a lot because he still has a boss.
[16:44:45] <rafaelhbarros> had*
[16:45:13] <Nodex> I don't understand what you're asking
[16:45:45] <rafaelhbarros> is there anything that you can argue against using it, anything that would help me to make a point and use mongo
[16:45:48] <rafaelhbarros> instead of ndb
[16:45:53] <rafaelhbarros> anything that I could be overlooking
[16:46:07] <Nodex> as I said, I don't have a clue what it is, there don't have a clue what it does
[16:46:12] <Nodex> therefore*
[16:47:15] <rafaelhbarros> Nodex: well, thanks anyway.
[16:48:06] <Nodex> what I would say, and it's true for any app: don't base it purely on money
[16:57:51] <chaotic_good> ok
[16:57:58] <chaotic_good> to restore a mongodump
[16:58:12] <chaotic_good> on a new box, do I initialize a replica first?
[17:00:28] <taf2> oh i have the same question as chaotic_good
[17:00:52] <chaotic_good> probably not huh
[17:03:00] <vincnt> Yo, quick question, can I join a replicaset from the slave?
[17:03:38] <vincnt> can't find anything on the internet
[17:03:40] <Derick> vincnt: no, but you can easily do it from the primary
[17:03:44] <Derick> rs.add()
[17:04:10] <vincnt> Derick: thanks for the answer
[17:10:32] <chaotic_good> the trick is to use the PORT when you run the add
[17:10:33] <chaotic_good> :)
[17:10:39] <chaotic_good> then slave adds ez
[17:10:41] <chaotic_good> ;)
[17:11:02] <chaotic_good> rs.add("blah.me.com:17017") for example
[17:11:11] <chaotic_good> ;)
[17:11:33] <Derick> you only need to add the port if it's not the default 27017
[17:16:02] <davi1015> I am looking to scan for chained replicas in my clusters (I have a lot). However, looking at the docs, it doesn't seem to say whether syncFrom would be an added field on chained secondaries, or how you would go about determining if you are currently chained.
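One way to scan for chaining (a sketch, based on the 2.4-era rs.status() output, where each syncing member reports a syncingTo field naming its sync source):

    var s = rs.status();
    var primary = s.members.filter(function (m) { return m.state === 1; })[0];
    s.members.forEach(function (m) {
      // chained = syncing from something other than the primary
      if (m.syncingTo && primary && m.syncingTo !== primary.name)
        print(m.name + " is chained via " + m.syncingTo);
    });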
[17:16:17] <chaotic_good> so you set up new vm
[17:16:19] <chaotic_good> add mongo
[17:16:24] <chaotic_good> start mongo
[17:16:37] <chaotic_good> then do mongorestore on the folder where I backed up mongo with mongodump
[17:16:41] <chaotic_good> and life will be good?
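For chaotic_good's scenario, the usual sequence (a sketch; paths are placeholders) is simply a standalone mongod plus mongorestore; no replica set is needed just to restore, though you can rs.initiate() first if the box is to become one:

    mongod --dbpath /data/db --fork --logpath /var/log/mongod.log
    mongorestore /path/to/dump    # the directory mongodump produced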
[17:37:38] <chaotic_good> anyone here doing e-commerce with javascript and mongo? and something like node?
[17:37:42] <chaotic_good> skip sql
[17:37:54] <chaotic_good> skip cherokee or aolserver
[17:37:58] <chaotic_good> skip perl php
[17:46:07] <Nodex> I'm doing mostly APIs with node and mongo but I find its async nature frustrating at times
[17:50:52] <Nodex> + not sure I would do e-commerce fully with MongoDB tbh, certainly for transactional data I would use something else
[18:34:37] <chaotic_good> isn't mongo transactional?
[18:34:47] <chaotic_good> when the slave is forced to disk?
[18:53:05] <gbhatnag> hi all - any reason why saving a new model (using mongoose's async model.save()) in a recursive method would not work? I'm having an issue where it's called thousands of times and nothing gets saved. When used outside of the recursive function with a few nodes, it works fine...
[18:55:57] <gbhatnag> here's the code: http://pastebin.com/L3vFkdZX
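A common cause of the symptom gbhatnag describes is kicking off thousands of async save() calls and never waiting for their callbacks, so the process moves on (or exits) before anything is flushed. A sketch of recursing only from inside the callback (names are hypothetical, since the pastebin contents aren't reproduced here):

    function saveAll(items, i, done) {
      if (i >= items.length) return done(null);
      var doc = new Model(items[i]);      // Model: a hypothetical mongoose model
      doc.save(function (err) {           // recurse only once this save completes
        if (err) return done(err);
        saveAll(items, i + 1, done);
      });
    }
    saveAll(allItems, 0, function (err) { if (err) console.error(err); });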
[19:42:15] <odin_mighty> hi
[19:42:18] <odin_mighty> i am new here
[19:42:32] <odin_mighty> would like to know how do i start off with learning mongodb
[19:42:38] <odin_mighty> and this whole noSQL thingy
[20:19:55] <steckerhalter> I always get "connection closed" using the native nodejs driver. I also tried the suggested tip from the FAQ with the timeout but it still happens quite often. ideas?
[20:20:22] <steckerhalter> next step would probably be to try mongoose
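The FAQ tip steckerhalter refers to is, as far as I recall, the driver's socket options; a sketch for the 2013-era native driver (option names are from memory and worth double-checking against the driver docs):

    var MongoClient = require('mongodb').MongoClient;
    MongoClient.connect('mongodb://localhost:27017/test', {
      server: { socketOptions: { keepAlive: 1, connectTimeoutMS: 30000 } }
    }, function (err, db) {
      if (err) throw err;
      // Reuse this one db handle for the life of the process;
      // opening/closing per request is a classic source of
      // "connection closed" errors.
    });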
[20:29:42] <stu_> Could anyone help me with http://stackoverflow.com/questions/19937499/mongodb-bean-compositedata-retrival ?
[20:48:12] <redondo> hi all! There are important differences between mongodb's document-oriented model and redis's key-value store aproch?
[20:48:15] <redondo> aproach
[20:48:23] <redondo> are there!
[20:48:25] <redondo> ?
[20:49:03] <kali> yes
[20:49:07] <kali> one is document oriented
[20:49:11] <kali> the other is key-value
[20:50:12] <kali> check out what is what there: http://en.wikipedia.org/wiki/NoSQL
[20:50:43] <redondo> kali, if we say that "documents" are BSON (binary JSON) doesn't that imply that "documents" are key-value data structures?
[20:50:53] <redondo> kali, I will, thanks.
[20:51:57] <kali> redondo: actually, either this article has been changed in the last few months, or i am confusing it with something else
[20:52:29] <kali> redondo: it's less good than it was in my mind
[20:53:17] <redondo> I ask again: if we say that "documents" are BSON (binary JSON) doesn't that imply that "documents" are key-value data structures?
[20:53:50] <kali> yeah, but that's not the point
[20:54:32] <kali> the point is what the database is aware of, and how the data can be accessed
[20:54:56] <jyee> redondo: i guess it depends on how you define key-value data structure.
[20:56:26] <jyee> i mean, you could argue that for the vast majority of schemas, mysql is a key-value data store on a table level
[20:56:41] <redondo> jyee, my definition does not go beyond what a JSON object is, which I cannot see as really different from python dicts, or any key-value data structure, like mongodb's "document". Am I wrong?
[20:57:40] <jyee> redondo: you're not wrong, but saying that redis and mongo are both key-value is a bit misleading for the question
[20:58:43] <redondo> jyee, could you shed some light on the differences between both models?
[20:58:55] <kali> redondo: a document is similar in role to a record in a sql database, and fields are similar to columns. a python User class would be mapped to a collection, an instance to one single document
[21:00:16] <kali> redondo: in a k/v store, the granularity is different: your user store would span a number of k/v entries, and each k/v atom would be a property value
[21:00:46] <jyee> redondo: i haven't used redis, but as far as i know, it stores string values and variations thereof, so one major difference (again from my very limited exposure to redis) is that it can't do multiple levels of nested objects (or key-value pairs).
[21:01:02] <jyee> but i'd be happy to know if i'm wrong
[21:01:25] <kali> jyee: nope, that's about right. or else it is a blob for the database
[21:01:57] <kali> nothing prevents storing a complex json or json-like structure in a redis value, but for redis it's opaque
[21:02:44] <kali> then of course, redis is a bit more clever than a brute k/v store, with a few specific data structures, so it exceeds the strict k/v store scope
[21:03:20] <kali> and mongodb exceeds a strict document store scope too, but in different planes or directions
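The practical upshot of kali's point: MongoDB can match, project, and index fields inside a stored document, while redis treats a JSON string value as an opaque blob the client must fetch and parse. A small mongo-shell illustration (names invented):

    db.users.insert({ name: "ana", address: { city: "Paris" } })
    db.users.ensureIndex({ "address.city": 1 })
    db.users.find({ "address.city": "Paris" })  // server-side match on a nested field
    // in redis the same value would be SET as one string, and any
    // filtering on city would have to happen in the client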
[21:03:56] <redondo> kali, jyee, I'm getting it.
[21:04:43] <redondo> kali, but, sorry, when you say document I still hear key-value structure. Is that right?
[21:05:20] <kali> redondo: it is locally a k/v structure, sure
[21:07:10] <redondo> kali, so where or when is it not key-value? I suppose it is when that key-value structure is stored in a mongodb database.
[21:07:46] <redondo> I mean, when a key-value structure is stored in mongodb it is then more than just a key-value structure
[21:07:47] <redondo> ?
[21:07:48] <kali> redondo: i'm sorry, i won't get into this kind of discussion on irc. please check out the docs, and look at the mongodb api and redis api. the differences are obvious
[21:08:12] <redondo> kali, ok. sorry for the bother.
[21:10:05] <redondo> Sure the differences are obvious, for those who can see them. I just wanted to have a little background to continue diving into docs and apis.
[21:13:08] <erg> hey, i have a mongoose query with find that i copy/pasted into another file. in the first case, it's returning objects, in the second it's returning a doc and i have to call toObject() on each result. it's confusing me why this could be the case. any idea?
[22:05:53] <padan> rather new to map/reduce stuff. thought something like this would work, but it returns nothing... says there were 0 emits. Can someone take a peek? JS for map/reduce: http://pastebin.com/39NKk6Y3 ... class I am storing: http://pastebin.com/GVfMZbX7 ... I can use normal find methods and find documents that meet the criteria
[22:05:59] <padan> not sure what i'm doing wrong
[22:08:21] <kali> padan: if you show me a sample document, i won't feel insulted by the c# paste
[22:08:46] <kali> padan: preferably one matching the sample.LevelId == 3604199 condition
[22:09:02] <padan> sure
[22:09:04] <padan> moment
[22:09:28] <chaotic_good> use forth
[22:09:35] <chaotic_good> and simple data structures
[22:11:29] <padan> Unless there is a really good reason to not use the simple data structures I've got in place, I'd prefer to not rearchitect the solution.
[22:11:46] <padan> kali - odd, i'm not finding one now. let me look at that. i know it was there not long ago...
[22:11:52] <kali> i think you can safely ignore chaotic_good's comments :)
[22:12:09] <kali> padan: well, that would explain a few things, right
[22:12:36] <padan> i suppose i should be happy that mongo doesn't make things up
[22:12:37] <padan> :)
[22:13:40] <kali> yeah, usually idiotic bloggers complain more about it destroying things
[22:14:22] <chaotic_good> huh>?
[22:14:43] <chaotic_good> http://users.ece.utexas.edu/~adnan/pike.html
[22:14:48] <chaotic_good> www.colorforth.com
[22:18:19] <erg> why the random link to colorforth? :p
[22:20:52] <erg> forth day is nov 16. i'm going
[22:20:59] <erg> (at stanford)
[22:25:33] <padan> kali - thanks for the offer of help... it appears that it works if i use valid ids :)
[22:25:57] <kali> padan: no puzzle, then :(
[22:28:03] <chaotic_good> simplicity
[22:28:12] <chaotic_good> use simple data structures and algos
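For reference, a minimal shape for padan's job (collection name invented; the field and value follow the sample.LevelId == 3604199 condition discussed above):

    db.samples.mapReduce(
      function () { emit(this.LevelId, 1); },                // map: one emit per doc
      function (key, values) { return Array.sum(values); },  // reduce
      { query: { LevelId: 3604199 }, out: { inline: 1 } }
    )
    // "0 emits" simply means the query matched no documents,
    // which is what the puzzle turned out to be.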
[23:08:28] <fredfred> all potential bugs should be reported in jira?
[23:29:20] <dangayle> Hi. I have a friend who tells me he's having issues with duplicate content in his mongodb. Are there some low-hanging fruit a dev should be on the lookout for?
[23:30:15] <dangayle> I run our local user group, so I get to field all these questions :)
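One common low-hanging fix for dangayle's friend is a unique index on whatever field should be distinct; in the 2.4 era, dropDups could even remove existing duplicates during the build (destructively, keeping an arbitrary survivor per key, so back up first). A sketch with an invented collection and field name:

    // unique: reject future duplicates; dropDups (2.4-era, destructive):
    // keep one doc per key and delete the rest during the build.
    db.content.ensureIndex({ url: 1 }, { unique: true, dropDups: true })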