PMXBOT Log file Viewer

#mongodb logs for Wednesday the 6th of February, 2013

[00:39:17] <themoebius_> I turned off balancing, but I'm still seeing a lot of lines like this:
[00:39:22] <themoebius_> Wed Feb 6 00:37:06 [cleanupOldData-51111624335e39b612de7d55] moveChunk deleted 48700 documents for pb3.hourly_stats from …..
[00:39:34] <themoebius_> and it's basically pegging my disk IO
[00:39:37] <themoebius_> why would this be?
[00:40:04] <themoebius_> it's been doing this for hours
[03:10:44] <ranmacar> Hello! i am having a bit of trouble with mongoose - the result of a query gets transformed into the query object it seems
[03:10:56] <ranmacar> here is part of the code: http://pastebin.com/8TRj2AMq
[03:11:14] <ranmacar> am i missing something?
[03:12:53] <ranmacar> the loadProfile function prints the correct json, but the app.post prints the query object somehow
[03:15:29] <ranmacar> anyone knows what is happening there?
[03:23:30] <ranmacar> anyone can see what is wrong in this? http://pastebin.com/8TRj2AMq
[04:51:35] <preaction> I have a document with a list of tags, how do I query for all documents with at least one tag matching from an array of tags to search for?
[04:52:08] <preaction> do I have to combine $in with $or? or with $in suffice?
[05:00:25] <preaction> nevermind. turns out if i actually read the docs instead of skim them, i'd know that if "field" is an array, it has to match only one of the array i give it
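preaction's conclusion above is the documented behaviour: a query on an array field matches if any single element satisfies the condition, so `{tags: {$in: [...]}}` finds documents sharing at least one tag with the search list. A minimal pure-Python sketch of that matching rule (no driver involved; the sample documents are invented):

```python
def matches_any_tag(doc_tags, search_tags):
    """Mimics {tags: {$in: search_tags}} on an array field:
    true if at least one element overlaps."""
    return any(tag in search_tags for tag in doc_tags)

docs = [
    {"_id": 1, "tags": ["db", "nosql"]},
    {"_id": 2, "tags": ["sql"]},
]
hits = [d["_id"] for d in docs if matches_any_tag(d["tags"], ["nosql", "cache"])]
print(hits)  # [1]
```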
[05:57:34] <Vishnevskiy> Hello, does an upsert produce an oplog entry even if nothing was modified?
[06:12:14] <Vishnevskiy> Anyone here? =)
[06:52:16] <lledet> hi, is it possible to do something like owner: {$ne: null, this: 'some id'} in the same "clause"? (i'm brand new to mongo)
[08:34:53] <[AD]Turbo> hola
[09:42:07] <BadCodSmell> Are there any recent bugs that can cause importing bson to result in a segfaut?
[09:42:16] <BadCodSmell> segfoot*
[09:58:17] <Nodex> segfoot or segfault?
[10:03:14] <fdf> Hi, if i`m adding new replica to existing sharded cluster with rs.add() the data will be auto rebalance to the new replica ? thanks
[11:01:04] <BadCodSmell> I have journalling on, why do I still need this mongo.lock stuff?
[11:01:28] <Nodex> because it's a database lock
[11:02:04] <BadCodSmell> ok I'mm just rm it in init.d
[11:02:52] <Nodex> are you referring to an old lock file?
[11:03:45] <BadCodSmell> Yes
[11:03:49] <BadCodSmell> sounds easiest to just rm it
[11:04:55] <BadCodSmell> everytime before mongo starts
[11:05:59] <Nodex> or perhaps shut down mongo cleanly and you wont get it ;)
[11:12:21] <BadCodSmell> This is not an option
[11:12:39] <BadCodSmell> Really mongo was the wrong tool for the job but we're stuck with it
[11:12:58] <BadCodSmell> mongo is very unreliable
[11:15:23] <Nodex> it's fine for everyone else who uses it, many many high traffic sites
[11:20:36] <Seb> hi fellows
[11:21:05] <Seb> I have a replica set with one master, one slave, and one arbiter; my "foo" DB on the master is doing just fine, but somehow the slave chokes on replication with "replSet syncThread: 10320 BSONElement: bad type 104"
[11:21:13] <Seb> where should I be looking at ?
[11:21:40] <Seb> (all nodes of the RS are running the same version, namely 2.0.4)
[11:24:56] <Seb> tricky one, eh ? :)
[11:25:29] <Seb> I've already tried to stop the slave, wipe out all the foo DB files, drop the DB on the master, restore from a dump, then restart the slave, but it doesn't do any good
[11:25:50] <Seb> db.repairDatabase() on the master, although it completes successfully, doesn't fix the problem either
[11:42:56] <Seb> :(
[11:43:10] <Nodex> have patience
[11:52:31] <kali> Seb: you can investigate by looking at the oplog positions to see what update breaks the replication
[11:54:26] <Seb> kali: I'm having trouble comprehending at all how a perfectly working master DB is failing to sync to a slave...
[11:55:50] <kali> Seb: well, it should not happen, this is a bug.
[11:55:57] <Seb> kali: but anyway, would I start with rs.status() on the slave and get its oplog position ?
[11:56:07] <Seb> kali: I'm less clear on what to do *after* that
[11:57:35] <kali> look for the matching update in the local / oplor.rs collection
[11:57:40] <kali> oplog
[11:59:29] <Seb> kali: that would be on the master, then, right ?
[12:00:16] <Seb> kali: would you be so kind as how to telling what that query would look like ? I'm more on the Ops side of things than the Dev one ;)
[12:00:26] <Seb> s/as how/as/
[12:00:37] <kali> Seb: mmm if you're not comfortable with this, i can't realy help you
[12:01:55] <Seb> kali: I'm perfectly comfortable querying mongo and all; I just have no clue what the local/oplog.rs collection looks like
[12:02:35] <kali> i won't write commands for you. this kind of stuff is too tricky to be done this way.
[12:02:44] <Seb> kali: I mean, if this is an upstream bug, I'm willing to go the extra mile to extract information that will lead to its resolution
[12:03:44] <Seb> kali: I figured it would be a reasonable request to have someone sorta lead me along the way in order to extract said information :|
[12:04:34] <kali> Seb: if the hints i gave you are not enough for you, your best guess may be to get some support from 10gen
[12:04:54] <Seb> kali: all right, cheers
[12:54:52] <msch> hi, are the points made in this article true? http://hackingdistributed.com/2013/01/29/mongo-ft/
[13:14:05] <Nodex> you mean you paid attention past the first 3 babbling paragraphs to read the rest?
[13:15:51] <solars> does anyone know sources discussing this recent mongodb "broken by design" article?
[13:16:34] <Nodex> msch the guy who wrote that article is retarded
[13:16:53] <Nodex> half of his methods are half wrong making him 1/4 right
[13:17:17] <solars> thats what I'm looking for
[13:17:19] <solars> :)
[13:17:20] <Nodex> solars . which article ?
[13:17:25] <msch> Nodex: is there a point-by-point refutation of his points? because some seem plausible
[13:17:28] <solars> http://hackingdistributed.com/2013/01/29/mongo-ft/
[13:17:47] <Nodex> msch : yes - the documentation :)
[13:18:04] <solars> but hes referring to the docs sometimes, that's why I'm asking
[13:18:32] <msch> Nodex: ok so "All bulk operations to a sharded collection run with ContinueOnError, which applications cannot disable."
[13:18:34] <msch> is true
[13:18:36] <msch> that's already a serious issue
[13:18:47] <msch> which makes me trust his other non-sourced stuff way more
[13:21:50] <msch> Nodex: also http://docs.mongodb.org/manual/core/write-operations/ clearly states that REPLICAS_SAFE just means that the write has been "propagated" to the other servers. that's not safe in any way
[13:22:29] <Nodex> it's written by yet another butt hurt developer who thought that Mongodb could save his app and was one shoe fits all only to find out it's not
[13:22:30] <kwizzles> ok, friend just told me the news, is it true? i mean wtf?
[13:22:54] <kwizzles> just tell me if its true? I can't believe it
[13:23:06] <Nodex> tell you what's true?
[13:23:07] <msch> kwizzles: it seems like both points I raised are true.
[13:23:21] <msch> kwizzles: e.g. bulk operations have ContinueOnError which can't be disabled
[13:23:42] <msch> kwizzles: and REPLICAS_SAFE isn't safe in any way of the word and there's no way to specify that you want something to be really safe
[13:23:49] <kwizzles> i just joined, didnt see what you wrote before msch
[13:23:53] <kwizzles> what both points?
[13:24:03] <kwizzles> ah ok
[13:24:05] <msch> kwizzles: http://docs.mongodb.org/manual/core/write-operations/ clearly states that REPLICAS_SAFE just means that the write has been "propagated" to the other servers. that's not safe in any way
[13:24:08] <Nodex> kwizzles : what are you wanting to know what's true?
[13:24:33] <kwizzles> but you're sure that REPLICAS_SAFE is not safe? I doubt that, because even in its name there is the phrase "SAFE"!!??
[13:24:33] <msch> kwizzles: and "All bulk operations to a sharded collection run with ContinueOnError, which applications cannot disable." which due to the semantics of getLastError means you can't even know if any but the last operation failed
[13:25:05] <kwizzles> why they call it replicas_SAFE then?
[13:25:17] <msch> kwizzles: yeah but read what the documentation says. REPLICAS_SAFE just means that 2 nodes received the write, not that they have persisted it.
[13:25:33] <msch> kwizzles: and there's no way to say "make sure 2 nodes received the write and have persisted it"
[13:25:39] <kwizzles> just read it
[13:25:44] <kwizzles> thats disgusting!
[13:25:53] <kwizzles> I feel like ab idiot now ...
[13:25:57] <msch> kwizzles: it is. the mongo people are lying to us :/
[13:26:01] <kwizzles> an*
[13:26:10] <kwizzles> that really is disgusting
[13:26:18] <kwizzles> :(
[13:26:20] <kwizzles> sad sad
[13:26:51] <Nodex> how are they lying?
[13:27:03] <Nodex> it clearly states that on BULK inserts certain things differ
[13:27:51] <Nodex> it also links a page which talks about it and how to manage it
[13:28:28] <msch> Nodex: where does it say anything about managing it? i don't see that
[13:28:47] <msch> Nodex: it's lying because that's not SAFE that's "KINDA_MAYBE"
[13:29:14] <msch> they should have called it PROPAGATED. that'd be honest
[13:29:30] <kali> omg, not again
[13:29:31] <solars> makes sense
[13:29:47] <Nodex> http://docs.mongodb.org/manual/core/write-operations/
[13:29:56] <Nodex> "If the bulk insert process generates more than one error in a batch job, the client will only receive the most recent error. All bulk operations to a sharded collection run with ContinueOnError, which applications cannot disable. See Strategies for Bulk Inserts in Sharded Clusters section for more information on consideration for bulk inserts in sharded clusters."
[13:30:09] <Nodex> which links to
[13:30:10] <Nodex> http://docs.mongodb.org/manual/tutorial/manage-chunks-in-sharded-cluster/#sharding-bulk-inserts
[13:30:43] <Nodex> http://docs.mongodb.org/manual/core/write-operations/#bulk-inserts <---- if you can't be bothered to read the page
[13:31:08] <Nodex> kali ++
[13:31:40] <msch> Nodex: i read that. i want bulk inserts because "Bulk insert can significantly increase performance by amortizing write concern costs". But in sharding-bulk-inserts it still says I can't have bulk inserts with getLastError.
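The ContinueOnError behaviour under discussion can be sketched in pure Python (a toy model, not driver code; the duplicate-key data is invented): the batch continues past failures, and getLastError-style reporting only ever surfaces the most recent error, exactly as the quoted docs state.

```python
def bulk_insert(docs, continue_on_error):
    """Toy model of a bulk insert: unique-key violations are errors,
    and only the most recent error is reported (getLastError-style)."""
    store, last_error, inserted = {}, None, 0
    for doc in docs:
        if doc["_id"] in store:
            last_error = f"duplicate key: {doc['_id']}"
            if not continue_on_error:
                break
            continue
        store[doc["_id"]] = doc
        inserted += 1
    return inserted, last_error

docs = [{"_id": 1}, {"_id": 1}, {"_id": 2}, {"_id": 2}, {"_id": 3}]
print(bulk_insert(docs, continue_on_error=True))   # (3, 'duplicate key: 2')
print(bulk_insert(docs, continue_on_error=False))  # (1, 'duplicate key: 1')
```

With ContinueOnError the earlier duplicate on `_id: 1` is invisible to the caller; only the last error survives.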
[13:31:47] <Nodex> the large array of high profile users of MongoDB are null and void because some troll wrote an article, oh noes
[13:32:27] <msch> Nodex: can you please keep the the actual issues. i read that article and i'm only asking about those two issues. because they actually are important
[13:32:39] <Nodex> keep the issues?
[13:32:55] <msch> Nodex: sorry should've read "keep to the actual issues"
[13:33:05] <Nodex> msch : as with most things Mongo .. if you dont like it or it doesn't suit your needs then don't use it
[13:34:36] <msch> Nodex: well yeah sure. but I would like to use mongo. it's only that the devs make decisions that I can't understand that make me complain. how hard can it be to introduce a getLastErrorsForBulkOperation or to call something by it's real name PROPAGATED instead of lying and saying it's SAFE?
[13:34:40] <Nodex> the very simple answer is dont use bulk inserts, they are NOT mandatory, they increase performance when loading data by trading off on write concerns
[13:34:59] <Nodex> where have they lied?
[13:35:00] <kwizzles> what are good alternatives to mongodb?
[13:35:07] <Nodex> kwizzles : mysql :)
[13:35:10] <msch> Nodex: calling something SAFE that isn't
[13:35:14] <kwizzles> I am seriously thinking to use CSV files again
[13:35:15] <kali> Nodex: got me on this one :)
[13:35:16] <msch> kwizzles: we use postgresql and it's pretty great
[13:35:21] <kwizzles> at least CSV does not lie
[13:35:22] <msch> kwizzles: what's your use case?
[13:35:31] <Nodex> it clearly states in the Docs that it has no write concern
[13:35:48] <dorong> hello
[13:35:56] <Nodex> people can't blame Mongodb if they cant be bothered to read the docs
[13:36:05] <kwizzles> i need real data integrity
[13:36:12] <kwizzles> i used oracle a lot in business
[13:36:23] <Nodex> the article writer has blown it out of context claiming that everything is unsafe
[13:36:27] <kwizzles> but for $$ reasons i cant use it everywhere
[13:36:32] <msch> Nodex: but why did they call it SAFE. that's dishonest. that's like if I offer you a "working thing" and in the fineprint it says "oh yeah and what I call "working" actually means "broken""
[13:36:35] <kwizzles> i used postgres a bit too and was happy
[13:36:45] <Nodex> msch : it's only unsafe on BULK
[13:36:52] <msch> kwizzles: you will be happy with Postgres then. They take data integrity serious.
[13:37:01] <msch> Nodex: yes. which means it's not safe. how hard is this to understand?
[13:37:03] <Nodex> kwizzles : if you need transactions and RDI then dont use mongo
[13:37:07] <kwizzles> then they should call it REPLICA_SAFE_SOMETIMES
[13:37:11] <Nodex> OMFG how thick are you
[13:37:22] <Nodex> read the docs
[13:37:24] <Nodex> "Bulk insert can significantly increase performance by amortizing write concern costs. In the drivers, you can configure write concern for batches rather than on a per-document level."
[13:37:46] <Nodex> IT ONLY HAPPPENS ON BULK INSERTS
[13:37:49] <dorong> what does the "maintenanceMode" item with a value of -3 mean inside the primary member of rs.status() ? (meaning, when running db.status(), the following key members[0].maintenanceMode equals -3)
[13:37:51] <Nodex> 6 times
[13:37:53] <msch> kwizzles: i'll just ignore him. anything you want to know about Postgres?
[13:38:09] <kwizzles> is there a howto to switch from mongo to a real database like postgres
[13:38:09] <Nodex> msch, you're retarded, you want answers but dont want to hear them
[13:38:13] <kali> msch: this is off topic, please take this conversation somewhere else.
[13:38:16] <Nodex> kwizzles : /j #postgres
[13:38:17] <kwizzles> export import like
[13:38:28] <Nodex> go to PM or something
[13:38:41] <kwizzles> mongodb seriously could use the line "Nomen est Omen" as a line
[13:38:52] <kwizzles> fucking still cant believe it
[13:39:03] <msch> kwizzles: https://stripe.com/blog/announcing-mosql seems perfect for migrating a live system off of mongo and to postgres
[13:39:28] <kwizzles> Just talked to tech manager and we might go to sue the Mongos
[13:39:35] <kwizzles> thanks msch
[13:39:58] <msch> kwizzles: sure. happy to help
[13:40:06] <Nodex> lol sue the mongos lmfao
[13:40:36] <kwizzles> I guess this stuff won't ever get fixed
[13:40:45] <Nodex> what "stuff" are you referring to?
[13:40:48] <kwizzles> the amazing thing is that its not a bug but a serious design flaw
[13:40:58] <Nodex> where does it claim to have RDI ?
[13:41:18] <kali> it's not a design flaw, it's a design choice
[13:41:22] <Nodex> ++
[13:41:24] <kwizzles> 1440.00 < kali> it's not a design flaw, it's a design choice
[13:41:26] <kwizzles> HAHA
[13:41:28] <kali> i should not even talk to you.
[13:41:34] <kali> this is a troll
[13:41:38] <Nodex> troll ++
[13:41:50] <msch> kwizzles: http://www.postgresql.org/docs/9.2/static/warm-standby.html#SYNCHRONOUS-REPLICATION explains Postgres replication. it works like you'd expect it to.
[13:42:02] <msch> kwizzles: also read about the WAL because that's different from Oracle afaik http://www.postgresql.org/docs/9.2/static/wal.html
[13:42:04] <usestrict> are we SAFE, yeti?
[13:42:06] <kali> msch: keep the conversation on topic please.
[13:42:12] <Nodex> aww poor me, mongodb doesn't do what I want... it must be the fault of the developers that wrote it not my design
[13:43:30] <kwizzles> ah yea WAL is the redolog thing in oracle
[13:43:52] <kwizzles> Nodex: you're simplifying things ...
[13:44:00] <Nodex> yes I am
[13:44:03] <Nodex> for a good reason
[13:44:06] <usestrict> seems to boil down to: if you want safe inserts, don't use bulk insert
[13:44:14] <Nodex> Boom - nail on the head
[13:44:19] <Nodex> usestrict :)
[13:44:26] <msch> usestrict: it boils down to: if what you call a safe insert is not safe then you're dishonest
[13:44:41] <Nodex> it secondly boils down to RTFM
[13:44:45] <kwizzles> i think this link says everything: http://googlefight.com/index.php?lang=en_GB&word1=mongodb&word2=postgresql
[13:45:02] <kwizzles> and threads are unsafe too?
[13:45:06] <usestrict> dishonest, or slightly optimistic, or slightly sloppy in naming your functions ;)
[13:45:20] <Nodex> kwizzles LOL ... postgres = 15 years or google indexing, mongodb = 5 years
[13:45:24] <Nodex> of *
[13:46:28] <kwizzles> still postgresql > Mongo
[13:46:36] <Nodex> they're 2 different databases
[13:46:49] <Nodex> where does MongoDB claim to be a drop in replacement for anything?
[13:47:56] <kwizzles> yea its not nosql sure
[13:48:19] <kwizzles> anyways, coworker just laughed at me and said he thought that already in 2012 everybody switched to Cassandra
[13:48:21] <kwizzles> might check that out
[13:48:31] <Nodex> best of luck with your venture :)
[13:48:41] <kwizzles> you sire are
[13:48:44] <msch> kwizzles: i know guys that switched to riak and are happy there. might want to add this to your list
[13:49:15] <Nodex> let us know how you get on suing? (spelling) 10gen
[13:49:23] <kwizzles> http://diegobasch.com/ill-give-mongodb-another-try-in-ten-years
[13:49:31] <kwizzles> haha
[13:49:38] <algernon> if you're going to give it another try, it can't be that bad.
[13:49:40] <Nodex> another troll article
[13:49:40] <kwizzles> so thats the 15-5 you mentioned Nodex
[13:49:58] <kwizzles> Nodex: you are the greatest troll I ever met
[13:50:20] <Nodex> why thankyou
[13:50:26] <Nodex> it's been an honour
[13:50:27] <kwizzles> ok, i am leaving, it gets boring, i dont use database, just wanted to troll a bit
[13:50:30] <kwizzles> cheers
[13:50:39] <algernon> I do wonder if he's using text files.
[13:50:46] <algernon> but.. that's a kind of db too *shrug*
[13:50:52] <Nodex> kids ....
[13:51:06] <Zelest> but he's right?
[13:51:11] <Nodex> lmao
[13:51:14] <Zelest> (havn't read the backlog, just fueling his fire)
[13:51:22] <Zelest> :D
[13:51:25] <algernon> right or not, the presentation wasn't entertaining
[13:51:36] <Nodex> both of those idiots were trolling
[13:51:49] <Nodex> I imagine msch is the author's friend
[13:52:52] <usestrict> do people seem to be generally unaware that using nosql-stuff, which essentially is fancy kv-stores, is different from using RDBMS?
[13:53:13] <Nodex> I can imagine that even db's with crazy high integrity one can still lose data when a machine crashes or w/e and nothing has been flushed to disk
[13:53:35] <Nodex> usestrict : people don't seem to get that point that they're 2 different tools
[13:53:43] <usestrict> Nodex: seems so
[13:53:57] <Nodex> they seem to think that MongoDB claims to replace *SQL / RDBMS and works for every solution
[13:54:10] <usestrict> slightly exaggerated by the fact that nosql-solutions are advertized as "databases"
[13:54:35] <Nodex> well they are "databases" due to the definition of a database
[13:54:39] <Nodex> a CSV file is a database
[13:54:48] <usestrict> yes, but not in the sense people got used to RDBMS
[13:55:03] <Nodex> "A structured set of data held in a computer, esp. one that is accessible in various ways"
[13:55:09] <usestrict> mhm
[13:55:14] <Nodex> most RDBMS users have no sense :P
[13:55:22] <Nodex> which has just been proven
[13:55:32] <usestrict> oh, if you coung mysql as RDBMS, probably ..
[13:55:35] <usestrict> count
[13:55:42] <Nodex> sorry that's a lie, they have a relational grasp of sense LOL
[13:56:40] <Nodex> I imagine a few more trolls over the next few days while that article does the rounds of the interwebs. No doubt being replicated in a few Mongo shards / Replica sets along the way
[13:59:21] <Nodex> now the article makes sense. the author is one of the main devs for a rival database/store
[14:25:11] <Zelest> what about find_and_modify? is this feature "bad" in sense of performance?
[14:30:29] <Nodex> depends on indexes
[14:30:42] <Nodex> it's really just an atomic way of doing a find then an update
[15:01:04] <Zelest> Nodex, ah
[15:35:49] <Zelest> are integers in mongodb 32bit or 64bit by default?
[15:48:25] <dorong> what does the "maintenanceMode" item with a value of -3 mean inside the primary member of rs.status() ? (meaning, when running db.status(), the following key members[0].maintenanceMode equals -3)
[15:48:47] <Nodex> 32 bit @ Zelest
[15:48:54] <Zelest> ah
[15:49:09] <Zelest> might be ruby that does some wierd magic
[15:49:21] <Zelest> an int of value 0 comes out as a float, 0.0 in ruby :o
[15:49:47] <Nodex> https://gist.github.com/anonymous/4723426 ^^
[15:50:11] <Nodex> it's probably the precision ruby uses
[15:50:19] <Zelest> aah
[15:50:42] <Nodex> I'm just speculating as I don't use ruby
[15:51:22] <Zelest> well, tried the same query in my db and it seems right
[15:51:42] <Zelest> couldn't care less how ruby handles it internally, just worried about wasting extra space for no reason, :)
[15:51:50] <Nodex> pretty sure find({foo:0}); will match 0.0 too
[15:52:02] <Zelest> mhm
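Nodex's guess that find({foo:0}) also matches 0.0 reflects BSON behaviour: numeric types (int32, int64, double) compare by value in queries. Plain Python's `==` happens to behave the same way, so the semantics can be illustrated without a server (sample data invented):

```python
# BSON compares numeric types by value, so a query for 0
# matches documents storing 0 (int) or 0.0 (double).
docs = [{"foo": 0}, {"foo": 0.0}, {"foo": 1}]
matches = [d for d in docs if d["foo"] == 0]
print(len(matches))  # 2
```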
[15:56:11] <Zelest> hmms, i've never used mongo that way, but can one use fields in the query? e.g, if I have date field and a integer, can i do something like find({date: {$gte: new Date(somefield_in_this_doc)}}) ?
[15:56:42] <Zelest> what I'm curious about is to have a "find things older than X minutes" where I wish to define X per doc
[15:57:02] <Zelest> is this possible+
[16:03:21] <Nodex> you can't use "this" unless you're map/reducing
[16:03:28] <Nodex> this being the current doc
[16:03:37] <Zelest> ah
[16:03:51] <Zelest> and when you do, it doesn't take advantage of indexes, does it?
[16:04:03] <Nodex> I am not sure on that, I don't map/reduce
[16:05:00] <Nodex> it kinda would be a nice(ish) thing to be able to do tbh
[16:05:17] <Nodex> there are a lot of use cases for it
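One common design for Zelest's "older than X minutes, with X per document" case, given that a plain find can't reference other fields of the current document: compute an explicit expiry timestamp at write time and range-query on it, which is also indexable. A hedged pure-Python sketch with invented field names (`created_at`/`expires_at`):

```python
from datetime import datetime, timedelta

def make_doc(created_at, ttl_minutes):
    # Store the deadline explicitly so a plain (indexable) range
    # query replaces the per-document computation at read time.
    return {"created_at": created_at,
            "expires_at": created_at + timedelta(minutes=ttl_minutes)}

now = datetime(2013, 2, 6, 12, 0)
docs = [make_doc(now - timedelta(minutes=10), 5),
        make_doc(now - timedelta(minutes=10), 30)]
# Equivalent of find({expires_at: {$lte: now}})
expired = [d for d in docs if d["expires_at"] <= now]
print(len(expired))  # 1
```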
[16:06:42] <sander> When compiling mongoclient I want it to be written to a configuration folder (debug or release). The "--prefix=" flag doesn't work. Is this possible and if so, how could I approach this?
[16:07:04] <Nodex> is this driver specific?
[16:07:17] <sander> the command I use to build is "scons mongoclient (--dd)"
[17:01:04] <Firebalrog> Nice breakdown of couchdb redis, and mongodb
[17:01:08] <Firebalrog> https://plus.google.com/107397941677313236670/posts/LFBB233PKQ1
[17:01:49] <Nodex> redis is not really a database
[17:02:28] <owen1> i want to add a compound index - app_id and user_id. i only have 3 different apps ids and many many user ids. does it mean the user_id should be the first index and the app_id second?
[17:02:47] <Nodex> owen1 : the indexes are ltr
[17:03:05] <Nodex> app_id,uid would be ok for app_id + uid or just app_id
[17:03:12] <Nodex> but NOT just UID
[17:03:27] <Firebalrog> redis = memcache with some persistence and other additional features
[17:03:30] <owen1> Nodex: what's ltr
[17:03:34] <Nodex> left to right
[17:03:42] <Firebalrog> rtl
[17:03:43] <Nodex> Firebalrog : redis is awesome
[17:04:06] <owen1> Nodex: why not uid, app_id? i only have 3 different app ids.
[17:04:17] <owen1> the order is important
[17:04:20] <Nodex> owen1 : either way they all end up in the index
[17:04:29] <Nodex> it's either 3x N or Nx3
[17:04:35] <Nodex> = same number ;)
[17:04:53] <owen1> Nodex: "The order of fields in a compound index is very important. In the previous example, the index will contain references to documents sorted first by the values of the item field and, within each value of the item field, sorted by the values of location, and then sorted by values of the stock field."
[17:04:53] <Nodex> what you should be doing with indexes is creating them to make the best use of queries
[17:05:07] <owen1> http://docs.mongodb.org/manual/core/indexes/
[17:05:13] <Nodex> yes it's important because indexes can be re-used
[17:05:18] <Nodex> as I have said above ^^
[17:06:17] <owen1> Nodex: ok. so if i know i'll have some finds based on uid, i should have uid, app_id, right?
[17:06:22] <Nodex> it's not important for the reason you're thinking, which is the number of possible permutations of the index
[17:06:36] <Nodex> foreget the fields, think aboiut your wieries
[17:06:39] <owen1> the left index can be reused
[17:06:39] <Nodex> queries*
[17:07:04] <Nodex> if you index on uid+aid then you can use uid+aid in a query and also uid on it's own
[17:07:15] <owen1> got it. awesome
[17:07:23] <Nodex> but if you did aid first then you cant use uid on it's own
[17:07:34] <Nodex> pretty sure it's on that same page you quoted from
[17:07:43] <Nodex> it explains it a bit better than I can
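The left-to-right behaviour Nodex describes is the compound-index prefix rule: a query can use the index when the fields it constrains form a leading prefix of the indexed fields. A rough pure-Python model of that rule, treating a query as just the set of fields it constrains:

```python
def can_use_index(index_fields, query_fields):
    """True if the queried fields form a non-empty leading
    prefix of the compound index (left to right)."""
    prefix_len = 0
    for field in index_fields:
        if field in query_fields:
            prefix_len += 1
        else:
            break
    return prefix_len == len(query_fields) and prefix_len > 0

idx = ["uid", "app_id"]
print(can_use_index(idx, {"uid", "app_id"}))  # True
print(can_use_index(idx, {"uid"}))            # True  (prefix reused)
print(can_use_index(idx, {"app_id"}))         # False (not a prefix)
```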
[17:07:58] <owen1> Nodex: i have existing collection (about 5 million objects)? can i add an index?
[17:08:09] <owen1> currently it has nono
[17:08:11] <owen1> none
[17:08:20] <Nodex> yer but if you're worried about performance make it work in the background
[17:08:25] <Nodex> background:true
[17:08:42] <owen1> Nodex: wow. cool.
[17:08:57] <Nodex> ;)
[17:09:03] <owen1> Nodex: and in a replica set, i have to add an index to the primary?
[17:09:14] <Nodex> yeh
[17:09:31] <Nodex> tbh I am not 100% sure what happens with indexes on secondaries
[17:09:49] <Nodex> I dont know if the exact params get passed over the wire and they all build together or what
[17:10:19] <Nodex> it would be more efficient to let the secondaries build first and shift reads / primary status to them on large indexes but I a not sure if that's built in
[17:10:27] <Nodex> (then shift primary back)
[17:11:11] <owen1> (not related) - using the node client from 10gen, how to set the read preference to 'nearest' ?
[17:11:43] <Nodex> I dont have a clue but the PHP driver you feed it a seed list so perhaps the same?
[17:11:54] <Nodex> seed list / DSI string
[17:14:05] <owen1> that's what i currently do (3 hosts) - 'mongodb://push.np.wc1.yellowpages.com,push2.np.wc1.yellowpages.com,push3.np.wc1.yellowpages.com/push'
[17:14:20] <owen1> is that similar in php?
[17:16:39] <Nodex> yer
[17:17:05] <owen1> Nodex: so how do u say: read from secondaries and do that with 'nearest'
[17:17:34] <owen1> since the default is read from primary only.
[17:20:50] <Nodex> with tags iirc
[17:20:54] <Nodex> let me find some docs
[17:21:02] <owen1> Nodex: man, u r awesome
[17:21:55] <Nodex> I think (iirc) ;nearest
[17:21:56] <Nodex> http://docs.mongodb.org/manual/applications/replication/#nearest
[17:23:04] <Nodex> http://mongodb.github.com/node-mongodb-native/api-generated/db.html
[17:24:15] <Nodex> there is some examples, you should be able to work it out from them
[17:24:34] <owen1> reading the last lin
[17:24:36] <owen1> k
[17:25:34] <Nodex> var db = new Db('integration_test_', replSet, options);
[17:25:44] <Nodex> where options = an object -> {}
[17:26:09] <Nodex> {var db = new Db('integration_test_', replSet, option);}
[17:26:11] <Nodex> ffs
[17:26:26] <Nodex> var db = new Db('integration_test_', replSet, {readPreference:'nearest'});
[17:26:49] <owen1> i use MongoClient.connect(url, function)
[17:28:01] <Nodex> var mongoclient = new MongoClient(new Server("localhost", 27017, {readPreference:'nearest'}));
[17:28:18] <Nodex> ReadPreference.NEAREST
[17:28:23] <Nodex> something like that
[17:28:55] <owen1> Nodex: i'll read about the connect function. thanks!
[17:32:41] <barroca> Hello, I wasn't able to find the answer searching the Interwebs. I have a list with saleid, and I have to update those with the same saleid and insert the new ones. The update function does the trick, but I wasn't able to create a simple criteria
[17:33:36] <barroca> I'm iterating over the list and updating like this: db.collection.update({'@id':saleid},sale,True)
[17:34:15] <Nodex> not sure I understand the question barroca
[17:36:24] <barroca> db.collection.update({'saleid':<saleid>},sale,True)
[17:36:28] <barroca> I'm iterating over the list and updating like this: db.collection.update({'@id':saleid},sale,True)
[17:38:30] <barroca> Nodex: thanks for the help. To update elements in a DB using the update function, if they were in a list, do I need to iterate over the elements of that list and use the field as the criteria?
[17:39:35] <owen1> Nodex: i think this should work: 'mongodb://push.np.wc1.yellowpages.com,push2.np.wc1.yellowpages.com,push3.np.wc1.yellowpages.com/push?readPreference=nearest'
[17:39:36] <barroca> Nodex: or is there a better way of defining the criteria? let's say my list is [{saleid:1,data:DDD},{saleid:2,data:DDD}]
[17:39:44] <owen1> acccording to http://docs.mongodb.org/manual/reference/connection-string/#connections-standard-connection-string-format
[17:40:28] <barroca> Nodex: and I wish to add all the elements on this list, updating those on the DB that have the same saleid
[17:44:51] <owen1> db.foo.getIndexes() return empty array. do i have to manualy add index for _id?
[17:46:14] <Nodex> it should already be there owen
[17:46:47] <owen1> Nodex: yeah. it exist in one collection but not in the other
[17:47:06] <Nodex> barroca : so you want to update all documents where data=DDD ?
[17:47:17] <Nodex> owen1 : perhaps it's syncing ?
[17:47:45] <owen1> Nodex: it's on my local machine. nothing is syncing, i think
[17:48:07] <Nodex> does it have any data inside it ?
[17:48:17] <Nodex> (the foo collection) ?
[17:49:14] <owen1> Nodex: yeah, i'll just drop this db
[17:49:49] <owen1> also, does it matter if i add the index before or after rs.initiate(config) ?
[17:49:59] <owen1> (it's part of my setup script)
[17:50:09] <Nodex> not sure it matters, it will eventualy be pushed
[17:52:40] <barroca> Nodex: might be, but I'd like to update all documents that have the same saleid that I have on the list. I've actually solved this using http://pastebin.com/FiMPqQ6X but I don't know if the processing is worse than a one-liner that might exist to solve the problem.
[17:55:14] <Nodex> if you want to update multiple documents with the SAME data then you can do it in one line else you HAVE to iterate
[17:59:30] <barroca> Nodex: Thank you.
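barroca's approach (an upsert per list element, keyed on saleid) can be modelled in plain Python, with a dict standing in for the collection and the key lookup standing in for the {'saleid': ...} criteria; as stated above, differing per-document data forces iteration:

```python
def upsert_all(collection, sales):
    """Mimic db.collection.update({'saleid': s['saleid']}, s, True)
    per element: replace the matching doc or insert a new one."""
    for sale in sales:
        collection[sale["saleid"]] = sale  # keyed store stands in for the query
    return collection

db = {1: {"saleid": 1, "data": "old"}}
upsert_all(db, [{"saleid": 1, "data": "DDD"}, {"saleid": 2, "data": "DDD"}])
print(sorted(db))     # [1, 2]
print(db[1]["data"])  # DDD
```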
[18:41:35] <bobbytek> What's the easiest way to query grandchildren array length?
[18:52:58] <Nodex> you cant
[18:53:08] <Nodex> you MUST store the length in a field and query that
[18:54:08] <bobbytek> Nodex: not even using a double $unwind?
[18:54:46] <Nodex> you said nothing about aggregation framework
[18:55:26] <bobbytek> Sorry, ya assume mongo 2.2
[18:55:27] <bobbytek> :)
[18:55:46] <Nodex> that still doesnt assume aggregation framework
[18:55:58] <Nodex> and I still don't think it's possible
[19:03:39] <bobbytek> Nodex: it also does not NOT assume it
[19:03:49] <bobbytek> Nodex: and it is indeed possible
[19:04:52] <Nodex> care to share?
[19:06:29] <bobbytek> sure one sec, just putting the finishing touches on
[19:06:56] <bobbytek> it is essentially project-unwind-project-unwind
[19:07:11] <bobbytek> followed by group with sum
[19:07:34] <Nodex> sounds messy
[19:08:22] <kali> +1
[19:08:56] <bobbytek> it is, but then so is mongo
[19:09:29] <bobbytek> and you can blame mongo for not being expressive enough
[19:10:43] <Nodex> lol
[19:11:00] <Nodex> but then again I plan my things inn advance and store the count - no blame
[19:12:26] <bobbytek> well, there is $size expression but for some odd reason that isn't available in this context.
[19:12:37] <bobbytek> Nodex: I guess you never use db.collection.count()?
[19:12:44] <bobbytek> You store that in advance too?
[19:12:53] <Nodex> lol
[19:12:56] <Nodex> semantics
[19:13:03] <kali> db.collection.count() is O(1)
[19:13:22] <kali> your $unwind and $group and $unwind is O(n^2)
[19:13:54] <bobbytek> it also prevents me from reading stuff into the client
[19:14:15] <bobbytek> and the n here is very very small
[19:14:32] <bobbytek> for the children and sub-children I mean
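bobbytek's project/unwind/unwind/group pipeline amounts to flattening two array levels and counting. A pure-Python equivalent with invented field names (`children`/`grandchildren`); note that a real $unwind would drop documents whose arrays are empty, whereas this sketch reports 0 for them:

```python
def count_grandchildren(docs):
    """Rough equivalent of $unwind children, $unwind grandchildren,
    then $group with {$sum: 1}, per top-level document."""
    counts = {}
    for doc in docs:
        counts[doc["_id"]] = sum(len(child.get("grandchildren", []))
                                 for child in doc.get("children", []))
    return counts

docs = [{"_id": 1, "children": [{"grandchildren": [1, 2]},
                                {"grandchildren": [3]}]},
        {"_id": 2, "children": []}]
print(count_grandchildren(docs))  # {1: 3, 2: 0}
```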
[19:23:21] <Sharcho> [Ruby on Rails] I'm trying to setup errbit. Any idea why I get the error "db_name must be a string or symbol" when running from passenger, but not when running "rails server" with thin?
[19:38:11] <Sharcho> Nevermind, figured it out.
[19:38:51] <chopchop> hello
[19:39:07] <chopchop> is there a way to update n records and not all records ?
[19:59:57] <l2trace99> can anyone tell me how to db.collection.remove({_id: /pattern/ }) with the php driver
[20:00:32] <l2trace99> doing $mongo->collection->remove(array('_id' => '/pattern/' ) ) ;
[20:00:42] <l2trace99> isn't working for me
[20:04:00] <klusias> hello. sometimes value is stored as int, sometimes as string. how can i do proper find(), to get both - string and int values?
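One common answer to klusias's mixed-type question is to list both representations in an $in clause, e.g. find({value: {$in: [5, "5"]}}), since BSON does not coerce between strings and numbers. A pure-Python sketch of that matching (sample data invented):

```python
def find_value(docs, value):
    """Match docs whose field is stored as either int or str,
    like {value: {$in: [value, str(value)]}}."""
    wanted = {value, str(value)}
    return [d for d in docs if d.get("value") in wanted]

docs = [{"value": 5}, {"value": "5"}, {"value": 6}]
print(len(find_value(docs, 5)))  # 2
```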
[20:48:38] <tab1293> how can i get all the ids of the torrents in a data structure like this {torrents: [ {id:1, name:1}, {id:2, name:2}]}
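tab1293's extraction can be done client-side once the document is fetched; in Python terms (the structure below mirrors the example in the question):

```python
doc = {"torrents": [{"id": 1, "name": 1}, {"id": 2, "name": 2}]}
# Pull the id out of each embedded document.
ids = [t["id"] for t in doc["torrents"]]
print(ids)  # [1, 2]
```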
[20:55:41] <Bilge> lol
[20:55:54] <Bilge> Get a load of this guy
[20:58:55] <tab1293> Bilge: ?
[21:28:22] <danshipper> hey guys I'm having a problem with Mongo's MMS (monitoring system). does anyone have experience with it?
[21:29:16] <danshipper> basically what's happening is the monitoring service is only picking up my database.sessions collection. none of the other connections are being picked up
[21:29:28] <danshipper> i have the same issue with newrelic, so I'm assuming it's some sort of mongo configuration issue
[21:39:23] <danshipper> does anyone have somewhere they could point me to read about how i could debug this?
[21:39:39] <danshipper> ive read through the docs and filed a bug report already