[03:12:53] <ranmacar> the loadProfile function prints the correct JSON, but the app.post handler prints the query object somehow
[03:15:29] <ranmacar> does anyone know what is happening there?
[03:23:30] <ranmacar> can anyone see what is wrong in this? http://pastebin.com/8TRj2AMq
[04:51:35] <preaction> I have a document with a list of tags, how do I query for all documents with at least one tag matching from an array of tags to search for?
[04:52:08] <preaction> do I have to combine $in with $or, or will $in suffice?
[05:00:25] <preaction> never mind. turns out if i actually read the docs instead of skimming them, i'd know that if "field" is an array, it only has to match one element of the array i give it
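A minimal sketch of that kind of query in the mongo shell (the collection name "posts", the field name "tags", and the tag values are made up here): because "tags" is an array field, $in alone matches any document where at least one array element is in the given list, so no $or is needed.

    db.posts.find({ tags: { $in: ["mongodb", "replication", "sharding"] } })  // matches docs whose tags array shares at least one value with the list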
[05:57:34] <Vishnevskiy> Hello, does an upsert produce an oplog entry even if nothing was modified?
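The question goes unanswered in the log, but one way to check it empirically on a replica set member (the collection "items" and the values are hypothetical) is to run an upsert that changes nothing and then look at the newest oplog entry:

    db.items.update({ _id: 1 }, { $set: { x: 1 } }, true)                     // upsert; a no-op if the document already has x: 1
    db.getSiblingDB("local").oplog.rs.find().sort({ $natural: -1 }).limit(1)  // inspect the most recent oplog entry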
[11:21:05] <Seb> I have a replica set with one master, one slave, and one arbiter; my "foo" DB on the master is doing just fine, but somehow the slave chokes on replication with "replSet syncThread: 10320 BSONElement: bad type 104"
[11:25:29] <Seb> I've already tried to stop the slave, wipe out all the foo DB files, drop the DB on the master, restore from a dump, then restart the slave, but it doesn't do any good
[11:25:50] <Seb> db.repairDatabase() on the master, although it completes successfully, doesn't fix the problem either
[13:18:47] <msch> which makes me trust his other non-sourced stuff way more
[13:21:50] <msch> Nodex: also http://docs.mongodb.org/manual/core/write-operations/ clearly states that REPLICAS_SAFE just means that the write has been "propagated" to the other servers. that's not safe in any way
[13:22:29] <Nodex> it's written by yet another butthurt developer who thought that MongoDB could save his app and was a one-size-fits-all solution, only to find out it's not
[13:22:30] <kwizzles> ok, friend just told me the news, is it true? i mean wtf?
[13:22:54] <kwizzles> just tell me if it's true? I can't believe it
[13:23:07] <msch> kwizzles: it seems like both points I raised are true.
[13:23:21] <msch> kwizzles: e.g. bulk operations have ContinueOnError which can't be disabled
[13:23:42] <msch> kwizzles: and REPLICAS_SAFE isn't safe in any sense of the word, and there's no way to specify that you want something to be really safe
[13:23:49] <kwizzles> i just joined, didn't see what you wrote before msch
[13:24:05] <msch> kwizzles: http://docs.mongodb.org/manual/core/write-operations/ clearly states that REPLICAS_SAFE just means that the write has been "propagated" to the other servers. that's not safe in any way
[13:24:08] <Nodex> kwizzles : what is it you want to know is true?
[13:24:33] <kwizzles> but you're sure that REPLICAS_SAFE is not safe? I doubt that, because even in its name there is the word "SAFE"!!??
[13:24:33] <msch> kwizzles: and "All bulk operations to a sharded collection run with ContinueOnError, which applications cannot disable." which due to the semantics of getLastError means you can't even know if any but the last operation failed
[13:25:05] <kwizzles> why do they call it REPLICAS_SAFE then?
[13:25:17] <msch> kwizzles: yeah but read what the documentation says. REPLICAS_SAFE just means that 2 nodes received the write, not that they have persisted it.
[13:25:33] <msch> kwizzles: and there's no way to say "make sure 2 nodes received the write and have persisted it"
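For context, these are the knobs under discussion as exposed by getLastError in the 2.2/2.4-era shell (a sketch only, using a made-up collection "things"; not a claim that this resolves msch's objection): w: 2 waits for two members to acknowledge the write, and j: true additionally waits for the journal commit on the primary.

    db.things.insert({ _id: 1, value: "x" })
    db.runCommand({ getLastError: 1, w: 2, j: true, wtimeout: 5000 })  // ack from 2 members, journaled on the primary, give up after 5s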
[13:29:56] <Nodex> "If the bulk insert process generates more than one error in a batch job, the client will only receive the most recent error. All bulk operations to a sharded collection run with ContinueOnError, which applications cannot disable. See Strategies for Bulk Inserts in Sharded Clusters section for more information on consideration for bulk inserts in sharded clusters."
[13:31:40] <msch> Nodex: i read that. i want bulk inserts because "Bulk insert can significantly increase performance by amortizing write concern costs". But in sharding-bulk-inserts it still says I can't have bulk inserts with getLastError.
[13:31:47] <Nodex> the large array of high profile users of MongoDB is null and void because some troll wrote an article, oh noes
[13:32:27] <msch> Nodex: can you please keep the actual issues. i read that article and i'm only asking about those two issues, because they actually are important
[13:32:55] <msch> Nodex: sorry should've read "keep to the actual issues"
[13:33:05] <Nodex> msch : as with most things Mongo .. if you don't like it or it doesn't suit your needs then don't use it
[13:34:36] <msch> Nodex: well yeah sure. but I would like to use mongo. it's only that the devs make decisions I can't understand, which makes me complain. how hard can it be to introduce a getLastErrorsForBulkOperation or to call something by its real name, PROPAGATED, instead of lying and saying it's SAFE?
[13:34:40] <Nodex> the very simple answer is don't use bulk inserts, they are NOT mandatory; they increase performance when loading data by trading off on write concerns
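A sketch of the trade-off Nodex describes, in the shell (the collection "events" and the documents are made up): inserting one document at a time lets you check getLastError per write, at the cost of the amortized-write-concern benefit that bulk insert provides.

    var docs = [{ _id: 1 }, { _id: 2 }, { _id: 3 }];
    docs.forEach(function (doc) {
        db.events.insert(doc);
        var gle = db.runCommand({ getLastError: 1, w: 1 });  // per-document error check
        if (gle.err !== null) print("insert of _id " + doc._id + " failed: " + gle.err);
    });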
[13:36:12] <kwizzles> i used oracle a lot in business
[13:36:23] <Nodex> the article writer has blown it out of proportion, claiming that everything is unsafe
[13:36:27] <kwizzles> but for $$ reasons i cant use it everywhere
[13:36:32] <msch> Nodex: but why did they call it SAFE. that's dishonest. that's like if I offer you a "working thing" and in the fine print it says "oh yeah and what I call "working" actually means "broken""
[13:36:35] <kwizzles> i used postgres a bit too and was happy
[13:36:45] <Nodex> msch : it's only unsafe on BULK
[13:36:52] <msch> kwizzles: you will be happy with Postgres then. They take data integrity seriously.
[13:37:01] <msch> Nodex: yes. which means it's not safe. how hard is this to understand?
[13:37:03] <Nodex> kwizzles : if you need transactions and RDI then don't use mongo
[13:37:07] <kwizzles> then they should call it REPLICA_SAFE_SOMETIMES
[13:37:24] <Nodex> "Bulk insert can significantly increase performance by amortizing write concern costs. In the drivers, you can configure write concern for batches rather than on a per-document level."
[13:37:46] <Nodex> IT ONLY HAPPENS ON BULK INSERTS
[13:37:49] <dorong> what does the "maintenanceMode" item with a value of -3 mean inside the primary member of rs.status() ? (meaning, when running rs.status(), the key members[0].maintenanceMode equals -3)
[13:41:50] <msch> kwizzles: http://www.postgresql.org/docs/9.2/static/warm-standby.html#SYNCHRONOUS-REPLICATION explains Postgres replication. it works like you'd expect it to.
[13:42:02] <msch> kwizzles: also read about the WAL because that's different from Oracle afaik http://www.postgresql.org/docs/9.2/static/wal.html
[13:51:25] <algernon> right or not, the presentation wasn't entertaining
[13:51:36] <Nodex> both of those idiots were trolling
[13:51:49] <Nodex> I imagine msch is the author' friend
[13:52:52] <usestrict> do people seem to be generally unaware that using nosql stuff, which is essentially fancy kv-stores, is different from using an RDBMS?
[13:53:13] <Nodex> I can imagine that even with dbs with crazy high integrity you can still lose data when a machine crashes or whatever and nothing has been flushed to disk
[13:53:35] <Nodex> usestrict : people don't seem to get the point that they're 2 different tools
[13:55:42] <Nodex> sorry that's a lie, they have a relational grasp of sense LOL
[13:56:40] <Nodex> I imagine a few more trolls over the next few days while that article does the rounds of the interwebs. No doubt being replicated in a few Mongo shards / Replica sets along the way
[13:59:21] <Nodex> now the article makes sense. the author is one of the main devs for a rival database/store
[14:25:11] <Zelest> what about find_and_modify? is this feature "bad" in terms of performance?
[15:35:49] <Zelest> are integers in mongodb 32bit or 64bit by default?
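The question goes unanswered here, so for what it's worth (the collection name "nums" is invented): the mongo shell stores bare numbers as 64-bit floating point doubles, drivers typically pick a 32-bit or 64-bit BSON integer based on the value's size, and recent shells provide explicit wrappers.

    db.nums.insert({ a: 1 })              // typed in the shell: stored as a 64-bit double, not an integer
    db.nums.insert({ b: NumberLong(1) })  // explicit 64-bit integer
    db.nums.insert({ c: NumberInt(1) })   // explicit 32-bit integer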
[15:48:25] <dorong> what does the "maintenanceMode" item with a value of -3 mean inside the primary member of rs.status() ? (meaning, when running rs.status(), the key members[0].maintenanceMode equals -3)
[15:56:11] <Zelest> hmms, i've never used mongo that way, but can one reference another field of the same document in the query? e.g. if I have a date field and an integer, can i do something like find({date: {$gte: new Date(somefield_in_this_doc)}}) ?
[15:56:42] <Zelest> what I'm curious about is having a "find things older than X minutes" query where I want to define X per doc
[16:03:51] <Zelest> and when you do, it doesn't take advantage of indexes, does it?
[16:04:03] <Nodex> I am not sure on that, I don't map/reduce
[16:05:00] <Nodex> it kinda would be a nice(ish) thing to be able to do tbh
[16:05:17] <Nodex> there are a lot of use cases for it
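A sketch of the per-document comparison Zelest describes, using a JavaScript $where clause (the collection "jobs" and the fields "created" and "ttl_minutes" are hypothetical, and "created" is assumed to be a Date); as noted above, $where runs JavaScript against every document and cannot use indexes.

    // "older than X minutes", where X (ttl_minutes) is stored in each document
    db.jobs.find({ $where: "this.created.getTime() + this.ttl_minutes * 60 * 1000 < new Date().getTime()" })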
[16:06:42] <sander> When compiling mongoclient I want it to be written to a configuration folder (debug or release). The "--prefix=" flag doesn't work. Is this possible and if so, how could I approach this?
[17:02:28] <owen1> i want to add a compound index - app_id and user_id. i only have 3 different app ids and many many user ids. does it mean the user_id should be the first field in the index and the app_id second?
[17:04:53] <owen1> Nodex: "The order of fields in a compound index is very important. In the previous example, the index will contain references to documents sorted first by the values of the item field and, within each value of the item field, sorted by the values of location, and then sorted by values of the stock field."
[17:04:53] <Nodex> what you should be doing with indexes is creating them to best serve the queries you actually run
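A sketch of one of the two possible orderings for owen1's case (the collection name "installs" is made up); since a compound index is only usable through its prefixes, the deciding factor is which single-field queries also need to be covered, not just which field has more distinct values.

    db.installs.ensureIndex({ app_id: 1, user_id: 1 })
    db.installs.find({ app_id: 2, user_id: 12345 })  // uses the full compound index
    db.installs.find({ app_id: 2 })                  // uses the { app_id: 1 } prefix
    db.installs.find({ user_id: 12345 })             // user_id alone is not a prefix, so this index does not help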
[17:09:31] <Nodex> tbh I am not 100% sure what happens with indexes on secondaries
[17:09:49] <Nodex> I don't know if the exact params get passed over the wire and they all build together or what
[17:10:19] <Nodex> it would be more efficient to let the secondaries build first and shift reads / primary status to them on large indexes, but I am not sure if that's built in
[17:14:05] <owen1> that's what i currently do (3 hosts) - 'mongodb://push.np.wc1.yellowpages.com,push2.np.wc1.yellowpages.com,push3.np.wc1.yellowpages.com/push'
[17:28:55] <owen1> Nodex: i'll read about the connect function. thanks!
[17:32:41] <barroca> Hello, I wasn't able to find the answer searching the Interwebs. I have a list of records, each with a saleid, and I have to update the documents with the same saleid and insert the new ones. The update function does the trick, but I wasn't able to come up with a simple criteria
[17:33:36] <barroca> I'm iterating over the list and updating like this: db.collection.update({'@id':saleid},sale,True)
[17:34:15] <Nodex> not sure I understand the question barroca
[17:36:28] <barroca> I'm iterating over the list and updating like this: db.collection.update({'@id':saleid},sale,True)
[17:38:30] <barroca> Nodex: thanks for the help. to update elements in a DB using the update function, if they are in a list, do I need to iterate over the elements of that list and use the field as the criteria?
[17:39:35] <owen1> Nodex: i think this should work: 'mongodb://push.np.wc1.yellowpages.com,push2.np.wc1.yellowpages.com,push3.np.wc1.yellowpages.com/push?readPreference=nearest'
[17:39:36] <barroca> Nodex: or is there a better way of defining the criteria? let's say my list is [{saleid:1,data:DDD},{saleid:2,data:DDD}]
[17:39:44] <owen1> according to http://docs.mongodb.org/manual/reference/connection-string/#connections-standard-connection-string-format
[17:40:28] <barroca> Nodex: and I wish to add all the elements of this list, updating those in the DB that have the same saleid
[17:44:51] <owen1> db.foo.getIndexes() returns an empty array. do i have to manually add an index for _id?
[17:46:14] <Nodex> it should already be there owen
[17:46:47] <owen1> Nodex: yeah. it exists in one collection but not in the other
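For what it's worth, the _id index is created automatically when the collection itself is created, so an empty getIndexes() usually just means the collection has never been written to (using "foo" as in owen1's example):

    db.foo.getIndexes()      // [] if the collection does not exist yet
    db.foo.insert({ x: 1 })  // creating the collection also creates the _id index
    db.foo.getIndexes()      // now shows the automatic { _id: 1 } index named "_id_"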
[17:47:06] <Nodex> barroca : so you want to update all documents where data=DDD ?
[17:50:09] <Nodex> not sure it matters, it will eventually be pushed
[17:52:40] <barroca> Nodex: might be, but I'd like to update all documents that have the same saleid as those i have in the list. I've actually solved this using http://pastebin.com/FiMPqQ6X but I don't know if the processing is worse than a one-liner that might exist to solve the problem.
[17:55:14] <Nodex> if you want to update multiple documents with the SAME data then you can do it in one line, else you HAVE to iterate
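A sketch of that per-saleid upsert loop in the shell (the list follows barroca's example; the collection name "sales" is made up); since each record carries its own data, the iteration Nodex mentions is unavoidable here.

    var sales = [{ saleid: 1, data: "DDD" }, { saleid: 2, data: "DDD" }];
    sales.forEach(function (sale) {
        // upsert: replace the document with the same saleid, or insert it if it is new
        db.sales.update({ saleid: sale.saleid }, sale, true);
    });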
[19:13:22] <kali> your $unwind and $group and $unwind is O(n^2)
[19:13:54] <bobbytek> it also prevents me from reading stuff into the client
[19:14:15] <bobbytek> and the n here is very very small
[19:14:32] <bobbytek> for the children and sub-children I mean
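The pipeline itself isn't shown in the log, but a generic version of the $unwind / $group / $unwind pattern being discussed looks roughly like this (the collection "orders" and the field names are invented); this is the shape kali flags as O(n^2) in the unwound array size.

    db.orders.aggregate([
        { $unwind: "$children" },                                                                          // one doc per array element
        { $group: { _id: "$_id", children: { $push: "$children" }, total: { $sum: "$children.qty" } } },   // rebuild the array per parent
        { $unwind: "$children" }                                                                           // flatten again
    ])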
[19:23:21] <Sharcho> [Ruby on Rails] I'm trying to set up errbit. Any idea why I get the error "db_name must be a string or symbol" when running from passenger, but not when running "rails server" with thin?
[21:28:22] <danshipper> hey guys I'm having a problem with Mongo's MMS (monitoring system). does anyone have experience with it?
[21:29:16] <danshipper> basically what's happening is the monitoring service is only picking up my database.sessions collection. none of the other collections are being picked up
[21:29:28] <danshipper> i have the same issue with newrelic, so I'm assuming it's some sort of mongo configuration issue
[21:39:23] <danshipper> does anyone have somewhere they could point me to read about how i could debug this?
[21:39:39] <danshipper> i've read through the docs and filed a bug report already