[00:19:18] <someprimetime> can someone help me figure out range queries for mongo/mongoose
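A range query in MongoDB is just a filter document combining comparison operators; Mongoose passes the same object to `Model.find()`. A minimal sketch, with made-up field and collection names:

```python
# Filter matching documents whose "age" field lies in the half-open range [18, 65).
# The same document shape works in the mongo shell, Mongoose, and pymongo.
range_filter = {"age": {"$gte": 18, "$lt": 65}}

# With pymongo it would be used as:
#   db.users.find(range_filter)
# With Mongoose (JavaScript), the equivalent is:
#   User.find({ age: { $gte: 18, $lt: 65 } }, callback)
```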
[01:12:30] <ctp> hi folks. did anyone get mongodb installed from the official 10gen debian repo? These are my issues: http://stackoverflow.com/questions/17015719/mongodb-not-installable-from-10gens-debian-repository
[01:49:47] <ixti> do I understand correctly that concurrent map-reduces should not affect each other?
[01:50:08] <ixti> even if they both run against the same collection
[01:54:16] <ruffyen> anyone know if you can do a query based distinct in pymongo?
[02:08:31] <ruffyen> disregard, decided to go the route of using Python's sets module
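For what it's worth, the server's `distinct` command accepts a `query` field, so a query-based distinct doesn't need client-side set logic. A sketch with hypothetical collection and field names:

```python
# Raw "distinct" command document: distinct values of "status" restricted to
# documents matching the "query" filter. Names here are made up.
cmd = {"distinct": "orders", "key": "status", "query": {"total": {"$gt": 100}}}

# With pymongo this can be run either as the raw command:
#   db.command(cmd)
# or via the cursor's distinct helper:
#   db.orders.find({"total": {"$gt": 100}}).distinct("status")
```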
[07:46:58] <ctp> Hi all. Did anyone install MongoDB via 10gen's Debian repo successfully? http://superuser.com/questions/605772/mongodb-not-installable-from-10gens-debian-repository
[07:49:27] <Bartzy> What is a good naming convention for mongo hosts in a cluster ?
[07:49:29] <Bartzy> mongos, mongod's , mongo arbiters..? How do you name them?
[08:08:09] <[AD]Turbo> what is the default ordering of a plain collection.find({}) ?
[08:10:33] <ron> on a non-sharded database, it would be the insertion order, iirc.
[10:18:31] <deepy> I need something resembling a cache and someone suggested mongoDB, I don't really care about the speed of inserting but lookups should be as fast as possible. My keys are Java long's. Is there anything I should read before I jump into this?
[10:19:04] <War2> you may want to look up Redis as well, if your data can fit in system's RAM.
[10:19:17] <War2> contrast/compare the two, is what I'd recommend.
[10:21:25] <mishoch> Hello people, does anyone know how to pass "global parameters" to the map function in mongoid 3? Mongoid 2 used to have a "scope" parameter that was given as a part of the settings in the map_reduce(mapfunc, reducefunc, settings) call
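If the driver/ODM doesn't expose it, the underlying `mapReduce` command's `scope` field is what injects read-only globals into the map/reduce functions, and you can always issue the raw command. A sketch with made-up collection and variable names:

```python
# Raw mapReduce command with a "scope" field; "factor" becomes a global
# visible inside the JavaScript map/reduce functions.
map_reduce_cmd = {
    "mapReduce": "events",
    "map": "function() { emit(this.type, factor); }",   # "factor" comes from scope
    "reduce": "function(k, vals) { return Array.sum(vals); }",
    "out": {"inline": 1},
    "scope": {"factor": 2},
}

# With pymongo:
#   db.command(map_reduce_cmd)
```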
[10:24:34] <mishoch> @deepy - agreeing with War2 on this
[11:11:58] <deepy> Oh right, that reminds me. The lifespan of my entries is the lifespan of my application
[11:23:17] <War2> deepy: that's a "cache with fast lookups" you want, or are you looking at replacing functionality of the original data store with it?
[12:01:11] <deepy> War2: it'll store a serialized protobuf, a counter which is used internally and the key is also used
[13:10:15] <phrearch> I'm not sure. I just expected that when I set a subdocument's value and save the parent doc, it would store the changed values. Somehow it doesn't.
[13:49:42] <jsteunou> (because i'm not fluent in english)
[13:49:53] <jsteunou> (so I search for my words to write correct sentences)
[13:50:28] <ron> jsteunou: still... take the time, write what you want and only then click enter. nobody will run away (more than they would anyways) - I promise.
[14:02:52] <jsteunou> I do not have the issue with find().snapshot().forEach(...)
[14:03:31] <jsteunou> It seems that an update inside the find().forEach(...) puts the document back into the cursor
[14:03:59] <jsteunou> not so weird, but I can't find any doc about it
[14:06:23] <jsteunou> http://docs.mongodb.org/manual/reference/method/cursor.snapshot/ tells "Queries with results of less than 1 megabyte are effectively implicitly snapshotted. "
[14:06:33] <jsteunou> maybe that's why I did not have this behavior before
[14:06:40] <jsteunou> also I changed a date to a string
[14:07:09] <jsteunou> so maybe the hash / signature of the object changed
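The likely mechanism: an update that grows a document can move it on disk, and a plain cursor can then return it a second time. The `$snapshot` query modifier (the operator behind `cursor.snapshot()`) forces traversal via the `_id` index so each document is returned at most once. A sketch, with hypothetical collection and field names:

```python
# Query document using the $snapshot modifier; $query holds the actual filter.
# This is the same guarantee as find(...).snapshot() in the shell.
snapshot_query = {"$query": {"status": "pending"}, "$snapshot": True}

# In pymongo 2.x style:
#   for doc in db.jobs.find(snapshot_query):
#       ...update doc without it reappearing in this cursor...
```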
[14:39:25] <bobinator60> when i $match on two fields in a list of embedded documents, it seems that the document will match if each match element is found in different embedded documents. here's a gist in pseudo-json of what I'm talking about: https://gist.github.com/rbpasker/0f136a052c6fa983457b.
[14:39:47] <bobinator60> both documents in my example will match, but I only want #1
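That is the documented behavior of dotted-field conditions: `{"items.a": 1, "items.b": 2}` matches if *any* array element satisfies each condition separately. `$elemMatch` requires a single embedded document to satisfy all of them. A sketch with made-up field names:

```python
# $match stage requiring ONE element of "items" to have both a == 1 and b == 2,
# rather than letting different elements satisfy each condition.
match_stage = {"$match": {"items": {"$elemMatch": {"a": 1, "b": 2}}}}

# In pymongo:
#   db.coll.aggregate([match_stage])
# The same filter also works directly with find().
```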
[16:55:20] <rhalff> I've tried this: require('mongoose').connect( 'mongodb://127.0.0.1/gedicht_nu' ); That's the whole script, and it hangs.
[16:55:53] <rhalff> but nodejs folks say that's the nature of node
[16:56:00] <idletom> struggling with this aggregation, i need to filter out results where id is null : https://gist.github.com/willhunting/ca18e86789ed2a45d778
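The usual fix for null-keyed results is a `$match` stage after the `$group`, filtering on `_id`. A sketch (the `$group` shown is hypothetical, standing in for the one in the gist):

```python
# Pipeline: group by a field, then drop the group whose _id is null.
pipeline = [
    {"$group": {"_id": "$category", "count": {"$sum": 1}}},
    {"$match": {"_id": {"$ne": None}}},  # None serializes to BSON null
]

# In pymongo:
#   db.coll.aggregate(pipeline)
```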
[17:46:03] <b0c1> Can I calculate an update value with db update? example: db.collection.update( { field: value1 }, { $set: { field1: somefield + value2 } } ); ?
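Short answer: no, a `$set` update document cannot reference another field's value server-side. If the computation is an addition, `$inc` does it atomically; anything else takes a read-modify-write round trip. A sketch with hypothetical names:

```python
# Atomic server-side increment: field1 = field1 + 5.
inc_update = {"$inc": {"field1": 5}}
# db.coll.update({"field": "value1"}, inc_update)

# General case (NOT atomic): read the document, compute, write back.
#   doc = db.coll.find_one({"field": "value1"})
#   db.coll.update({"_id": doc["_id"]},
#                  {"$set": {"field1": doc["somefield"] + value2}})
```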
[20:32:24] <Glace> Hey guys, I seem to have constant heavy write iops after sharding using the hashed shard key. The balancer is turned off.. Any ideas what can be causing this...
[20:33:10] <Glace> It's hitting the maximum of my PIOPS in a constant fashion (2000).. we did not have this behavior before sharding.. I can actually handle fewer requests right now.. but the shard masters are being hit
[20:51:26] <n06> i.e if timeLockedMicros = 1000 and timeLockedAcquiring = 500 is the actual lock time 500?
[20:53:58] <n06> the docs don't seem too specific on the matter, it's a bit vaguely worded, so i was hoping someone could help me
[21:02:50] <Guest87211> i'm using mongo with mongoid on a rails project, I just set up a standard 1-N embedded documents relationship. When i call <rails_model>.update_attributes(<new_data>)... it doesn't update the existing attributes on the document, instead it creates a new embedded document that contains the new attributes.. wtf. anyone encounter this?
[21:03:12] <Guest87211> oh, and that's when I edit a child document
[21:21:16] <Guest87211> basically it comes down to does anyone know what would cause mongoid to generate this query: update={"$set"=>{".brand"=>"lol"}} and yet in my rails model AND new data that i'm attempting to set "brand" is "brand" and not ".brand"
[21:46:36] <federated_life> I'm getting a bunch of these error messages in mongosniff ( *** Invalid IP header length: 0 bytes ) anyone seen something similar ?
[22:51:45] <nemothekid> If I have safe mode turned off, and issue a remove({"_id":x}) then an insert({"_id":x}), is there any reason why the insert shouldn't succeed if the remove succeeded?
[23:20:05] <nfx> hello, I was wondering if someone here could help me with disabling the mongo balancer mentioned here: http://docs.mongodb.org/manual/tutorial/manage-sharded-cluster-balancer/#sharding-balancing-disable-temporally
[23:21:12] <nfx> I'm able to execute the shell command successfully along with the underlying command it abstracts: "db.settings.update( { _id: "balancer" }, { $set : { stopped: true } } , true );"
[23:22:14] <nfx> I need to disable the balancer in code using pymongo. Right now I have something like this:
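A pymongo equivalent of that shell command would target the `settings` collection of the `config` database on a mongos; the shell's third positional argument is the upsert flag. A sketch, assuming a pymongo 2.x-style connection object named `connection`:

```python
# Documents mirroring the shell command
#   db.settings.update({_id: "balancer"}, {$set: {stopped: true}}, true)
filter_doc = {"_id": "balancer"}
update_doc = {"$set": {"stopped": True}}

# In pymongo (run against a mongos):
#   config = connection["config"]
#   config.settings.update(filter_doc, update_doc, upsert=True)
```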