PMXBOT Log file Viewer


#mongodb logs for Monday the 10th of June, 2013

[00:19:18] <someprimetime> can someone help me figure out range queries for mongo/mongoose
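No answer is recorded in the log, but for reference, a range query in the mongo shell combines comparison operators on one field; the collection and field names below are invented for illustration, with the mongoose query-builder form sketched in a comment:

    // find documents whose "price" falls between two bounds, inclusive
    db.products.find({ price: { $gte: 10, $lte: 50 } })

    // mongoose exposes the same operators via its query builder, e.g.:
    //   Product.find().where('price').gte(10).lte(50).exec(callback);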
[01:12:30] <ctp> hi folks. did anyone get mongodb installed from the official 10gen debian repo? These are my issues: http://stackoverflow.com/questions/17015719/mongodb-not-installable-from-10gens-debian-repository
[01:49:47] <ixti> do I understand correctly that concurrent map-reduces should not affect each other?
[01:50:08] <ixti> even if they both run against the same collection
[01:54:16] <ruffyen> anyone know if you can do a query based distinct in pymongo?
[02:08:31] <ruffyen> disregard, decided to go the route of using python's sets module
[02:10:56] <ixti> ruffyen: http://api.mongodb.org/python/1.4/api/pymongo/cursor.html#pymongo.cursor.Cursor.distinct
[02:11:24] <ruffyen> ixti: right so the answer to my question was no, no there is not a way with pymongo
[02:11:27] <ruffyen> many thanks
[02:11:44] <ixti> o_O
[02:12:09] <ixti> the answer was: here, take a look at that method, it's what you were looking for
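For reference, the shell's distinct() accepts an optional query document as its second argument, and pymongo's Cursor.distinct (the method ixti linked) gives the same effect on a filtered cursor; collection and field names here are invented for illustration:

    // "query-based distinct" in the mongo shell
    db.orders.distinct("status", { customer_id: 42 })

    // the pymongo equivalent runs distinct on a filtered cursor:
    //   db.orders.find({"customer_id": 42}).distinct("status")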
[07:39:22] <[AD]Turbo> hi all
[07:46:58] <ctp> Hi all. Did anyone install MongoDB via 10gen's Debian repo successfully? http://superuser.com/questions/605772/mongodb-not-installable-from-10gens-debian-repository
[07:49:27] <Bartzy> What is a good naming convention for mongo hosts in a cluster ?
[07:49:29] <Bartzy> mongos, mongod's , mongo arbiters..? How do you name them?
[08:08:09] <[AD]Turbo> what is the default criteria of a general collection.find({}) ?
[08:08:15] <[AD]Turbo> insertion order? timestamp?
[08:09:34] <ron> [AD]Turbo: 'natural oderding'
[08:09:44] <ron> ordering, even
[08:10:09] <[AD]Turbo> thx ron
[08:10:33] <ron> on a non-sharded database, it would be the insertion order, iirc.
[10:18:31] <deepy> I need something resembling a cache and someone suggested mongoDB, I don't really care about the speed of inserting but lookups should be as fast as possible. My keys are Java longs. Is there anything I should read before I jump into this?
[10:19:04] <War2> you may want to look up Redis as well, if your data can fit in system's RAM.
[10:19:17] <War2> contrast/compare the two, is what I'd recommend.
[10:21:25] <mishoch> Hello people, does anyone know how to pass "global parameters" to the map function in mongoid 3? Mongoid 2 used to have a "scope" parameter that was given as a part of the settings in the map_reduce(mapfunc, reducefunc, settings) call
[10:24:34] <mishoch> @deepy - agreeing with War2 on this
[10:25:59] <Zelest> deepy, hai!
[10:26:08] <Zelest> deepy, I would use MongoDB and TTL collections..
[10:26:22] <Zelest> or a capped collection
[10:26:50] <Nodex> I would use redis/memcached if you're simply after some caching
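To make Zelest's suggestions concrete, a rough sketch of both options in the mongo shell, with collection and field names invented for illustration (ensureIndex was the helper of this era, later renamed createIndex; the TTL field must hold BSON dates):

    // TTL index: documents expire ~3600s after their createdAt value
    db.cache.ensureIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 })

    // capped collection: bounded size, oldest entries evicted first
    db.createCollection("cache", { capped: true, size: 100 * 1024 * 1024 })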
[10:35:49] <ron> deepy: you're here too? o_O
[11:07:26] <deepy> Zelest: o/
[11:07:32] <deepy> War2: the whole point is that I cannot store all my data in RAM
[11:07:49] <deepy> or depending on where I deploy I might not want to store it all in RAM or I might want to
[11:07:57] <deepy> ron: I'm everywhere :-)
[11:08:06] <ron> stop stalking me!
[11:08:21] <deepy> I have a mongoDB coffee mug, do you?
[11:08:46] <ron> I used to. Left it at my previous workplace.
[11:09:52] <deepy> Then I'm way more mongoDB than you, I have three
[11:10:24] <ron> but I don't use mongoDB at all, so nya.
[11:10:40] <deepy> Me neither, so nya nya to you.
[11:11:18] <ron> pfft.
[11:11:58] <deepy> Oh right, that reminds me. The lifespan of my entries is the lifespan of my application
[11:23:17] <War2> deepy: that's a "cache with fast lookups" you want, or are you looking at replacing functionality of the original data store with it?
[12:01:11] <deepy> War2: it'll store a serialized protobuf, a counter which is used internally and the key is also used
[12:01:50] <ron> protobuf. pfft.
[12:42:06] <fredix> hi
[12:42:14] <fredix> anyone use the C++ driver ?
[13:01:34] <phrearch> hi
[13:01:50] <phrearch> anyone have an idea why this update code for a subdoc in mongoose doesn't actually save? http://paste.kde.org/767990/
[13:08:27] <deepy> How unique does the _id need to be?
[13:09:13] <phrearch> ehm its always unique right?
[13:09:24] <starfly> right
[13:10:15] <phrearch> I'm not sure. I just expected that when I set a subdocument's value and save the parent doc, it would store the changed values. Somehow it doesn't.
[13:10:31] <phrearch> http://paste.kde.org/767996/
[13:10:35] <phrearch> this is the schema
[13:12:24] <phrearch> if i remove an element with splice or add one with push, then the doc is saved alright
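The pastes are no longer available, but the splice/push detail is a classic symptom of mongoose's change tracking: paths it cannot observe (a field of type Mixed, or an element assigned by array index) look unchanged at save() time. A hedged sketch of the usual fix, with illustrative names:

    // mutate the subdocument, then flag the path so mongoose persists it
    doc.items[0].status = 'done';
    doc.markModified('items');
    doc.save(function (err) {
      if (err) throw err;
    });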
[13:14:25] <deepy> I mean, does it need to be unique per the server or just per collection?
[13:15:01] <phrearch> ow just per collection
[13:15:18] <deepy> pheew
[13:15:51] <phrearch> im a bit new to mongodb and mongoose. still trying to figure out how this all works :)
[13:47:54] <jsteunou> hi there
[13:48:14] <jsteunou> I'm in trouble with forEach / map on a cursor
[13:48:22] <jsteunou> can someone help me?
[13:48:41] <jsteunou> I'm trying to write a migration script
[13:48:53] <jsteunou> that does update documents
[13:49:14] <ron> why do people
[13:49:17] <ron> have the tendency
[13:49:21] <ron> to ask a simple question
[13:49:24] <ron> in multiple lines?
[13:49:27] <jsteunou> so I write a function I call in a find().forEach
[13:49:32] <Zelest> because
[13:49:34] <Zelest> it's
[13:49:34] <Zelest> cool
[13:49:36] <Zelest> duuh!
[13:49:42] <jsteunou> (because i'm not fluent in english)
[13:49:53] <jsteunou> (so I search for my words to write correct sentences)
[13:50:28] <ron> jsteunou: still... take the time, write what you want and only then click enter. nobody will run away (more than they would anyways) - I promise.
[13:50:35] <jsteunou> I'm trying
[13:50:47] <jsteunou> sum it up
[13:51:05] <Nodex> dont listen to ron!
[13:51:25] <Nodex> face
[13:51:26] <Nodex> palm
[13:51:31] <Nodex> s
[13:51:37] <jsteunou> So I wrote an update / migration function, called in a find().forEach, but I realise it goes twice into this function
[13:51:40] <Zelest> it's remonvv fault either way..
[13:51:44] <Zelest> always is
[13:51:57] <jsteunou> Can it be that an update in a forEach puts the value back into the cursor?
[13:52:02] <Nodex> pastebin your code
[13:53:21] <Nodex> anyone selling some zero's?
[13:53:29] <Nodex> I have some ones I can trade for them
[13:54:16] <Zelest> Nodex, I got engaged at 010101 and got married 101010 :P
[13:54:18] <Zelest> true story
[13:54:40] <Zelest> not that it lasted very long after that though.. \o/
[13:54:47] <Zelest> </blog>
[13:55:15] <jsteunou> http://pastebin.com/Rn3hEvhE
[13:55:22] <Nodex> that's a pretty geeky thing to do Zelest LOL
[13:55:37] <Zelest> Nodex, my wife had a BSD beastie tattoo as well ;)
[13:55:53] <Zelest> Nodex, shame she grew up to become an asshole :S
[13:55:58] <Zelest> oh well, her loss. :)
[13:55:58] <Nodex> :/
[13:56:07] <Zelest> I've found a new lady who's awesome so..
[13:56:16] <Nodex> jsteunou : and you run that on the mongo SHell?
[13:56:41] <jsteunou> this is saved as a migration.js and run
[13:56:54] <jsteunou> I skipped the part where db vars are defined
[13:57:05] <phrearch> cant find any error in this update syntax, http://paste.kde.org/768020/
[13:57:10] <phrearch> any help? bit stuck on this
[13:57:23] <jsteunou> I just saw there is a cursor.snapshot() method
[13:57:36] <jsteunou> That might answer my issue
[13:57:42] <Nodex> {$set: {deadline: migration_update(document.deadline)} should that not be $set : {"document.deadline".....
[13:58:02] <Nodex> or is it not embedded ?
[14:00:19] <jsteunou> it is update({_id: doc._id}, {$set: {fieldName: value}})
[14:00:33] <jsteunou> if you read carefully
[14:01:03] <jsteunou> and I think it s the correct syntax
[14:01:45] <Nodex> I know what the syntax is
[14:01:54] <Nodex> I was asking if the field was EMBEDDED
[14:02:16] <jsteunou> no its not
[14:02:25] <jsteunou> it s a plain document
[14:02:52] <jsteunou> I do not have the issue with find().snapshot().forEach(...)
[14:03:31] <jsteunou> It seems that an update inside the find().forEach(...) does put the document back into the cursor
[14:03:59] <jsteunou> not so weird, but I can't find any doc about it
[14:06:23] <jsteunou> http://docs.mongodb.org/manual/reference/method/cursor.snapshot/ tells "Queries with results of less than 1 megabyte are effectively implicitly snapshotted. "
[14:06:33] <jsteunou> maybe that's why I did not have this behavior before
[14:06:40] <jsteunou> also I'm changing a date to a string
[14:07:09] <jsteunou> so maybe the hash / signature of the object changes
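jsteunou's diagnosis matches documented behavior of the era: an update that grows a document can move it on disk, so an unsnapshotted cursor may return it a second time. His fix, sketched here with migration_update standing in for the conversion function from his pastebin:

    db.mycoll.find().snapshot().forEach(function (doc) {
      // snapshot() guarantees each document is returned at most once,
      // even if the update relocates it
      db.mycoll.update(
        { _id: doc._id },
        { $set: { deadline: migration_update(doc.deadline) } }
      );
    });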
[14:22:50] <vargadanis> good day everyone!
[14:23:01] <vargadanis> or could it be better?
[14:24:51] <Nodex> best to just ask the question :)
[14:29:10] <vargadanis> Nodex, ohh I ain't got none :D
[14:30:32] <Nodex> ah - just being friendly!
[14:39:25] <bobinator60> when I $match on two fields in a list of embedded documents, it seems that the document will match if each match element is found in different embedded documents. here's a gist in pseudo-json of what I'm talking about: https://gist.github.com/rbpasker/0f136a052c6fa983457b.
[14:39:47] <bobinator60> both documents in my example will match, but I only want #1
[14:45:34] <bobinator60> anyone?
[14:47:09] <whaley> bobinator60: that looks like expected behavior.
[14:47:45] <bobinator60> whaley: yes, i'm wondering if there's a way to get the behavior where only document #1 is returned
[14:48:33] <bobinator60> imagine 'a' and 'c' are 'age' and 'gender'
[14:50:17] <Nodex> try $elemMatch
[14:51:04] <pwelch> morning everyone. Is it normal to see a high load average on an idle mongodb instance?
[14:51:05] <Nodex> or you can match the entire array if you want foo : [{a:b},{c:d}]
[14:51:11] <pwelch> running on a m1.large ec2 instance store
[14:53:28] <bobinator60> Nodex: thanks!
[14:53:31] <bobinator60> i see.
[14:54:34] <bobinator60> Nodex: that's exactly what I needed.
[15:02:37] <bobinator60> Nodex: it seems that $elemMatch is only for find(), not for aggregations :(
[15:02:52] <bobinator60> Pipeline::run(): unrecognized pipeline op "$elemMatch
[15:04:24] <Nodex> sorry, I didn't realise you were aggregating
[15:04:30] <bobinator60> no worries
[15:04:36] <bobinator60> i wasn't clear
[15:04:51] <bobinator60> but it will clean up some other non-aggregation queries
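One note on the error above: $elemMatch is not a pipeline stage, which is what "unrecognized pipeline op" means, but since $match accepts ordinary query syntax it should be usable inside a $match stage. A sketch with invented names ('b' as the array field, 'a' and 'c' as the embedded fields):

    // find(): both conditions must hold in the same array element
    db.people.find({ b: { $elemMatch: { a: 1, c: 1 } } })

    // aggregation: nest $elemMatch inside $match rather than using it
    // as a stage of its own
    db.people.aggregate([
      { $match: { b: { $elemMatch: { a: 1, c: 1 } } } }
    ])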
[15:06:39] <Antiarc> Hey folks. What's the prescribed method to recover a RS member from a FATAL state? Just do a full rebuild?
[15:08:57] <burley-sf> If I fsync-lock a MongoDB instance, can I safely relocate the journal directory online?
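The channel never answered burley-sf, and nothing here settles whether relocating the journal directory under lock is safe, but for reference these are the shell commands involved:

    db.fsyncLock()    // flushes writes to disk and blocks new ones;
                      // shorthand for db.runCommand({ fsync: 1, lock: true })
    // ... filesystem-level maintenance ...
    db.fsyncUnlock()  // release the lock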
[16:01:13] <deepy> How do I actually find out how long a query took?
[16:02:14] <deepy> db.getProfilingStatus() says { "was" : 2, "slowms" : 100 } but I'm not seeing any output
[16:05:18] <doxuanhuy> it's just settings value
[16:06:47] <doxuanhuy> output is located at system.profile table
[16:06:55] <doxuanhuy> collection i mean
[16:07:06] <doxuanhuy> db.system.profile.find().limit(10).sort( { ts : -1 } ).pretty()
[16:08:36] <deepy> cheers
[16:08:39] <deepy> millis: 0
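A likely explanation for the empty result and the millis: 0 reading: profile documents land in the system.profile collection of the database where profiling was enabled, and at level 2 every operation is recorded regardless of slowms. A sketch, with a hypothetical collection name:

    db.setProfilingLevel(2)                 // 0 = off, 1 = slow ops only, 2 = all ops
    db.myQueries.find({ x: 1 }).toArray()   // run something to profile
    db.system.profile.find().limit(1).sort({ ts: -1 }).pretty()

    // for a single query, explain() of this era reports "millis" directly
    db.myQueries.find({ x: 1 }).explain()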
[16:44:51] <astropriate_> Is hadoop-mongodb supported?
[16:48:19] <rhalff> any of you use mongoose ?
[16:48:33] <rhalff> for nodejs
[16:51:57] <astropriate_> rhalff, unfortunately yes :(
[16:55:20] <rhalff> I've tried this: require('mongoose').connect( 'mongodb://127.0.0.1/gedicht_nu' ); That's the whole script, and it hangs.
[16:55:53] <rhalff> but nodejs folks say that's the nature of node
[16:56:00] <idletom> struggling with this aggregation, i need to filter out results where id is null : https://gist.github.com/willhunting/ca18e86789ed2a45d778
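The gist has since disappeared, but filtering out null ids in an aggregation is normally just an early $match stage; the field name below is assumed from the question:

    db.items.aggregate([
      { $match: { _id: { $ne: null } } },
      // ...rest of the pipeline...
    ])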
[17:44:34] <b0c1> hi
[17:46:03] <b0c1> Can I calculate an update value with db update? example: db.collection.update( { field: value1 }, { $set: { field1: somefield + value2 } } ); ?
[17:47:21] <kali> b0c1: nope
[17:47:28] <b0c1> great... :|
[17:47:35] <b0c1> thnx
[17:48:26] <b0c1> so I need to get the document, calculate the value, and set...
[17:48:45] <kali> that's the idea
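Spelled out, the read-modify-write pattern kali describes (names taken from b0c1's example, so treat them as placeholders):

    var doc = db.collection.findOne({ field: value1 });
    db.collection.update(
      { _id: doc._id },
      { $set: { field1: doc.somefield + value2 } }
    );

    // if field1 itself is just being increased by a numeric constant,
    // $inc does it atomically in a single round trip:
    db.collection.update({ field: value1 }, { $inc: { field1: value2 } });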
[17:50:11] <starfly> SQL envy
[17:51:35] <kali> grmbl... trying to tame feedly, and it suggests to buy "SQL for Dummies"
[17:51:40] <b0c1> not a bad idea except mongo doesn't support transactions..
[17:51:40] <kali> two insults for the price of one
[17:51:59] <starfly> LOL
[19:38:03] <Kzim> hi
[19:38:26] <ron> no
[19:38:38] <Kzim> anyone know how to stop the queryModified: spam in the logs please?
[19:38:45] <Kzim> i'm in 2.2.0
[19:40:35] <Kzim> back sry
[20:32:24] <Glace> Hey guys, I seem to have constant heavy write iops after sharding using the hashed shard key. The balancer is turned off.. Any ideas what could be causing this...
[20:33:10] <Glace> It's hitting the maximum of my PIOPS in a constant fashion (2000).. we did not have this behavior before sharding.. I can actually handle fewer requests right now.. but the shard masters are being hit
[20:33:26] <Glace> Im using 2.4.3
[20:33:32] <Glace> sorry 2.4.2
[20:50:52] <n06> when using the database profiler, does the timeAcquiringMicros add time to the timeLockedMicros?
[20:51:11] <Glace> looking at it
[20:51:26] <n06> i.e. if timeLockedMicros = 1000 and timeAcquiringMicros = 500, is the actual lock time 500?
[20:53:58] <n06> the docs don't seem too specific on the matter, it's a bit vaguely worded, so I was hoping someone could help me
[21:02:50] <Guest87211> i'm using mongo with mongoid on a rails project, I just set up a standard 1-N embedded documents relationship. When i call <rails_model>.update_attributes(<new_data>)... it doesn't update the existing attributes on the document, instead it creates a new embedded document that contains the new attributes.. wtf. anyone encounter this?
[21:03:12] <Guest87211> oh, and that's when I edit a child document
[21:21:16] <Guest87211> basically it comes down to does anyone know what would cause mongoid to generate this query: update={"$set"=>{".brand"=>"lol"}} and yet in my rails model AND new data that i'm attempting to set "brand" is "brand" and not ".brand"
[21:46:36] <federated_life> I'm getting a bunch of these error messages in mongosniff ( *** Invalid IP header length: 0 bytes ) anyone seen something similar?
[22:51:45] <nemothekid> If I have safe mode turned off, and issue a remove({"_id":x}) then an insert({"_id":x}), is there any reason why the insert shouldn't succeed if the remove succeeded?
[23:20:05] <nfx> hello, I was wondering if someone here could help me with disabling the mongo balancer mentioned here: http://docs.mongodb.org/manual/tutorial/manage-sharded-cluster-balancer/#sharding-balancing-disable-temporally
[23:21:12] <nfx> I'm able to execute the shell command successfully along with the underlying command it abstracts: "db.settings.update( { _id: "balancer" }, { $set : { stopped: true } } , true );"
[23:22:14] <nfx> I need to disable the balancer in code using pymongo. Right now I have something like this:
[23:22:18] <nfx> mongo_conn = pymongo.Connection('localhost', '27017') mongo_config_db = pymongo.database.Database(mongo_conn, 'config') mongo_config_db.command({'_id': "balancer"}, {'$set': {stopped: true}}, true)
[23:23:39] <nfx> but it throws an exception: "name 'true' is not defined"
[23:23:45] <nfx> any ideas?
[23:40:01] <nemothekid> nfx: change true to True
[23:40:17] <nfx> trying now...
[23:47:11] <nfx> now it raises the exception: "command {'_id': 'balancer'} failed: no such cmd: _id"
[23:47:26] <nfx> you were right about changing true to True though
[23:48:33] <nemothekid> you are using `mongo_config_db.command` when you should be using `mongo_config_db.update`