PMXBOT Log file Viewer

#mongodb logs for Friday the 14th of June, 2013

[03:31:07] <matheus> hi, I create a collection with capped:true but in my python app this error is happening : "pymongo.errors.OperationFailure: quota exceeded"
[03:37:35] <crudson1> matheus: your mongod is running with --quota and maybe --quotaFiles - it's trying to allocate more than allowed. this is per db, not at the collection level.
[03:42:04] <matheus> crudson1: I'm sorry to ask that, but do you have some suggestion to me?
[03:49:49] <crudson1> I haven't used quotas, but guessing some reasons: 1) you have either too many/too large objects in your capped collection for the quota 2) other objects in other collections that are using space 3) over time have used up the quota of data files that won't get reclaimed unless you do db.repairDatabase() (but that may fail if it can't allocate more, not sure) or drop it first and recreate DB
[03:50:57] <crudson1> I just did a test and hit the quota limit with the same message from the mongo console.
[03:51:09] <matheus> crudson1: thank you
[03:51:22] <matheus> crudson1: I discovered the problem
[03:51:28] <crudson1> so 1: check what your limits are 2: adjust them if you can 3: change your database use if you can't
[03:52:22] <matheus> crudson1: the library is creating a new collection without my asking, and this new collection is filling up the whole quota
[03:56:50] <crudson1> matheus: the quota is per database. you've got sufficiently large data currently in there such that new files can't be allocated. you have to recover space, use a new db or nuke the current before you can insert more.
[03:57:06] <crudson1> matheus: do db.stats() to see how much it is using
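crudson1's advice boils down to comparing what db.stats() reports against the per-database quota. A minimal Python sketch of that comparison, with made-up numbers (`quota_files` and `file_size` stand in for your actual mongod settings; only the `fileSize` field name matches real db.stats() output):

```python
# Hedged sketch of the quota check: compare db.stats() output against
# the --quotaFiles limit. All values here are illustrative, not real.
quota_files = 2                          # hypothetical mongod --quotaFiles value
file_size = 512 * 1024 * 1024            # hypothetical data-file size in bytes
stats = {"fileSize": 900 * 1024 * 1024}  # hypothetical db.stats() output
quota_bytes = quota_files * file_size
over_quota = stats["fileSize"] >= quota_bytes
print(over_quota)
```

If `over_quota` comes back true, the options are the ones listed above: recover space, raise the limit, or move to a fresh database.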
[03:57:38] <matheus> crudson1: ok. I already know where is the problem. Thank you very much
[04:11:56] <matheus> exit
[08:28:40] <Maro> Hi guys, would anyone be able to advise me on indexing for this query? http://pastebin.com/5JDeWYP3 It's generated by mongo-hadoop, splitting on ipAddress and querying on 'when'. I have compound indexes on both "ipAddress,when" and "when,ipAddress" but the queries take a huge amount of time.
[08:40:40] <crudson1> Maro: what is the output of .explain() for that find?
[08:44:45] <Maro> crudson1, I can't seem to be able to generate an explain for the query. If I stick that json in a db.collection.find().explain() it says the "$query" operator is invalid
[08:45:07] <Maro> I think the way the query is being run by the mongo java driver is a bit different to a usual find...
[08:47:28] <crudson1> Maro: yeah it looks too nested - but without knowing too much about the frameworks I wouldn't be able to comment - can you run mongod with "--profile 2" or "--profile 1 --slowms 1" perhaps, which will show the query as issued to mongo
[08:47:55] <Maro> Okay, will try that. Just generating a new index atm so gotta wait. Thanks :)
[08:52:13] <crudson1> should be able to setProfilingLevel() from client, but I see java driver doesn't support that https://jira.mongodb.org/browse/JAVA-503
[08:52:33] <crudson1> so will have to do on the daemon process
[08:52:46] <Maro> I'm on the shell anyway, only using the java client for the map reduce application
[08:53:36] <crudson1> profile is system wide I think - so can set from shell and will apply to queries from a java client (reporting info as I search)
[08:53:57] <Maro> okay. Just waiting for this index still ^_^
[08:55:08] <crudson1> I am running a long process myself, so able to research this! I have certainly set it on mongod before many times for debugging, but never from a client; didn't know before that was possible.
[08:55:51] <agend> hi - I have a strange problem: query: db.clicks.findOne({'_id.k': 'd', '_id.u': '1', '_id.d': ISODate('2013-01-01 0:0')}) returns document, and query: db.clicks.findOne({'_id': {k: 'd', u: '1', d: ISODate('2013-01-01 0:0')}}) can't find anything, And the documents _id is { "u" : "1", "k" : "d", "d" : ISODate("2013-01-01T00:00:00Z") } - what is going on?
[08:57:00] <Maro> should it not be '_id' : ObjectId{<blah>}?
[08:57:06] <Maro> (could be wrong)
[08:57:19] <agend> let me check
[08:57:23] <Maro> ObjectId({blah})
[08:58:27] <agend> > db.clicks.findOne({'_id': ObjectId({k: 'd', u: '1', d: ISODate('2013-01-01 0:0')}) })
[08:58:27] <agend> Fri Jun 14 08:57:55.355 JavaScript execution failed: Error: invalid object id: length
[08:58:42] <agend> ?
[08:58:43] <crudson1> no, he is using a composite key, not ObjectID, which is fine.
[09:00:08] <agend> and the strange thing is that: db.clicks.findOne({'_id': {k: 'h', u: '1', d: ISODate('2013-01-01 0:0')}}) works
[09:00:50] <agend> i have clicks aggregated by hours, days, weeks in one collection
[09:01:46] <crudson1> agend: the ordering matters
[09:02:23] <agend> i dont change the order - just kind of document from h (hour) to d (day)
[09:02:51] <agend> and when I do straight find for days - i can see the doc
[09:03:15] <crudson1> http://pastie.org/8041951
[09:04:49] <agend> the difference is that the hour docs were generated by me, while the day docs were made by mapreduce - does mongo add something to the keys there that I can't see???
[09:09:16] <agend> crudson: u were right - it is the ordering thing
[09:09:38] <agend> crudson1: thanks
[09:09:42] <crudson1> agend: yeah - you've got to be careful with querying composite keys like that
[09:09:46] <crudson1> no probs
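The ordering pitfall agend hit can be reproduced in plain Python: dicts with the same keys compare equal regardless of insertion order, but MongoDB matches a composite `_id` field-by-field in BSON order, which the ordered item view mimics:

```python
# Python dict equality ignores insertion order; BSON document equality
# (as used when matching a whole composite _id) does not.
doc_a = {"k": "d", "u": "1"}  # order written by the app
doc_b = {"u": "1", "k": "d"}  # order produced elsewhere, e.g. mapreduce
print(doc_a == doc_b)                              # equal in Python
print(list(doc_a.items()) == list(doc_b.items()))  # not equal field-by-field
```

This is why the dot-notation query (`'_id.k'`, `'_id.u'`, ...) matched while the whole-document `{'_id': {...}}` form did not.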
[09:09:57] <Maro> crudson1, got profiling set, what's the best way to analyse this to see what's up with the query?
[09:10:56] <crudson1> Maro: run the bit of code that is not performing well, and look at the queries that mongod is reporting that it's executing, either mongod stdout or tailing log file if you have it redirected
[09:11:36] <crudson1> Maro: also, always good to run mongod with --notablescan, which will raise an error when you try and do a non-indexed query
[09:12:28] <crudson1> the latter can be a pain if you have a lot of small queries or ones that don't benefit from an index, but it is helpful when you want to be sure of indexes
[09:13:17] <agend> crudson1: is it possible to reorganize the order of _id keys for all the day docs? what would that look like?
[09:18:25] <crudson1> agend: well using dot notation removes you from that worry, but you will need a different composite index on each _id attribute otherwise it will use a BasicCursor. You can: 1) get your app in order and only insert with correctly ordered attributes 2) use dot notation 3) restructure your documents
[09:18:56] <agend> i'm trying the option 3
[09:19:20] <crudson1> immediately: 4) iterate over all your existing documents and fix the _ids
[09:20:09] <kali> you can't change the _id in place
[09:20:22] <kali> it will generate new documents
[09:20:28] <kali> so it can be a bit tricky
[09:20:55] <Maro> crudson1, thanks, tailing :)
[09:21:18] <agend> yep, i've just noticed i can't - what can I do then - remove old docs and create new ones?
[09:21:45] <crudson1> please note that hash ordering is not guaranteed everywhere. e.g. the java client uses LinkedHashMap, ruby 1.9 hashes are ordered, 1.8's are not. Just safest to never assume it.
[09:23:03] <crudson1> kali: good point, delete existing and reinsert
[09:26:09] <crudson1> A general point: personally, I normally refactor when I see my _ids are becoming too complex, as tracking down issues can be a headache. Being able to pull out documents by a simple id has its benefits.
[09:46:39] <agend> so I just refactored my query for now, thanks guys
[09:52:08] <crudson1> cool
[09:56:06] <Maro> crudson1, doesn't appear that those map/reduce queries are being seen by the profiler :S
[09:56:18] <Maro> run MR job and nothing shows up in profile in the namespace. Run a regular query, shows up fine
[10:05:33] <crudson1> Maro: the source of internal mongodb map reduce functions will show up in the log. are you issuing more queries from a mapper or reducer (or finalizer)? Doing so has been restricted in recent versions, i.e. the db object is no longer available due to distributed support. Or is it the original query passed to mapReduce that is not showing?
[10:05:49] <crudson1> 3am so my brain is kind of fried, I hope that makes sense
[10:06:48] <Maro> I'm not using a map reduce function, the mongo-hadoop connector just sends db commands and the map reduce elements are done by hadoop....
[10:07:07] <Maro> sec, let me paste a full db.currentOp record
[10:09:01] <crudson1> Maro: what does db.getProfilingStatus() and db.getProfilingLevel() show?
[10:09:45] <Maro> db.getProfilingStatus()
[10:09:45] <Maro> { "was" : 2, "slowms" : 100 }
[10:09:59] <Maro> 2 for level
[10:14:25] <crudson1> if you're not using internal mongodb mapreduce then I would assume that just normal mongodb queries apply, and as such should show up in the log...
[10:14:33] <Maro> I'd think so too! :D
[10:15:00] <Maro> db.system.profile.find({ns:'<database>.metrics'}).pretty()
[10:15:08] <Maro> only shows up the regular queries I've run
[10:15:45] <Maro> I'm gonna debug mongo-hadoop and see exactly what command it's sending
[10:16:53] <himsin> hi! how does referenced relation in mongoid works?
[10:17:18] <himsin> I meant has_many and belongs_to
[10:20:48] <crudson1> himsin: consult the mongoid docs, has_many and belongs_to will auto-create <attribute>_id attributes on the dependent side of entities. They are just ObjectIDs that are dereferenced by the framework, if my memory serves me right.
[10:22:16] <himsin> I mean if there are two tables User and Comment and I execute user.comments does it result in two separate queries since mongo does not support join operations
[10:22:24] <crudson1> yes
[10:24:23] <Maro> So it does a regular $query on the date using find, but then "addSpecial" $min and $max on the ipAddress...no idea what addSpecial is :|
[10:25:02] <Maro> http://docs.mongodb.org/manual/reference/operator/query-modifier/
[10:25:03] <Maro> ^_^
[10:30:14] <crudson1> use embeds_ if you prefer
[10:30:23] <himsin> crudson1: ok, thanks!
[10:39:43] <Garo_> after deleting a bunch of documents with .remove({query}), will newly added documents be inserted into the gaps where the deleted documents were?
[10:42:49] <Garo_> nvm. I found a faq entry which says that mongodb will reuse this space: "MongoDB maintains lists of empty records in data files when deleting documents and collections. MongoDB can reuse this space, but will never return this space to the operating system." source: http://docs.mongodb.org/manual/faq/storage/#faq-disk-size
[10:47:15] <pwned> I installed mongodb-10gen on ubuntu using the 10gen repository into a virtual machine managed by vagrant. When I enter db.insert({"x": "ıııı"}) I get ???? in mongo
[10:47:48] <pwned> is that how mongodb is supposed to work?
[10:47:52] <Garo_> pwned: you need to specify the collection. try this: db.test.insert({"foo" : "bar"})
[10:48:04] <Nodex> are the chars utf-8 ?
[10:48:10] <pwned> Garo_: I forgot to type it in here on irc
[10:48:17] <pwned> Nodex: yes
[10:48:42] <pwned> outside of virtual machine I install mongodb and it works fine
[10:49:02] <pwned> something weird happened there
[10:49:14] <Nodex> I would say it's your VM then
[10:49:49] <pwned> I will see if it is about the locales
[10:49:51] <tinix> can anyone here assist with mongoengine for python?
[11:00:40] <stenz> hi, is there any way to count keys in a nested hash? e.g. {_id:1, somehash:{a lot of keys:values}}, i need to get only count(somehash.keys) on the client
[11:03:07] <Maro> Hey, can anyone explain this result? I'm trying to search for records which are between two dates and am having oddness, I did this query and am a bit perplexed:
[11:03:13] <Maro> db.metrics.findOne({'when':{'$gt':ISODate('2013-01-03')},'when':{'$lt':ISODate('2013-01-02')}}).when
[11:03:13] <Maro> ISODate("2013-01-01T23:59:56Z")
[11:03:39] <Maro> should the results not be results which are BOTH gt and lt, which should be no results
[11:05:53] <Maro> if I stick them in $and then its correctly null, but I thought comma separated queries were the equivalent of $and?
[11:15:07] <kali> Maro: try db.metrics.findOne({'when':{'$gt':ISODate('2013-01-03'), '$lt':ISODate('2013-01-02')}}).when
[11:15:34] <Maro> yeah that works :|
[11:15:54] <kali> Maro: usual hash semantics is... only one value for one given key :)
[11:15:59] <Maro> Yeah that makes sense :X
[11:16:00] <Maro> damnit
[11:23:35] <Maro> damnit#
[11:23:42] <Maro> oops
[11:42:14] <hyperboreean> hey, any chance I can do a find with a sort by tsUpdated in the following document structure: {'_id': '<my_unique_id>', 'orders': [{'value': 1, 'tsUpdated': '3 days ago'}, {'value':2, 'tsUpdated': 'now'}]}
[11:43:10] <hyperboreean> I'm interested in those documents that have a recent tsUpdated
[11:44:40] <Nodex> sort({"orders.tsUpdated":1});
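A client-side sketch of what Nodex's sort achieves, on plain Python data rather than a live collection (note, as an aside, that when MongoDB sorts on an array field it is my understanding that it uses the lowest element for ascending and the highest for descending order; verify against the docs for your version):

```python
# Order documents by their most recent nested tsUpdated, newest first.
# Timestamps are simplified to integers for illustration.
docs = [
    {"_id": "a", "orders": [{"tsUpdated": 1}, {"tsUpdated": 5}]},
    {"_id": "b", "orders": [{"tsUpdated": 7}]},
]
most_recent_first = sorted(
    docs,
    key=lambda d: max(o["tsUpdated"] for o in d["orders"]),
    reverse=True,
)
print([d["_id"] for d in most_recent_first])
```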
[13:19:39] <Metzen> Hi, my mongos process is outputting a lot of logs like: "[Balancer] SyncClusterConnection connecting to [mydomaingoeshere.com:27019]". How can I suppress this?
[13:20:06] <Metzen> I've already tried "quiet=true", but it isn't working
[13:41:08] <grahamhar> is it cool to ask a pymongo question in here?
[13:41:19] <astro73|roam> i'm sure i have a few of those
[13:42:00] <grahamhar> trying to do a datetime query in the system.profile collection
[13:42:12] <grahamhar> want to get all slow queries in last 5 minutes
[13:42:59] <grahamhar> count = node_connection[db].system.profile.find({"ts": {"$gt": delta}}).count()
[13:43:04] <grahamhar> delta is:
[13:43:17] <grahamhar> delta = datetime.utcnow() - timedelta(minutes=5)
[13:43:27] <astro73|roam> sounds right
[13:43:47] <grahamhar> yup and worked local on macosx
[13:43:54] <astro73|roam> i don't know enough about the system.profile
[13:43:56] <grahamhar> on linux and solaris I am getting
[13:43:57] <grahamhar> pymongo.errors.OperationFailure: command SON([('count', u'system.profile'), ('fields', None), ('query', {'ts': {'$gt': datetime.datetime(2013, 6, 14, 13, 22, 55, 203467)}})]) failed: 10320 BSONElement: bad type -122
[13:44:15] <gimpy7489> I'm having a weird issue with PyMongo. It connects but then throws an exception which says 'ok'. What gives? http://pastie.org/private/pthermzgjyg7boh4lhnxma
[13:44:22] <astro73|roam> mongo server and client versions?
[13:44:50] <grahamhar> pymongo 2.5.2 and server 2.2.1
[13:58:28] <Derick> leifw: afaik, text search has been done in a similar way
[13:58:35] <Derick> but it's not something you can easily add to yourself
[13:59:03] <leifw> ok
[14:29:28] <Maro> Does anyone have any experience using $max and $min with _addSpecial, when combining with a query in find()? The indexing seems to be all over the place,...I can't get any efficient queries :/
[14:39:11] <gimpy7489> Using PyMongo it throws the exception 'ok' on `pymongo.MongoClient(mongo_host)` but the mongodb logs shows it did connect. Any clue what may be wrong?
[14:41:55] <gimpy7489> system.profile doesn't show anything at all from that client....this is confusing. http://pastie.org/private/pthermzgjyg7boh4lhnxma#
[14:42:38] <saml> hey
[14:42:43] <saml> how do I provide filesystem like interface?
[14:43:02] <astro73|roam> saml: you mean gridfs?
[14:43:11] <saml> _id is path like "2013/03/hola.jpg" , "a.jpg", "bleh/bleh"
[14:43:15] <saml> no in plain mongodb
[14:43:29] <saml> i want to query
[14:43:34] <saml> let me think
[14:44:16] <saml> let's say you have docs: "a.jpg", "a/a.jpg", "a/b.jpg"
[14:44:28] <saml> and I want to get ["a.jpg", "a/"]
[14:44:47] <saml> "a/a.jpg" and "a/b.jpg" becomes "a/"
[14:44:56] <saml> not sure what's good query
[14:45:22] <astro73|roam> doesn't mongo support prefix/range queries?
[14:45:32] <saml> you mean $regex ?
[14:45:34] <saml> match query?
[14:45:52] <astro73|roam> no, that would be expensive, unless it has a really, really good regex engine
[14:46:05] <astro73|roam> i mean just a straight up prefix query
[14:46:50] <saml> don't see useful from google
[14:46:57] <astro73|roam> i guess you could do a range query
[14:47:46] <saml> maybe i shouldn't provide filesystem-like interface
[14:48:00] <astro73|roam> 'a/' <= path < 'a0'
[14:48:21] <astro73|roam> the other option is to do it OOP-style, and have parent references
[14:48:42] <saml> how does parent help?
[14:49:03] <saml> {"_id": "a/a.jpg", "parent": "a/"}
[14:49:17] <astro73|roam> no, {"filename": "a.jpg", "parent": DBRef}
[14:49:17] <saml> db.coll.find({"parent": "a/"})
[14:49:44] <astro73|roam> if your API is OO, not POSIX, then you have to navigate the tree anyways
[14:49:52] <saml> oh actually build tree
[14:50:31] <saml> but i need fast db.coll.find({"_id": "some/image/url.jpg"})
[14:50:31] <saml> if i have hierarchy, can be slow to query
[14:50:33] <saml> unless i create another collection
[14:50:43] <saml> and sync the tree and flat urls
[14:50:46] <astro73|roam> or a computed key, yes
[14:50:57] <astro73|roam> so you're left with range queries
[14:51:08] <saml> i don't get range query
[14:51:15] <astro73|roam> $lte $gt
[14:51:21] <saml> 'a/' <= path < 'a0' what does that mean?
[14:51:38] <astro73|roam> if you sort your strings
[14:52:21] <astro73|roam> all the items in directory 'a' will have names that come after 'a/' but 'a0' ('0' comes immediately after '/')
[14:52:36] <astro73|roam> *but before 'a0'
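astro73|roam's range trick relies on `'0'` being the character immediately after `'/'` in ASCII; a plain-Python sketch of the same half-open range over sorted path strings:

```python
# '0' follows '/' in ASCII, so the half-open range 'a/' <= path < 'a0'
# selects exactly the keys under directory a/ (recursively).
assert chr(ord("/") + 1) == "0"

paths = ["a.jpg", "a/a.jpg", "a/b.jpg", "a0.jpg", "b/c.jpg"]
under_a = [p for p in paths if "a/" <= p < "a0"]
print(under_a)
```

In MongoDB this maps onto an index-friendly `{$gte: "a/", $lt: "a0"}` range on the path field.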
[14:52:52] <Nodex> why don't you just regex search your paths?
[14:53:22] <astro73|roam> performance? does mongo have index-aware regex?
[14:53:24] <saml> let's say current directory is "a/", give me all files and directories under a/ , not recursively
[14:53:31] <saml> on that first level only
[14:53:40] <Nodex> astro73|roam : yes of course it does
[14:54:12] <astro73|roam> sorry, that didn't sound like an "of course" question
[14:54:31] <astro73|roam> i mean, regexes will work, but i have no idea how they perform in mongo
[14:54:32] <Nodex> I don't know what that means, sorry
[14:54:46] <Nodex> as long as you prefix your query they run fine
[14:55:01] <saml> db.Images.find({_id: {$regex: /^2013\// }})
[14:55:11] <saml> this gives me all images stored under 2013/
[14:55:12] <Nodex> that will hit an index
[14:55:27] <saml> it'll return things like 2013/05/some.jpg
[14:55:34] <saml> when i really want is 2013/05/
[14:55:43] <saml> immediate child of 2013/
[14:55:47] <Nodex> 2013/05 is not a file
[14:55:55] <saml> i know
[14:56:05] <saml> but that's what i need to provide to users
[14:56:15] <saml> i think S3 browsers do something like that
[14:56:28] <Nodex> s3 probably stores the immediate parent
[14:56:57] <saml> so let's say i store immediate parent in each doc
[14:57:12] <Nodex> {"name":"foo.jpg","path":"f/b/foo.jpg","parent":"f/b/"}
[14:57:15] <kali> s3 has no actually hierarchy
[14:57:25] <kali> s/ly//
[14:57:29] <Nodex> you can imitate hierarchy though ;)
[14:57:39] <kali> yeah, but it's emulated at query time
[14:58:24] <saml> db.Images.find({path: {$regex: /^f\/b\/[^/]+$/}})
[14:58:38] <saml> then i do group by parent?
[14:59:01] <Nodex> why would you... if you want all the images in a directory just query on the parent
[14:59:32] <saml> i want all images and directories under parent
[14:59:43] <Nodex> and?
[14:59:54] <saml> i don't store directories as doc
[14:59:58] <saml> i only store images
[15:00:01] <saml> i mean, files
[15:00:07] <Nodex> bit of bad luck you're having then
[15:00:09] <saml> so directories are concept
[15:03:07] <jrbaldwin> can anyone help me with this issue? i don't know where to start looking for an answer... http://stackoverflow.com/questions/17111337/mongoddb-not-all-streaming-data-saving-to-collection-with-no-errors
[15:10:08] <Maro> jrbaldwin, have you looked at write concerns?
[15:10:33] <astro73|roam> also, what's the architecture of the backing mongo?
[15:11:22] <jrbaldwin> Maro i haven't looked into write concerns, haven't used those before – astro73|roam node running on same server as mongo (local machine at the moment)
[15:11:36] <jrbaldwin> will be deployed on linode later
[15:16:38] <jrbaldwin> Maro: write concern result: E11000 duplicate key error index: tweettest.tweets.$_id_ dup key: { : ObjectId('51bb3392e6f17b0000000001') }
[15:17:11] <jrbaldwin> that seems to be the issue, why wouldn't mongo auto gen a new key for each write?
[15:17:49] <Derick> jrbaldwin: mongodb doesn't modify your documents...
[15:17:56] <Derick> if it doesn't see an ObjectID, one will be generated
[15:22:48] <astro73|roam> the driver generates the object ids, right?
[15:24:37] <Derick> yes
[15:24:54] <Derick> but the server will too if the driver didn't (for some reason)
[15:25:36] <jrbaldwin> i'm not writing an ObjectID, but for some reason it is still writing with duplicate IDs..
[15:25:52] <jrbaldwin> er the server is
[15:26:01] <Maro> Check the driver docs?
[15:28:00] <Maro> http://mongoosejs.com/docs/guide.html#_id
[15:28:08] <Maro> Indicates there's an option to stop the driver generating the ids
[15:29:24] <jrbaldwin> Maro: awesome, that totally worked, thank you!
[15:29:44] <jrbaldwin> disabled mongoose id gen
[15:29:44] <Maro> no worries
[15:34:12] <saml> how come http interface 28017 returns text/plain, not application/json?
[16:02:08] <grahamhar> so my issue with system.profile was corruption - turned profiling off, dropped the collection, and turned it back on; all good now
[16:07:27] <jimrippon> hi guys - I have a problem. we have a sharded db across three replica sets (with a primary, secondary and arbiter in each) - one of the replica sets appears to have developed a problem whereby one of the RS members is not shown properly in the shard config - the primary host no longer has a port number if I do db.shards.find()
[16:08:16] <jimrippon> As a result I'm getting errors from clients such as error = "can't find shard for: 192.168.100.147:27017";
[16:08:42] <jimrippon> relevant db.shards.find() output: { "_id" : "shard0004", "host" : "shardprimary/192.168.100.147,192.168.100.148:27017" }
[16:09:04] <jimrippon> all other shards are fine, and have both data-container members listed with ip and port number
[16:11:50] <jimrippon> I'm not a mongo admin, so early days learning this - I don't know where to start, any pointers?
[16:18:48] <saml> http://docs.aws.amazon.com/AmazonS3/latest/dev/ListingKeysHierarchy.html so no way to do this in mongodb?
[16:19:09] <saml> probably need to patch mongodb
[16:19:23] <astro73|roam> not automatically
[16:19:28] <saml> or.. it doesn't make sense to support prefix query in mongodb actually
[16:19:37] <saml> it's just my usecase
[16:19:45] <astro73|roam> could do it with regexes or range queries, as we discussed before
[16:20:09] <saml> i don't think so
[16:20:20] <saml> it'll return the entire tree
[16:20:42] <astro73|roam> a smart regex won't
[16:21:00] <astro73|roam> or with richer data
[16:22:40] <saml> smart regex won't fix. but richer data will
[16:22:48] <saml> directories aren't stored
[16:22:55] <saml> i don't want to store directories
[16:23:03] <saml> so richer data means... another collection
[16:23:28] <saml> db.Images, db.Folders. and images will be {_id: "f/b/foo.jpg", dir: DBRef} referncing folders
[16:23:43] <saml> many images referencing one folder
[16:23:56] <saml> wait..
[16:24:08] <saml> that won't help still
[16:24:36] <astro73|roam> you could inline it if you wanted
[16:24:46] <astro73|roam> actually, inlining it would be better
[16:25:13] <astro73|roam> unless your operations often dictate needing stat() but not read() or visa-versa
[16:25:53] <jimrippon> how safe is it to manually update the configdb for a sharded replica set - could I just change it?
[16:54:12] <chris2390> hey, what would be the correct version of the c client to use with mongodb 2.4.4?
[18:13:32] <lingz> Is searching by an array of id's faster than searching by a top level property?
[18:22:54] <kali> if there is an index, it's about the same
[18:26:50] <gimpy7489> How do I update a subdocument in Pymongo? This example would wipe out "filesystems": http://pastie.org/8043562
[18:28:05] <Nodex> filesystems.mount_point
[18:29:57] <gimpy7489> Nodex: Thanks
[19:31:24] <petenerd> hi guys, i have a question. We have a replica set running on three servers in production. I need to move this replica set to 3 new servers. What's the best plan of attack? Replace 1 server at time or Add the 3 new servers to the replica set and then remove the 3 old servers?
[19:33:59] <petenerd> does anyone have any thoughts?
[20:56:21] <fluffypony> so two questions…firstly, what's the easiest way to grab the largest (or smallest) value for a particular field? the equivalent of "SELECT Value FROM Table ORDER BY Value DESC LIMIT 1". I'm using the aggregate framework, but my query seems…bulky
[20:58:07] <mediocretes> why not just find().sort().limit(1) ?
[20:58:16] <fluffypony> *sheepish*
[20:58:21] <fluffypony> well that seems a lot simpler
[20:58:37] <mediocretes> you brought a gun to a knife fight
[20:58:43] <fluffypony> clearly
[20:58:50] <fluffypony> http://pastebin.com/W9dyzkiy
[20:58:53] <fluffypony> that's how I was doing it
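mediocretes' suggestion, simulated on plain Python data; with pymongo the equivalent would be `collection.find().sort("value", -1).limit(1)` (assuming a field named `value`):

```python
# The SQL "SELECT Value FROM Table ORDER BY Value DESC LIMIT 1" pattern:
# sort descending, take one - no aggregation pipeline needed.
docs = [{"value": 3}, {"value": 9}, {"value": 1}]
top = sorted(docs, key=lambda d: d["value"], reverse=True)[:1]
print(top)
```

With an index on the sort field, MongoDB can answer the real sort/limit query without scanning the whole collection.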
[21:00:22] <fluffypony> ok so next question
[21:01:51] <fluffypony> let's say I want to aggregate over a set of records based on a subquery - in SQL I'd do this: SELECT AVG(Field) FROM Table WHERE Timestamp IN (SELECT Timestamp FROM Table LIMIT 30)
[21:03:03] <mediocretes> I'm the wrong guy to ask about the agg framework :)
[21:03:18] <fluffypony> heh
[21:03:30] <fluffypony> but thank you for the pointer above
[21:03:36] <fluffypony> much simpler
[21:03:37] <mediocretes> I wrote my own before the real one came out, and I'm too lazy to rewrite everything now
[21:24:37] <Richhh> pymongo 2.5.2 driver says compatible with Py 2.x where x >=4, got Py 2.7.3, installer fails, says Py 2.7 not found in registry, sup?
[21:33:50] <rhalff> Hi, one thing I don't understand, if there is a c++ lib available for mongodb, why is there also a javascript based module written for nodejs?
[21:34:10] <rhalff> Couldn't it just use the c++ libs?
[21:35:39] <rhalff> I notice there is a V8 header file in the scripting section, but to me it seems it's not complete, unfortunately I'm not that great at c++ :)
[21:55:30] <bjori> rhalff: I don't get your question... are you asking why we have a dedicated nodejs driver?
[21:56:36] <bjori> Richhh: sounds like a bug.. I'd file a bug report including the steps you took, and the output, to install the driver - and if you installed py2.7 without using a distro package, I'd mention how it was installed
[22:00:06] <rhalff> bjori, from what I understand it's pretty easy to create a c++ addon for node which just exposes the library to javascript. So the question is why not just use that instead of writing javascript doing the same thing as the c++ code.
[22:01:33] <bjori> rhalff: its pretty easy to do that for virtually any language
[22:01:39] <rhalff> bjori, I believe this is an intention to do that: https://github.com/mongodb/mongo-perf/blob/master/mongo-cxx-driver/src/mongo/scripting/v8_db.h although I could be totally wrong
[22:01:43] <bjori> doesn't mean its the Right Thing, or most portable way
[22:02:06] <rhalff> hm ok
[22:02:06] <bjori> many things are much easier to express in javascript than to build on top of a c++ interface, which makes doing pretty javascript things complicated
[22:02:39] <bjori> we don't have any plans of replacing our nodejs driver in the near future
[22:03:06] <bjori> we do however plan on drastically improving our c++ driver in the future :)
[22:03:40] <rhalff> bjori, ok It was just a question and a good answer :-)
[22:04:45] <astropirate> how can I create an empty array with the C++ driver
[22:05:09] <astropirate> When I use mongo::BSONArray() it creates an object {} instead of an array []
[22:09:59] <bjori> astropirate: (I don't know) are you sure? are you maybe just dumping your test example standalone? (the root element must be a document, not an array, in bson - so it could be changed automatically if it's the root element?)
[22:11:24] <astropirate> bjori, Yup, I'm sure - checked with the command-line client. And I am updating an existing document's array field
[22:12:03] <astropirate> so the document is { myField: [..........]} and i am using $set to empty that array so i have { myField: []} but instead it gives me {myField: {}}
[22:12:14] <bjori> that seems weird. I see there is also something called BSONArrayBuilder.. have you tried that?
[22:12:27] <astropirate> no, that I have not
[22:14:00] <bjori> I've honestly no clue, just grepping the source the BSONArrayBuilder seems to be used over BSONArray
[22:19:56] <astropirate> thank you bjori
[22:20:11] <astropirate> mongo::BSONArrayBuilder().arr() worked for me
[22:20:12] <astropirate> lol
[22:20:22] <astropirate> the C++ API is SOOO inconsistent
[22:23:07] <bjori> so I have heard =)
[22:23:54] <bjori> its really not a normal driver.. its used by the kernel, mongos and the shell...
[22:24:14] <bjori> which makes it "interesting" :P
[22:29:24] <leifw> if you make an empty BSONArray it looks like {} because in bson, arrays are implemented as objects with numeric keys, so [a, b, c] is really {"0":a, "1":b, "2":c} underneath
[22:29:43] <leifw> an empty array and an empty object are the same bits
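leifw's explanation can be sketched in Python: BSON serializes an array as a document whose keys are decimal string indices, so an empty array and an empty document have identical bodies (differing only in the element type byte):

```python
def as_bson_style_object(items):
    # BSON stores [a, b, c] as {"0": a, "1": b, "2": c} under the hood.
    return {str(i): v for i, v in enumerate(items)}

print(as_bson_style_object(["a", "b", "c"]))
print(as_bson_style_object([]) == {})  # empty array == empty object
```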
[22:32:51] <leifw> but yeah the right way to make an array is { BSONArrayBuilder b; b.append(x); ...; BSONArray a = b.arr(); }
[22:36:17] <leifw> if you want the array inside an object, then instead { BSONObjBuilder b; BSONArrayBuilder ab(b.subarrayStart("arrayName")); ab.append(x); ...; ab.doneFast(); BSONObj o = b.obj(); }
[22:43:53] <astropirate> thank you friends
[22:43:55] <astropirate> i'm off
[22:43:57] <astropirate> peace
[23:57:14] <jblack> It seems that mongoid does not validate embeds_many?