[03:31:07] <matheus> hi, I created a collection with capped:true, but in my Python app I'm getting this error: "pymongo.errors.OperationFailure: quota exceeded"
[03:37:35] <crudson1> matheus: your mongod is running with --quota and maybe --quotaFiles - it's trying to allocate more than allowed. this is per db, not at the collection level.
[03:42:04] <matheus> crudson1: I'm sorry to ask, but do you have any suggestions for me?
[03:49:49] <crudson1> I haven't used quotas, but guessing at some reasons: 1) the objects in your capped collection are too many/too large for the quota 2) other objects in other collections are using space 3) over time you've used up the quota of data files, which won't be reclaimed unless you do db.repairDatabase() (but that may fail if it can't allocate more, not sure) or drop and recreate the DB first
[03:50:57] <crudson1> I just did a test and hit the quota limit with the same message from the mongo console.
[03:51:22] <matheus> crudson1: I discovered the problem
[03:51:28] <crudson1> so 1: check what your limits are 2: adjust them if you can 3: change your database use if you can't
[03:52:22] <matheus> crudson1: the library is creating a new collection without my asking, and this new collection is filling up the whole quota
[03:56:50] <crudson1> matheus: the quota is per database. you've got sufficiently large data currently in there such that new files can't be allocated. you have to recover space, use a new db or nuke the current before you can insert more.
[03:57:06] <crudson1> matheus: do db.stats() to see how much it is using
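crudson1's point above - that the quota is per database and checked against total data-file size - can be sketched without a server. The numbers and the `quota_headroom` helper below are hypothetical; with `--quotaFiles N`, mongod roughly caps the database at N data files, and `db.stats()` reports the current `fileSize`:

```python
# Sketch (hypothetical numbers): estimating quota headroom from a
# db.stats()-style result. With --quotaFiles N, mongod limits the
# database to roughly N data files of file_size bytes each.

def quota_headroom(stats, quota_files, file_size=2 * 1024 ** 3):
    """Return bytes of quota remaining (negative means exceeded)."""
    allowed = quota_files * file_size
    return allowed - stats["fileSize"]

stats = {"dataSize": 1_500_000_000,     # as a db.stats() call might report
         "storageSize": 1_800_000_000,
         "fileSize": 2_200_000_000}

print(quota_headroom(stats, quota_files=2))  # positive: still room
print(quota_headroom(stats, quota_files=1))  # negative: "quota exceeded"
```

This is only an illustration of the arithmetic; the authoritative numbers come from `db.stats()` on your own deployment.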
[03:57:38] <matheus> crudson1: ok. I already know where is the problem. Thank you very much
[08:28:40] <Maro> Hi guys, would anyone be able to advise me on indexing for this query? http://pastebin.com/5JDeWYP3 It's generated by mongo-hadoop, splitting on ipAddress and querying on 'when'. I have compound indexes on both "ipAddress,when" and "when,ipAddress", but the queries take a huge amount of time.
[08:40:40] <crudson1> Maro: what is the output of .explain() for that find?
[08:44:45] <Maro> crudson1, I can't seem to be able to generate an explain for the query. If I stick that json in a db.collection.find().explain() it says the "$query" operator is invalid
[08:45:07] <Maro> I think the way the query is being run by the mongo java driver is a bit different to a usual find...
[08:47:28] <crudson1> Maro: yeah it looks too nested - but without knowing too much about the frameworks I wouldn't be able to comment - can you run mongod with "--profile 2" or "--profile 1 --slowms 1" perhaps, which will show the query as issued to mongo
[08:47:55] <Maro> Okay, will try that. Just generating a new index atm so gotta wait. Thanks :)
[08:52:13] <crudson1> should be able to setProfilingLevel() from client, but I see java driver doesn't support that https://jira.mongodb.org/browse/JAVA-503
[08:52:33] <crudson1> so will have to do on the daemon process
[08:52:46] <Maro> I'm on the shell anyway, only using the java client for the map reduce application
[08:53:36] <crudson1> profile is system wide I think - so can set from shell and will apply to queries from a java client (reporting info as I search)
[08:53:57] <Maro> okay. Just waiting for this index still ^_^
[08:55:08] <crudson1> I am running a long process myself, so able to research this! I have certainly set it on mongod before many times for debugging, but never from a client; didn't know before that was possible.
[08:55:51] <agend> hi - I have a strange problem: query: db.clicks.findOne({'_id.k': 'd', '_id.u': '1', '_id.d': ISODate('2013-01-01 0:0')}) returns document, and query: db.clicks.findOne({'_id': {k: 'd', u: '1', d: ISODate('2013-01-01 0:0')}}) can't find anything, And the documents _id is { "u" : "1", "k" : "d", "d" : ISODate("2013-01-01T00:00:00Z") } - what is going on?
[08:57:00] <Maro> should it not be '_id' : ObjectId{<blah>}?
[09:04:49] <agend> the difference is that the hour docs were generated by me, while the day docs were made by mapreduce - does mongo add something to the keys that I can't see???
[09:09:16] <agend> crudson: u were right - it is the ordering thing
[09:09:57] <Maro> crudson1, got profiling set, what's the best way to analyse this to see what's up with the query?
[09:10:56] <crudson1> Maro: run the bit of code that is not performing well, and look at the queries that mongod is reporting that it's executing, either mongod stdout or tailing log file if you have it redirected
[09:11:36] <crudson1> Maro: also, always good to run mongod with --notablescan, which will raise an error when you try and do a non-indexed query
[09:12:28] <crudson1> the latter can be a pain if you have a lot of small queries or ones that don't benefit from an index, but it is helpful when you want to be sure of indexes
[09:13:17] <agend> crudson1: is it possible to reorganize the order of the _id keys for all the day docs? what would that look like?
[09:18:25] <crudson1> agend: well using dot notation removes you from that worry, but you will need a different composite index on each _id attribute otherwise it will use a BasicCursor. You can: 1) get your app in order and only insert with correctly ordered attributes 2) use dot notation 3) restructure your documents
[09:21:18] <agend> yep, i've just noticed i can't - what can I do then - remove the old docs and create new ones?
[09:21:45] <crudson1> please note that hash ordering is not guaranteed everywhere: e.g. the java client uses LinkedHashMap, ruby 1.9 hashes are ordered, 1.8's are not. Safest to never assume it.
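agend's mismatch can be reproduced without a server. BSON compares embedded documents byte-for-byte, so key order matters for an exact `{'_id': {...}}` match, while dot notation compares each field independently. A minimal sketch (plain Python dicts standing in for BSON documents; `bson_style_match` and `dot_notation_match` are illustrative helpers, not driver APIs):

```python
# Sketch: why {'_id': {...}} matching is order-sensitive but dot
# notation is not. Python 3.7+ dicts preserve insertion order, which
# lets us mimic BSON's serialized key order.

stored = {"u": "1", "k": "d", "d": "2013-01-01"}   # order as written by mapreduce
queried = {"k": "d", "u": "1", "d": "2013-01-01"}  # order used in the query

def bson_style_match(a, b):
    # Exact-document match: serialized key order must agree, like BSON.
    return list(a.items()) == list(b.items())

def dot_notation_match(doc, criteria):
    # Dot-notation match: each field compared on its own, order irrelevant.
    return all(doc.get(k) == v for k, v in criteria.items())

print(bson_style_match(stored, queried))    # False: same fields, different order
print(dot_notation_match(stored, queried))  # True
```

This is why `findOne({'_id.k': ..., '_id.u': ..., '_id.d': ...})` finds the document while `findOne({'_id': {k: ..., u: ..., d: ...}})` does not.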
[09:23:03] <crudson1> kali: good point, delete existing and reinsert
[09:26:09] <crudson1> A general point: personally, I normally refactor when I see my _ids are becoming too complex, as tracking down issues can be a headache. Being able to pull out documents by a simple id has its benefits.
[09:46:39] <agend> so I just refactored my query for now, thanks guys
[09:56:06] <Maro> crudson1, doesn't appear that those map/reduce queries are being seen by the profiler :S
[09:56:18] <Maro> run MR job and nothing shows up in profile in the namespace. Run a regular query, shows up fine
[10:05:33] <crudson1> Maro: Internal mongodb map reduce functions source will show up in log, are you issuing more queries from a mapper or reducer (or finalizer)? Doing so has been restricted in recent versions, i.e. the db object is no longer available due to distributed support. Or is it the original query passed to mapReduce that is not showing?
[10:05:49] <crudson1> 3am so my brain is kind of fried, I hope that makes sense
[10:06:48] <Maro> I'm not using a map reduce function, the mongo-hadoop connector just sends db commands and the map reduce elements are done by hadoop....
[10:07:07] <Maro> sec, let me paste a full db.currentOp record
[10:09:01] <crudson1> Maro: what does db.getProfilingStatus() and db.getProfilingLevel() show?
[10:14:25] <crudson1> if you're not using internal mongodb mapreduce then I would assume that just normal mongodb queries apply, and as such should show up in the log...
[10:15:08] <Maro> only shows up the regular queries I've run
[10:15:45] <Maro> I'm gonna debug mongo-hadoop and see exactly what command it's sending
[10:16:53] <himsin> hi! how does referenced relation in mongoid works?
[10:17:18] <himsin> I meant has_many and belongs_to
[10:20:48] <crudson1> himsin: consult the mongoid docs; has_many and belongs_to will auto-create <attribute>_id attributes on the dependent side of the relation. They are just ObjectIds that are dereferenced by the framework, if my memory serves me right.
[10:22:16] <himsin> I mean if there are two tables User and Comment and I execute user.comments does it result in two separate queries since mongo does not support join operations
[10:24:23] <Maro> So it does a regular $query on the date using find, but then "addSpecial" $min and $max on the ipAddress...no idea what addSpecial is :|
[10:39:43] <Garo_> after deleting a bunch of documents with .remove({query}), will the new added documents be inserted into the gaps from where the deleted documents were?
[10:42:49] <Garo_> nvm. I found a faq entry which says that mongodb will reuse this space: "MongoDB maintains lists of empty records in data files when deleting documents and collections. MongoDB can reuse this space, but will never return this space to the operating system." source: http://docs.mongodb.org/manual/faq/storage/#faq-disk-size
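The free-list behaviour Garo_ quotes from the storage FAQ can be sketched as a toy allocator (this is an illustration of the idea only, not mongod's actual record allocator): deleted record slots are remembered and handed out to later inserts that fit, but the data file itself never shrinks.

```python
# Toy sketch of the storage-FAQ free-list idea: space from removed
# records is reused for new inserts, but never returned to the OS.

class DataFile:
    def __init__(self, size):
        self.size = size      # file size only ever grows
        self.free = []        # (offset, length) holes left by removes

    def remove(self, offset, length):
        self.free.append((offset, length))

    def insert(self, length):
        """Reuse the first hole big enough; otherwise grow the file."""
        for i, (off, ln) in enumerate(self.free):
            if ln >= length:
                del self.free[i]
                return off            # reused: file size unchanged
        off, self.size = self.size, self.size + length
        return off                    # appended: file grew

f = DataFile(size=100)
f.remove(40, 20)          # delete a 20-byte record at offset 40
print(f.insert(16))       # 40: the hole is reused
print(f.size)             # 100: no growth
```

Real record reuse is size-bucketed and more involved; the point is just that `remove()` creates reusable holes rather than shrinking the file.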
[10:47:15] <pwned> I installed mongodb-10gen on ubuntu using the 10gen repository into a virtual machine managed by vagrant. When I enter db.insert({"x": "ıııı"}) I get ???? in mongo
[10:47:48] <pwned> is that how mongodb is supposed to work?
[10:47:52] <Garo_> pwned: you need to specify the collection. try this: db.test.insert({"foo" : "bar"})
[10:49:49] <pwned> I will see if it is about the locales
[10:49:51] <tinix> can anyone here assist with mongoengine for python?
[11:00:40] <stenz> hi, is there any way to count keys in a nested hash? e.g. {_id:1, somehash:{a lot of keys:values}}, i only need count(somehash.keys) on the client
[11:03:07] <Maro> Hey, can anyone explain this result? I'm trying to search for records which are between two dates and am having oddness, I did this query and am a bit perplexed:
[11:42:14] <hyperboreean> hey, any chance I can do a find with a sort by tsUpdated in the following document structure: {'_id': '<my_unique_id>', 'orders': [{'value': 1, 'tsUpdated': '3 days ago'}, {'value':2, 'tsUpdated': 'now'}]}
[11:43:10] <hyperboreean> I'm interested in those documents that have a recent tsUpdated
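For hyperboreean's structure, one option is to rank documents by the most recent `tsUpdated` inside their `orders` array. The sketch below does this client-side over hypothetical data (the document shape is from the question; the dates are made up). Whether a server-side sort on `'orders.tsUpdated'` picks the greatest array element for a descending sort should be verified against your mongod version before relying on it.

```python
from datetime import datetime, timedelta

# Sketch (hypothetical data): rank documents by the newest tsUpdated
# in their 'orders' array, as a client-side fallback.

now = datetime(2013, 6, 15)
docs = [
    {"_id": "a", "orders": [{"value": 1, "tsUpdated": now - timedelta(days=3)},
                            {"value": 2, "tsUpdated": now}]},
    {"_id": "b", "orders": [{"value": 3, "tsUpdated": now - timedelta(days=1)}]},
]

def latest(doc):
    # The newest timestamp among this document's orders.
    return max(o["tsUpdated"] for o in doc["orders"])

recent_first = sorted(docs, key=latest, reverse=True)
print([d["_id"] for d in recent_first])  # 'a' first: it has an order from 'now'
```

Storing real datetimes rather than strings like '3 days ago' is what makes the comparison work, both here and in a server-side sort.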
[13:19:39] <Metzen> Hi, my mongos process is outputting a lot of logs like: "[Balancer] SyncClusterConnection connecting to [mydomaingoeshere.com:27019]". How can I suppress this?
[13:20:06] <Metzen> I've already tried "quiet=true", but it isn't working
[13:41:08] <grahamhar> is it cool to ask a pymongo question in here?
[13:41:19] <astro73|roam> i'm sure i have a few of those
[13:42:00] <grahamhar> trying to do a datetime query in the system.profile collection
[13:42:12] <grahamhar> want to get all slow queries in last 5 minutes
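grahamhar's query - slow operations from the last 5 minutes - boils down to a range filter on the profiler's `ts` field. A sketch of the filter, applied to sample profile-shaped documents so it runs without a server (with pymongo the same `query` dict would go to `db.system.profile.find(query)`; the `matches` helper below is a toy stand-in for the server's matching, not a driver API):

```python
from datetime import datetime, timedelta

# Sketch: select profiler entries from the last 5 minutes that took
# at least 100 ms. 'ts' and 'millis' are real system.profile fields;
# the sample documents are made up.

now = datetime(2013, 6, 15, 13, 45)
cutoff = now - timedelta(minutes=5)
query = {"ts": {"$gte": cutoff}, "millis": {"$gte": 100}}

profile_docs = [
    {"op": "query", "ts": now - timedelta(minutes=2), "millis": 250},
    {"op": "query", "ts": now - timedelta(minutes=30), "millis": 900},
]

def matches(doc, q):
    # Toy evaluator: every field must satisfy its $gte condition.
    return all(doc[f] >= cond["$gte"] for f, cond in q.items())

recent_slow = [d for d in profile_docs if matches(d, query)]
print(len(recent_slow))  # 1: only the entry from 2 minutes ago qualifies
```

In a real pymongo session you would build `cutoff` from `datetime.utcnow()`, since the profiler records `ts` in UTC.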
[13:44:15] <gimpy7489> I'm having a weird issue with PyMongo. It connects but then throws an exception which says 'ok'. What gives? http://pastie.org/private/pthermzgjyg7boh4lhnxma
[13:44:22] <astro73|roam> mongo server and client versions?
[13:44:50] <grahamhar> pymongo 2.5.2 and server 2.2.1
[13:58:28] <Derick> leifw: afaik, text search has been done in a similar way
[13:58:35] <Derick> but it's not something you can easily add to yourself
[14:29:28] <Maro> Does anyone have any experience using $max and $min with _addSpecial, when combining with a query in find()? The indexing seems to be all over the place,...I can't get any efficient queries :/
[14:39:11] <gimpy7489> Using PyMongo it throws the exception 'ok' on `pymongo.MongoClient(mongo_host)` but the mongodb logs shows it did connect. Any clue what may be wrong?
[14:41:55] <gimpy7489> system.profile doesn't show anything at all from that client....this is confusing. http://pastie.org/private/pthermzgjyg7boh4lhnxma#
[15:03:07] <jrbaldwin> can anyone help me with this issue? i don't know where to start looking for an answer... http://stackoverflow.com/questions/17111337/mongoddb-not-all-streaming-data-saving-to-collection-with-no-errors
[15:10:08] <Maro> jrbaldwin, have you looked at write concerns?
[15:10:33] <astro73|roam> also, what's the architecture of the backing mongo?
[15:11:22] <jrbaldwin> Maro i haven't looked into write concerns, haven't used those before – astro73|roam node running on same server as mongo (local machine at the moment)
[15:11:36] <jrbaldwin> will be deployed on linode later
[15:34:12] <saml> how come http interface 28017 returns text/plain, not application/json?
[16:02:08] <grahamhar> so my issue with system.profile was corruption - turned profiling off, dropped the collection, and turned it back on; all good now
[16:07:27] <jimrippon> hi guys - I have a problem: we have a sharded db across three replica sets (with a primary, secondary and arbiter in each) - one of the replica sets appears to have developed a problem whereby one of the RS members is not being shown properly in the shard config - the primary host no longer has a port number if I do db.shards.find()
[16:08:16] <jimrippon> As a result I'm getting errors from clients such as error = "can't find shard for: 192.168.100.147:27017";
[19:31:24] <petenerd> hi guys, i have a question. We have a replica set running on three servers in production. I need to move this replica set to 3 new servers. What's the best plan of attack? Replace 1 server at time or Add the 3 new servers to the replica set and then remove the 3 old servers?
[19:33:59] <petenerd> does anyone have any thoughts?
[20:56:21] <fluffypony> so two questions…firstly, what's the easiest way to grab the largest (or smallest) value for a particular field? the equivalent of "SELECT Value FROM Table ORDER BY Value DESC LIMIT 1". I'm using the aggregate framework, but my query seems…bulky
[20:58:07] <mediocretes> why not just find().sort().limit(1) ?
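mediocretes' suggestion is the usual translation of `SELECT Value FROM Table ORDER BY Value DESC LIMIT 1`: with pymongo it would look like `collection.find().sort("value", -1).limit(1)`. The same idea over plain dicts, so it runs without a server (the `rows` data is made up):

```python
# Sketch: largest and smallest value for a field, the no-aggregation way.
# With pymongo: collection.find().sort("value", -1).limit(1)

rows = [{"value": 3}, {"value": 42}, {"value": 7}]

largest = sorted(rows, key=lambda d: d["value"], reverse=True)[:1]
smallest = sorted(rows, key=lambda d: d["value"])[:1]
print(largest[0]["value"], smallest[0]["value"])  # 42 3
```

With an index on the sorted field, the server answers this by reading one entry from the end of the index, which is why it tends to beat an aggregation pipeline for this particular question.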
[21:01:51] <fluffypony> let's say I want to aggregate over a set of records based on a subquery - in SQL I'd do this: SELECT AVG(Field) FROM Table WHERE Timestamp IN (SELECT Timestamp FROM Table LIMIT 30)
[21:03:03] <mediocretes> I'm the wrong guy to ask about the agg framework :)
[21:03:37] <mediocretes> I wrote my own before the real one came out, and I'm too lazy to rewrite everything now
[21:24:37] <Richhh> pymongo 2.5.2 driver says compatible with Py 2.x where x >=4, got Py 2.7.3, installer fails, says Py 2.7 not found in registry, sup?
[21:33:50] <rhalff> Hi, one thing I don't understand, if there is a c++ lib available for mongodb, why is there also a javascript based module written for nodejs?
[21:34:10] <rhalff> Couldn't it just use the c++ libs?
[21:35:39] <rhalff> I notice there is a V8 header file in the scripting section, but to me it seems it's not complete; unfortunately I'm not that great at c++ :)
[21:55:30] <bjori> rhalff: I don't get your question... are you asking why we have a dedicated nodejs driver?
[21:56:36] <bjori> Richhh: sounds like a bug.. I'd file a bug report including the steps you took, and the output, to install the driver - and if you installed py2.7 without using a distro package, I'd mention how it was installed
[22:00:06] <rhalff> bjori, from what I understand it's pretty easy to create a c++ addon for node which just exposes the library to javascript. So the question is why not just use that instead of writing javascript doing the same thing as the c++ code.
[22:01:33] <bjori> rhalff: it's pretty easy to do that for virtually any language
[22:01:39] <rhalff> bjori, I believe this is an intention to do that: https://github.com/mongodb/mongo-perf/blob/master/mongo-cxx-driver/src/mongo/scripting/v8_db.h although I could be totally wrong
[22:01:43] <bjori> doesn't mean it's the Right Thing, or the most portable way
[22:02:06] <bjori> many things are much easier to express in javascript than to build on top of a c++ interface, which makes doing pretty javascript things complicated
[22:02:39] <bjori> we don't have any plans of replacing our nodejs driver in the near future
[22:03:06] <bjori> we do however plan on drastically improving our c++ driver in the future :)
[22:03:40] <rhalff> bjori, ok It was just a question and a good answer :-)
[22:04:45] <astropirate> how can I create an empty array with the C++ driver
[22:05:09] <astropirate> When I use mongo::BSONArray() it creates an object {} instead of an array []
[22:09:59] <bjori> astropirate: (I don't know) are you sure? are you maybe just dumping your test example standalone? (the root element must be a document, not array, in bson - so it could be change automatically if its the root element?)
[22:11:24] <astropirate> bjori, Yup, I'm sure - checked with the command-line client. And I am updating an existing document's array field
[22:12:03] <astropirate> so the document is { myField: [..........]} and i am using $set to empty that array so i have { myField: []} but instead it gives me {myField: {}}
[22:12:14] <bjori> that seems weird. I see there is also something called BSONArrayBuilder.. have you tried that?
[22:23:54] <bjori> it's really not a normal driver.. it's used by the kernel, mongos and the shell...
[22:24:14] <bjori> which makes it "interesting" :P
[22:29:24] <leifw> if you make an empty BSONArray it looks like {} because in bson, arrays are implemented as objects with numeric keys, so [a, b, c] is really {"0":a, "1":b, "2":c} underneath
[22:29:43] <leifw> an empty array and an empty object are the same bits
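leifw's encoding point can be sketched in a few lines of Python (an illustration of the key/value payload only; in the actual BSON bytes the element type tag still distinguishes array from document, but the body is the same numeric-key shape):

```python
# Sketch: BSON encodes an array like a document whose keys are
# "0", "1", "2", ... so [a, b, c] carries the same key/value payload
# as {"0": a, "1": b, "2": c}, and an empty array looks like {}.

def as_bson_like_doc(array):
    return {str(i): v for i, v in enumerate(array)}

print(as_bson_like_doc(["a", "b", "c"]))  # {'0': 'a', '1': 'b', '2': 'c'}
print(as_bson_like_doc([]) == {})         # True: empty array, empty payload
```

This is why a default-constructed `mongo::BSONArray()` can print as `{}`, and why `BSONArrayBuilder` (as leifw shows below in C++) is the reliable way to build arrays.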
[22:32:51] <leifw> but yeah the right way to make an array is { BSONArrayBuilder b; b.append(x); ...; BSONArray a = b.arr(); }
[22:36:17] <leifw> if you want the array inside an object, then instead { BSONObjBuilder b; BSONArrayBuilder ab(b.subarrayStart("arrayName")); ab.append(x); ...; a.doneFast(); BSONObj o = b.obj(); }