[02:25:36] <cipher__> is it unsafe to return a BSONObj by value for some reason?
[02:25:46] <cipher__> I'm seg faulting and unsure why
[03:36:31] <cipher__> is there any reason why a bsonobj, cDoc, used as cout << cDoc.obj().getStringField("X-Delay"); would seg-fault?
[03:53:59] <Pinkamena_D> how to update document with the latest date
[04:20:30] <lateradio> if I'm searching for a user with something like User.findOne({username: "me", email: "my@email.org" etc..., how would I specify that the findOne query only has to fulfil one of those conditions? ie. only one of the two have to match, not both
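(The usual answer to lateradio's question is `$or`: the document matches if at least one clause matches. A minimal sketch of the filter, using the field values from the question, written as a Python dict the way PyMongo would take it — the mongo shell / Mongoose form is the same shape as a JS object:)

```python
# $or: match documents satisfying at least one of the listed clauses.
or_filter = {
    "$or": [
        {"username": "me"},
        {"email": "my@email.org"},
    ]
}
# PyMongo:  users.find_one(or_filter)
# Mongoose: User.findOne({ $or: [ {username: "me"}, {email: "my@email.org"} ] })
```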
[05:05:37] <MongoNewb> I can't figure out where the expression is wrong though, and all the sub-expressions seem to work fine, so I guess there is some nuance for combining them?
[05:05:39] <Boomtime> anyway, last component pair should be a member, not an object
[05:06:11] <Boomtime> ok, if you ever need to test this, remember the expression must be a JSON object
[05:06:25] <Boomtime> you can construct it piecewise at the shell prompt
[05:08:07] <MongoNewb> ahhh I see where it was wrong, thanks very much for the help
[05:52:49] <MongoNewb> eek, now some trouble transferring the mongo query to python, seems $OR can't be used in PyMongo and no good stack overflow results come up
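(For the record, PyMongo does support this operator — MongoDB operators are case-sensitive string keys, so `$OR` is treated as a literal field name while lowercase `$or` works. A sketch with made-up field names:)

```python
# Wrong: {"$OR": [...]} -- interpreted as a field literally named "$OR".
# Right: lowercase "$or", passed as an ordinary dict key.
query = {"$or": [{"status": "active"}, {"retries": {"$gt": 3}}]}
# docs = collection.find(query)   # `collection` being a pymongo Collection
```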
[06:40:43] <RaviTezu> Hi, I have a replica node in RECOVERING state from last 15 hours. Is there a way to check how long it may take to become SECONDARY?
[06:41:08] <RaviTezu> and how much data/oplog yet to be sync'ed/applied?
[06:41:49] <RaviTezu> Any help would be appreciated :)
[06:44:16] <joannac> RaviTezu: how big is the instance?
[06:44:29] <joannac> look in the logs of the syncing member?
[06:50:30] <RaviTezu> joannac: Log has [rsBackgroundSync] replSet not trying to sync from mongoshard4.com:27017, it is vetoed for 317 more seconds messages
[07:04:51] <RaviTezu> and one more question.. Do you think just cleaning the dbs and starting up the instance works? or do you prefer rsyncing the data manually and starting the db?
[07:05:16] <joannac> clean the dbpath, start the instance, let it initial sync
[07:05:21] <kali> RaviTezu: rm everything and restart works great
[07:06:17] <joannac> Zelest: erm, not really. it requires 1 brief downtime
[07:06:34] <Zelest> yeah, but on 3 nodes.. and a prod environment :)
[07:06:46] <joannac> do it during a maintenance window
[07:20:00] <MongoNewb> if anybody is here this should be a simple question but I'm tired and don't see how to maneuver JSON I guess (my first time using mongo/JSON)
[07:20:43] <MongoNewb> a .find() returns me a cursor object, I now want to search the thing that it returned, i.e. say I search all of my games for a certain thing, and now I get the cursor back
[07:21:03] <MongoNewb> I want to find a field within those games, i.e. gametime or something of the sort
[07:21:17] <MongoNewb> how do I get this from the result of the .find?
[07:29:30] <RaviTezu> and joannac: one more question.. is there a way to check the exact db size? I think mongodb pre-allocates the files..and so total file sizes != db size ?
[07:32:21] <joannac> MongoNewb: why don't you just project the field you want?
[07:36:40] <rspijker> MongoNewb: you can iterate over the returned documents. if you have the cursor that was returned, you can use while(cursor.hasNext()){var doc = cursor.next(); do something with doc}
[07:36:46] <rspijker> what is it you want to do exactly?
[07:39:17] <MongoNewb> joannac: I guess I'm not sure what project means, rspijker I will try now
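(Putting joannac's and rspijker's suggestions together: a projection limits which fields come back, and the cursor is then iterated. A PyMongo-flavoured sketch — collection and field names are made up, and a plain list stands in here for the documents a real find() would return:)

```python
# Projection: second argument to find(); 1 includes a field, 0 excludes it.
# Only `gametime` would come back (and _id, unless excluded as below).
projection = {"gametime": 1, "_id": 0}
# cursor = games.find({"player": "MongoNewb"}, projection)

# PyMongo cursors are plain iterables; simulated here with a list:
fake_cursor = [{"gametime": 42}, {"gametime": 17}]
gametimes = [doc["gametime"] for doc in fake_cursor]
# Shell equivalent: while (cursor.hasNext()) { var doc = cursor.next(); ... }
```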
[07:41:34] <ajph> hey. is it reasonable to run with a high % write lock on a primary (>%130) for periods of hours? even rate-limiting my updates i can't get this figure down - although there is no write queue. all user-facing operations are reads and are done from the secondary which maintains a very low lock %
[08:09:41] <MongoNewb> another question, if I am sure a .find() returned only 1 result is there a more efficient way of accessing that than iteration (for c in results, etc.)
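(When exactly one document is expected, PyMongo's `find_one` — `findOne` in the shell — skips the cursor entirely and returns the matching document, or `None` if nothing matches. A toy stand-in below mimics that behaviour for illustration; the real call would be e.g. `games.find_one({"game_id": 12345})`, with the field name hypothetical:)

```python
def find_one_like(docs, predicate):
    """Mimics find_one semantics: first match or None (illustration only)."""
    for doc in docs:
        if predicate(doc):
            return doc
    return None

docs = [{"_id": 1, "name": "a"}, {"_id": 2, "name": "b"}]
result = find_one_like(docs, lambda d: d["_id"] == 2)
missing = find_one_like(docs, lambda d: d["_id"] == 9)
```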
[08:30:26] <dorongutman> how do I handle indexes on a soft-deleteable collection?
[08:31:08] <dorongutman> since I only have the deletedAt field on documents that were “deleted”, how do I index for better performance on queries that get all the non-deleted documents ?
[08:31:18] <dorongutman> it’s like a reverse sparse index
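(One possible workaround, not from the channel: MongoDB indexes a missing field as null, so an ordinary index on `deletedAt` can serve an equality query on null, which matches exactly the documents where the field is absent — or explicitly set to null. Sketch, assuming PyMongo:)

```python
# Ordinary ascending index on deletedAt (run once):
# collection.create_index([("deletedAt", 1)])

# Missing fields are indexed as null, so this filter can use that index
# and returns the "non-deleted" documents:
not_deleted = {"deletedAt": None}
# live_docs = collection.find(not_deleted)
```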
[14:04:28] <Shakyj> hey, I have structure that looks like http://pastie.org/9483089 how would I create a map of anchor_text? so I have the anchor text and the number of occurrences? Is it possible in mongodb?
[14:04:56] <saml> yes, you can do with aggregation Shakyj
[14:07:46] <Shakyj> saml: cheers, looking at the docs now
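(The aggregation saml is pointing at is typically a `$group` keyed on the field with `$sum: 1` per document, preceded by `$unwind` if anchor_text lives inside an array — the pastie is no longer visible, so the field names below are guesses. Pipeline sketch as PyMongo would take it:)

```python
pipeline = [
    {"$unwind": "$links"},                # only needed if anchor_text sits in an array
    {"$group": {
        "_id": "$links.anchor_text",      # one bucket per distinct anchor text
        "count": {"$sum": 1},             # number of occurrences
    }},
    {"$sort": {"count": -1}},             # most frequent first
]
# results = db.pages.aggregate(pipeline)
```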
[16:24:03] <meadhikari> is there a web service where i can give my server details and view my collections and documents and perform operations in them?
[16:26:26] <EmmEight> I have a mongodb cluster, and I am connecting to it from another server. When running parallel connection streams on the same server (30+), the cluster will time out intermittently. Also the CPU load on the primary member is outrageously high. Any ideas? Do I just need to upgrade our primary server
[17:43:17] <Tinuviels> I have a problem with query construct
[17:43:44] <Tinuviels> very simillar to this one: http://stackoverflow.com/questions/7811163/in-mongodb-how-do-i-find-documents-where-array-size-is-greater-than-1/15224544#15224544
[17:44:01] <Tinuviels> but instead of array I have subarray.
[17:45:03] <Tinuviels> so my final query should look like this: db.accommodations.find( { $where: "this.main.name.length > 1" } );
[17:48:22] <Tinuviels> this is working: db.accommodations.find({'main.name.1': {$exists: true}}) but I need in fact all documents with fewer than 10 elements, so I can't use this one.
[17:49:14] <tscanausa> It might just not be possible
[17:50:04] <Tinuviels> seriously? That makes me sad...
[17:51:07] <Tinuviels> and what I'm doing now is one off.
[17:52:25] <Tinuviels> EmmEight I can throw away documents with arrays bigger than 10 at the python level, I just thought that putting this into the query would be much faster
[17:53:02] <tscanausa> if it were possible, it would be but the data model does not support it
[17:53:50] <Derick> mongodb would simply have to read all the documents too...
[17:54:00] <Derick> so it's not really that different where you filter it
[17:56:48] <tscanausa> Map reduce would definitely allow it
[17:56:59] <Tinuviels> one with db.accommodations.find({'main.name.11': {$exists: false}}) and one without this condition in the query but with python "filter"
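(For the record, the `$exists` positional trick does handle an upper bound too, since array dot-indexes are zero-based: if element index 9 is absent, the array has fewer than 10 elements. Both bounds combine in one filter — sketch, using Tinuviels' field names:)

```python
# Array dot-indexes are zero-based: "main.name.9" is the 10th element.
query = {
    "main.name.1": {"$exists": True},   # index 1 present -> length >= 2
    "main.name.9": {"$exists": False},  # index 9 absent  -> length < 10
}
# db.accommodations.find(query)
```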
[18:16:48] <saml> https://gist.github.com/saml/f82a7430e3004c3fba36 how can I check distinct value of array.prop ?
[18:17:39] <kali> saml: aggregation framework should work, and way faster
[18:18:08] <saml> hrm do you have an example of how to do this?
[18:18:14] <saml> problem is _children is an array
[18:18:37] <kali> you'll need a $unwind, then a $match and then a $group
[18:19:03] <saml> db.docs.distinct('_children.type', {'_children._name': 'image'}) I want this.. but _children.type should be of that sub document in an array. not of entire array
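(kali's $unwind/$match/$group outline, written out as a pipeline — field names taken from saml's lines, though the gist itself may differ:)

```python
pipeline = [
    {"$unwind": "$_children"},                 # one document per array element
    {"$match": {"_children._name": "image"}},  # keep only the matching elements
    {"$group": {"_id": "$_children.type"}},    # one result per distinct type
]
# distinct_types = [d["_id"] for d in db.docs.aggregate(pipeline)]
```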
[18:41:57] <mango_> the server is already part of a replica set.
[18:42:14] <saml> db.dropDatabase() ? or db.collectionName.remove({}) ?
[18:42:17] <kali> saml: i'm not sure it is a good idea. mongodb will not be able to query efficiently deeply nested structure. you would have to flatten the structure entirely, making other queries awkward
[18:42:24] <Derick> you need to remove it from the replicaset first
[18:43:07] <mango_> ok, so I'll remove from replica set, and then drop the databases
[18:54:25] <kali> remembertoforget: stuff will happen when the collection is big enough (64MB iirc)
[18:54:32] <saml> hahah so get 4 docs with level 10 depth
[18:55:20] <remembertoforget> kali: alright - thanks. The reason I ask is - i remember before when i just had 2 shards in the cluster and I created new collection, I used to see both the shards in the stats
[18:55:45] <kali> saml: you should seriously question mongodb's ability to help you in the problem before committing to a database replacement :)
[18:55:47] <remembertoforget> now, I added another shard to the cluster. I dropped the old collection and recreated it.
[18:56:03] <saml> they already started to rewrite CMS on top of mongodb
[18:56:09] <saml> i was given a task to migrate data
[20:54:27] <melvinrram> https://gist.github.com/melvinram/899cbd04af52ced4c9e4 I have an aggregate query that runs on 100k records. It runs fast. If I add a sort into the mix, it becomes painfully slow and doesn't return anything.
[20:55:07] <melvinrram> There is an index on created_at
[20:55:45] <melvinrram> Any ideas on what I might be doing wonky?
[20:55:56] <r1pp3rj4ck> not an expert or anything, but why is this an aggregate?
[20:56:44] <melvinrram> The next step for me is to group them
[20:57:10] <r1pp3rj4ck> in this case, as i said, i'm not an expert and really have no idea :/
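(A likely culprit — the gist isn't visible here, so this is a guess: an aggregation can only use the created_at index for `$sort` when the sort, optionally preceded by a `$match`, sits at the very start of the pipeline; a `$sort` placed after a `$group` or `$project` runs in memory over the whole intermediate result set. Ordering sketch with hypothetical fields:)

```python
# Index-friendly ordering: $match then $sort first, so the created_at
# index can feed the sort; grouping happens afterwards.
pipeline = [
    {"$match": {"status": "active"}},           # hypothetical filter
    {"$sort": {"created_at": -1}},              # can use the created_at index here
    {"$group": {"_id": "$user_id",
                "latest": {"$first": "$created_at"}}},
]
```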
[21:07:13] <prologic> Hi. Anyone shed any light on my problem described here: https://gist.github.com/therealprologic/c85697889f0f834180ce ?
[21:10:15] <melvinrram> prologic - took a look but I don't have any answer for you
[21:19:42] <prologic> nps :) I hope someone does/will
[21:19:49] <prologic> otherwise I'll have to post it on stackoverflow :)
[21:42:49] <ngoyal> anybody ever store more than 1 trillion documents in a collection?
[23:36:42] <ZogDog> Looking for assistance with schema instance methods using Mongoose. This (http://jsfiddle.net/7g1sLehr/) isn't producing the onhand value I'm looking for. Any pointers?