

#mongodb logs for Monday the 18th of August, 2014

[00:33:31] <cipher__> mongo::Date_t takes an argument in milliseconds since the epoch?
[00:45:14] <clarity_> hey, anyone awake?
[00:54:03] <joannac> yes
[00:55:14] <clarity_> hey, is mongodb development still open to the public
[00:55:16] <clarity_> ?
[00:56:23] <joannac> I'm not sure what that means
[00:56:32] <joannac> the core code is still open source
[00:56:33] <clarity_> well, what i mean is can i contribute
[00:56:52] <clarity_> i remember a few years ago, there was some sort of forum for developers
[00:56:55] <joannac> you can submit pull requests
[00:57:23] <joannac> I don't know of any forum at the moment
[00:57:29] <clarity_> okay
[00:57:35] <clarity_> i think i want to contribute
[00:57:41] <clarity_> what would be the best way?
[00:59:31] <joannac> http://www.mongodb.org/about/contributors/
[01:00:47] <clarity_> thanks
[02:25:36] <cipher__> is it unsafe to return a BSONObj by value for some reason?
[02:25:46] <cipher__> I'm seg faulting and unsure why
[03:36:31] <cipher__> is there any reason why a bsonobj, cDoc, used as cout << cDoc.obj().getStringField("X-Delay"); would seg-fault?
[03:53:59] <Pinkamena_D> how to update a document with the latest date
[04:20:30] <lateradio> if I'm searching for a user with something like User.findOne({username: "me", email: "my@email.org" etc..., how would I specify that the findOne query only has to fulfil one of those conditions? ie. only one of the two have to match, not both
[04:26:07] <joannac> $or
[04:26:39] <lateradio> thanks joannac
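A minimal shell sketch of joannac's $or suggestion, assuming a users collection behind the User model (the collection name is an assumption):

    // $or matches documents that satisfy at least one of the listed conditions
    db.users.findOne({
        $or: [
            { username: "me" },
            { email: "my@email.org" }
        ]
    })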
[04:51:33] <MongoNewb> I'm having trouble doing a query with (X OR Y) AND Z, I'm currently doing it as
[04:51:34] <MongoNewb> db.title.find({$or:[{EXPRESSION1},{EXPRESSION2}],{EXPRESSION3}})
[04:51:43] <MongoNewb> where 1 & 2 are my OR, and 3 should be the AND
[04:52:47] <joannac> okay
[04:52:51] <joannac> and that doesn't work?
[04:52:58] <MongoNewb> it doesn't seem to, no
[04:53:21] <joannac> what do you get?
[04:53:31] <MongoNewb> if I do X OR Y it shows 82 results, if I do X & Z it shows 41 results, but if I do (X OR Y) & Z it gives 0
[04:53:32] <joannac> error? unexpected results?
[04:53:56] <Boomtime> show your query
[04:54:23] <MongoNewb> or wait sorry maybe I am getting unexpected token now, ok 1 second
[04:54:35] <MongoNewb> db.game.find({$or:[{"away_team":"583ecfa8-fb46-11e1-82cb-f4ce4684ea4c"},{"home_team":"583ecfa8-fb46-11e1-82cb-f4ce4684ea4c"}],{"season_schedule__id":"5ddc217a-a958-4616-9bdf-e081022c440b"}}).count()
[04:57:43] <Boomtime> have you checked with the 2 equivalent finds?
[04:57:48] <Boomtime> db.game.find({"away_team":"583ecfa8-fb46-11e1-82cb-f4ce4684ea4c","season_schedule__id":"5ddc217a-a958-4616-9bdf-e081022c440b"}).count()
[04:57:48] <Boomtime> db.game.find({"home_team":"583ecfa8-fb46-11e1-82cb-f4ce4684ea4c","season_schedule__id":"5ddc217a-a958-4616-9bdf-e081022c440b"}).count()
[04:58:25] <MongoNewb> 41 and 41
[04:58:26] <MongoNewb> they each return
[05:01:00] <joannac> but the full query returns 0?
[05:01:41] <MongoNewb> db.game.find({$or:[{"away_team":"583ecfa8-fb46-11e1-82cb-f4ce4684ea4c"},{"home_team":"583ecfa8-fb46-11e1-82cb-f4ce4684ea4c"}],{"season_schedule__id":"5ddc217a-a958-4616-9bdf-e081022c440b"}}).count()
[05:01:43] <MongoNewb> 2014-08-17T21:58:52.774-0700 SyntaxError: Unexpected token {
[05:01:59] <Boomtime> dun dun duuuuuuun
[05:02:03] <joannac> ...that's not zero, that's a syntax error
[05:02:23] <MongoNewb> yeah, sorry, I said above that I am getting an unexpected token now
[05:02:28] <MongoNewb> sorry for confusion
[05:05:37] <MongoNewb> I can't figure out where the expression is wrong though, and all the sub-expressions seem to work fine, so I guess there is some nuance for combining them?
[05:05:39] <Boomtime> anyway, last component pair should be a member, not an object
[05:05:45] <Boomtime> db.game.find({$or:[{"away_team":"583ecfa8-fb46-11e1-82cb-f4ce4684ea4c"},{"home_team":"583ecfa8-fb46-11e1-82cb-f4ce4684ea4c"}],"season_schedule__id":"5ddc217a-a958-4616-9bdf-e081022c440b"}).count()
[05:06:11] <Boomtime> ok, if you ever need to test this, remember the expression must be a JSON object
[05:06:25] <Boomtime> you can construct it piecewise at the shell prompt
[05:08:07] <MongoNewb> ahhh I see where it was wrong, thanks very much for the help
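To spell out the fix: a query is one JSON object, so wrapping the third condition in its own braces produced an object where a field name was expected, hence the "Unexpected token {". The $or array and the plain equality test belong in the same top-level document:

    // broken:  find({$or: [...], {"season_schedule__id": ...}})  <- bare object as a member
    // fixed:   $or and the equality condition are sibling fields of one query document
    db.game.find({
        $or: [
            { "away_team": "583ecfa8-fb46-11e1-82cb-f4ce4684ea4c" },
            { "home_team": "583ecfa8-fb46-11e1-82cb-f4ce4684ea4c" }
        ],
        "season_schedule__id": "5ddc217a-a958-4616-9bdf-e081022c440b"
    }).count()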
[05:52:49] <MongoNewb> eek, now some trouble transferring the mongo query to python, seems $OR can't be used in PyMongo and no good stack overflow results come up
[06:40:43] <RaviTezu> Hi, I have a replica node in RECOVERING state from last 15 hours. Is there a way to check how long it may take to become SECONDARY?
[06:41:08] <RaviTezu> and how much data/oplog is yet to be synced/applied?
[06:41:49] <RaviTezu> Any help would be appreciated :)
[06:44:16] <joannac> RaviTezu: how big is the instance?
[06:44:29] <joannac> look in the logs of the syncing member?
[06:44:39] <Zelest> morning
[06:44:45] <RaviTezu> joannac: It has 5 dbs with around 10GB of data
[06:44:58] <joannac> um, that's not a lot
[06:45:03] <joannac> it shouldn't take 15 hours
[06:45:10] <joannac> My guess is it fell off the oplog
[06:45:41] <Zelest> when that happens, I usually kill the secondary mongod, rm -rf all the files in the datadir and start it up again..
[06:45:58] <Zelest> that starts a fresh sync from scratch which usually is a lot faster
[06:46:07] <RaviTezu> joannac: So, I should just remove the dbs and do a fresh copy and start the db?
[06:46:25] <joannac> RaviTezu: well, confirm that is actually what happened first
[06:46:49] <RaviTezu> joannac: This secondary was down for 2 weeks due to some hardware issue.
[06:46:58] <joannac> oh, then for sure
[06:47:06] <joannac> wipe it, sync from scratch
[06:47:45] <joannac> do you understand why it can't catch up?
[06:48:37] <RaviTezu> joannac: Nope. every node was given an oplog of 2GB. and I don't think it is rotated completely.
[06:50:30] <RaviTezu> joannac: Log has "[rsBackgroundSync] replSet not trying to sync from mongoshard4.com:27017, it is vetoed for 317 more seconds" messages
[06:50:40] <RaviTezu> Digging in more.
[06:51:27] <joannac> RaviTezu: and you don't go through 2gb of oplog in 2+ weeks?
[06:51:58] <RaviTezu> joannac: Nope. we aren't writing that much to the dbs.
[06:52:08] <joannac> db.printReplicationInfo() on one of the other shards
[06:52:14] <joannac> and pastebin the output?
[06:52:19] <RaviTezu> sure
[06:52:27] <Zelest> joannac, even if that was the case, won't it do an initial sync if the oplog isn't enough? :o
[06:52:38] <joannac> Zelest: um, no?
[06:53:15] <Zelest> oh, I thought it did.. like, if the oplog was too small, it would copy the whole database rather than "syncing up" ..
[06:54:21] <joannac> no, it requires intervention
[06:55:02] <joannac> initial syncs put load on the source; it would be pretty unfriendly if it happened outside of your control
[06:55:16] <Zelest> ah, yeah, makes sense..
[06:58:16] <RaviTezu> joannac: http://bpaste.net/show/DL7aJdHCF2GQvNtTwiE3/
[06:58:49] <joannac> RaviTezu: yeah, 97.1 hrs is < 4 days
[06:58:52] <joannac> not 2 weeks
[06:58:52] <RaviTezu> So, I think it is rotated :(
[06:58:55] <RaviTezu> Yep
[06:59:46] <RaviTezu> joannac: Do you think the node has fallen off the oplog?
[06:59:55] <joannac> for sure
[07:00:02] <joannac> grep in the logs for RS102
[07:00:56] <RaviTezu> joannac: got it. It was unable to catch up :(
[07:01:31] <RaviTezu> and joannac: one more question: how can you say that from the db.printReplicationInfo()?
[07:01:41] <joannac> say what?
[07:02:00] <RaviTezu> the recovering node has fallen off the oplog?
[07:02:07] <joannac> log length start to end: 349576secs (97.1hrs)
[07:02:13] <joannac> that's how long your oplog spans
[07:02:19] <joannac> < RaviTezu> joannac: This secondary was down for 2 weeks due to some hardware issue.
[07:02:45] <joannac> if you take a node down for 2 weeks, it's well off the oplog (which is <4 days)
[07:02:47] <RaviTezu> joannac: But the recovering node has 167.37hrs of oplog?
[07:02:54] <joannac> doesn't matter
[07:03:07] <joannac> think of it this way
[07:03:23] <RaviTezu> so Can i say oplog is rotated every 4 days?
[07:03:24] <joannac> the recovering node is up to Aug 1, 2014
[07:03:41] <Zelest> ugh, switching oplog size seems a bit tricky :/
[07:03:44] <RaviTezu> joannac: so Can i say oplog is rotated every 4 days?
[07:03:46] <joannac> but the source node has operations from Aug 15 - Aug 18
[07:04:01] <joannac> it doesn't matter whether your recovering node has a 1gb or 1tb oplog
[07:04:02] <RaviTezu> joannac: Oh. got it :) Thanks for your help
[07:04:20] <joannac> what matters is that the source node's oplog is big enough for the recovering node to get all the ops in the meantime
[07:04:51] <RaviTezu> and one more question.. Do you think just cleaning the dbs and starting up the instance works? or do you prefer rsyncing the data manually and starting the db?
[07:05:16] <joannac> clean the dbpath, start the instance, let it initial sync
[07:05:21] <kali> RaviTezu: rm everything and restart works great
[07:05:33] <RaviTezu> kali: thanks.
[07:05:34] <kali> RaviTezu: file syncing is hard to do right
[07:05:37] <RaviTezu> joannac: thanks
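A sketch of the check discussed above: on the healthy sync source, db.printReplicationInfo() reports the span of its oplog, and a member that was down longer than that span cannot catch up incrementally:

    // run on the source member
    db.printReplicationInfo()
    // "log length start to end: 349576secs (97.1hrs)"  <- the replication window
    // a member offline longer than this window must be resynced: stop its
    // mongod, empty the dbpath, restart, and let it run an initial sync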
[07:06:17] <joannac> Zelest: erm, not really. it requires 1 brief downtime
[07:06:34] <Zelest> yeah, but on 3 nodes.. and a prod environment :)
[07:06:46] <joannac> do it during a maintenance window
[07:20:00] <MongoNewb> if anybody is here this should be a simple question but I'm tired and don't see how to manuever JSON I guess (my first time using mongo/JSON)
[07:20:43] <MongoNewb> a .find() returns me a cursor object, I now want to search the thing that it returned, i.e. say I search all of my games for a certain thing, and now I get the cursor back
[07:21:03] <MongoNewb> I want to find a field within those games, i.e. gametime or something of the sort
[07:21:17] <MongoNewb> how do I get this from the result of the .find?
[07:29:30] <RaviTezu> and joannac: one more question.. is there a way to check the exact db size? I think mongodb pre-allocates the files.. and so total file sizes != db size?
[07:31:55] <joannac> db.stats()
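db.stats() separates logical data size from what is allocated on disk, which addresses the preallocation concern; a sketch using fields from the MMAPv1-era output:

    var s = db.stats()
    s.dataSize     // bytes of actual data in the database
    s.storageSize  // bytes allocated for data, including free space in extents
    s.fileSize     // total size of the data files on disk, preallocation included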
[07:32:21] <joannac> MongoNewb: why don't you just project the field you want?
[07:36:40] <rspijker> MongoNewb: you can iterate over the returned documents. if you have the cursor that was returned, you can use while(cursor.hasNext()){var doc = cursor.next(); do something with doc}
[07:36:46] <rspijker> what is it you want to do exactly?
[07:39:01] <RaviTezu> joannac: Thanks again.
[07:39:17] <MongoNewb> joannac: I guess I'm not sure what project means, rspijker I will try now
[07:41:34] <ajph> hey. is it reasonable to run with a high % write lock on a primary (>130%) for periods of hours? even rate-limiting my updates i can't get this figure down - although there is no write queue. all user-facing operations are reads and are done from the secondary which maintains a very low lock %
[07:43:41] <MongoNewb> while(test.hasNext()):
[07:43:42] <MongoNewb> AttributeError: 'Cursor' object has no attribute 'hasNext'
[07:55:55] <rspijker> MongoNewb: is this through some driver?
[08:08:59] <MongoNewb> PyMongo yea
[08:09:41] <MongoNewb> another question, if I am sure a .find() returned only 1 result is there a more efficient way of accessing that than iteration (for c in results, etc.)
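In the shell, joannac's projection hint and findOne cover both questions; a sketch reusing the game collection and the gametime field mentioned above (the filter itself is illustrative):

    // the second argument projects only the fields you want back
    db.game.find({ "away_team": "583ecfa8-fb46-11e1-82cb-f4ce4684ea4c" },
                 { gametime: 1, _id: 0 })
    // findOne returns the single matching document directly, no cursor
    db.game.findOne({ "away_team": "583ecfa8-fb46-11e1-82cb-f4ce4684ea4c" })

In PyMongo the equivalents are iterating the cursor with a plain for loop and calling find_one.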
[08:30:13] <dorongutman> hey guys
[08:30:26] <dorongutman> how do you handle indexes on a soft-deletable collection?
[08:31:08] <dorongutman> since I only have the deletedAt field on documents that were “deleted”, how do I index for better performance on queries that get all the non-deleted documents?
[08:31:18] <dorongutman> it’s like a reverse sparse index
[11:30:11] <remonvv> \o
[13:52:17] <OttlikG> Hi
[13:52:27] <Zelest> heya
[13:53:22] <OttlikG> I have a little problem. How can I get a UNIX timestamp from an ISODate?
[13:53:57] <OttlikG> or a numeric value to make some operation
[13:54:45] <saml> {$date: timestamp}
[13:54:59] <saml> OttlikG, which driver are you using?
[13:55:01] <saml> in mongoshell?
[13:55:36] <OttlikG> I use the dev shell
[13:56:39] <Zelest> date = new Date(unixtime * 1000);
[13:56:43] <OttlikG> I tested my aggregation in robomongo 0.8.4
[13:57:28] <r1pp3rj4ck> Zelest, we need a stored date to be converted to timestamp in an aggregation
[13:57:30] <saml> ISODate().valueOf()
[13:58:41] <Zelest> but.. why not store it as a unixtime if that's what you plan on using?
[13:59:10] <r1pp3rj4ck> this is only one place where we use it
[13:59:27] <r1pp3rj4ck> also, logback stores it this way by default and it's fine for every other use
[14:00:48] <r1pp3rj4ck> saml, it works fine, but how can i put it in my aggregation?
[14:01:04] <saml> r1pp3rj4ck, what are you trying to do?
[14:01:45] <r1pp3rj4ck> i tried "$project": { "time": "$timestamp.valueOf()" } and it returns null as it doesn't have a var named timestamp.valueOf()
[14:02:07] <r1pp3rj4ck> saml, something like this http://stackoverflow.com/a/21329349/898545
[14:02:27] <saml> just use mapreduce?
[14:02:47] <r1pp3rj4ck> doesn't aggregation support stuff like this?
[14:02:57] <saml> it's a long question
[14:03:08] <saml> didn't comprehend. but mapreduce is easier to use
[14:03:21] <r1pp3rj4ck> sure, thanks
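One aggregation-framework alternative to map-reduce here, offered as a sketch: $subtract applied to two dates yields their difference in milliseconds, so subtracting the epoch converts a stored date into a Unix timestamp in ms (the collection and field names are assumptions based on the conversation):

    db.logs.aggregate([
        { $project: { time: { $subtract: [ "$timestamp", new Date(0) ] } } }
    ])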
[14:04:28] <Shakyj> hey, I have a structure that looks like http://pastie.org/9483089 how would I create a map of anchor_text? so I have the anchor text and the number of occurrences? Is it possible in mongodb?
[14:04:56] <saml> yes, you can do with aggregation Shakyj
[14:07:46] <Shakyj> saml: cheers, looking at the docs now
[14:09:38] <saml> db.docs.aggregate({$group:{_id:'$url', n:{$sum:1}}}) Shakyj
[14:17:55] <Shakyj> thanks saml
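The pastie isn't preserved, so the exact structure is unknown; assuming anchor_text sits inside an array of embedded link documents (the links field name is an assumption), saml's pattern adapted to it would look like:

    db.docs.aggregate([
        { $unwind: "$links" },                                         // one doc per array element
        { $group: { _id: "$links.anchor_text", count: { $sum: 1 } } }  // occurrences per anchor text
    ])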
[14:40:39] <komuta> hi
[14:41:07] <komuta> I have a question regarding user management with pymongo
[14:52:17] <komuta> bye
[16:24:03] <meadhikari> is there a web service where i can give my server details and view my collections and documents and perform operations in them?
[16:26:26] <EmmEight> I have a mongodb cluster, and I am connecting to it from another server. When running parallel connection streams on the same server (30+), the cluster will time out intermittently. Also the CPU load on the primary member is outrageously high. Any ideas? Do I just need to upgrade our primary server?
[17:42:34] <Tinuviels> Hi there
[17:43:03] <tscanausa> Hello
[17:43:17] <Tinuviels> I have a problem with query construction
[17:43:44] <Tinuviels> very similar to this one: http://stackoverflow.com/questions/7811163/in-mongodb-how-do-i-find-documents-where-array-size-is-greater-than-1/15224544#15224544
[17:44:01] <Tinuviels> but instead of an array I have a subarray.
[17:45:03] <Tinuviels> so my final query should look like this: db.accommodations.find( { $where: "this.main.name.length > 1" } );
[17:45:15] <Tinuviels> name is under main
[17:45:20] <Tinuviels> but this won't work
[17:45:35] <Tinuviels> and I'm stuck right now
[17:46:35] <Tinuviels> $size can't work with $gt or $lt
[17:46:48] <Tinuviels> $where can't work with a subarray
[17:48:22] <Tinuviels> this is working: db.accommodations.find({'main.name.1': {$exists: true}}) but I need in fact all documents with less than 10 elements, so I can't use this one.
[17:49:14] <tscanausa> It might just not be possible
[17:50:04] <Tinuviels> seriously? That makes me sad...
[17:50:09] <Tinuviels> any alternative?
[17:50:22] <EmmEight> get everything you can that is close, and loop over them?
[17:50:33] <Derick> Tinuviels: so store a count in your documents
[17:50:50] <Tinuviels> Derick yep, this is the option
[17:50:56] <Tinuviels> but the DB is too big for that
[17:51:07] <Tinuviels> and what I'm doing now is a one-off.
[17:52:25] <Tinuviels> EmmEight I can throw away documents with an array bigger than 10 at the python level, I just thought that putting this into the query would be much faster
[17:53:02] <tscanausa> if it were possible, it would be, but the data model does not support it
[17:53:50] <Derick> mongodb would simply have to read all the documents too...
[17:54:00] <Derick> so it's not really that different where you filter it
[17:54:15] <Tinuviels> you've got the point
[17:54:28] <tscanausa> Derick: not entirely true, network is not free
[17:54:41] <Derick> virtually free :-)
[17:55:29] <Tinuviels> hmm, how about this: db.accommodations.find({'main.name.11': {$exists: false}})
[17:55:37] <Derick> you could do it with map reduce
[17:55:43] <Derick> and that might actually also work Tinuviels
[17:55:52] <Tinuviels> isn't this == to 1-10?
[17:55:57] <Derick> but: $exists: false can't use an index either
[17:56:23] <Tinuviels> ha, so no perfect solution
[17:56:27] <Tinuviels> Thanks guys
[17:56:34] <Tinuviels> I will do a small benchmark
[17:56:48] <tscanausa> Map reduce would definitely allow it
[17:56:59] <Tinuviels> one with db.accommodations.find({'main.name.11': {$exists: false}}) and one without this condition in the query but with a python "filter"
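A sketch of the combined positional-$exists form implied above, assuming the goal is arrays with more than 1 and fewer than 10 elements (array indexes are zero-based, so shift them for different bounds; as Derick noted, neither condition can use an index):

    db.accommodations.find({
        "main.name.1": { $exists: true },   // at least 2 elements
        "main.name.9": { $exists: false }   // fewer than 10 elements
    })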
[18:16:48] <saml> https://gist.github.com/saml/f82a7430e3004c3fba36 how can I check distinct value of array.prop ?
[18:17:03] <saml> only way is mapreduce?
[18:17:39] <kali> saml: aggregation framework should work, and way faster
[18:18:08] <saml> hrm do you have an example of how to do this?
[18:18:14] <saml> problem is _children is an array
[18:18:37] <kali> you'll need a $unwind, then a $match and then a $group
[18:19:03] <saml> db.docs.distinct('_children.type', {'_children._name': 'image'}) I want this.. but _children.type should be of that sub document in an array. not of entire array
[18:19:09] <saml> ah let me try thanks
[18:23:02] <wayne> any idea why mongodb takes more than 30 GB to compile?
[18:23:10] <wayne> (disk)
[18:23:54] <saml> db.docs.aggregate({$unwind:'$_children'}, {$match: {'_children._name': 'image'}}, {$group: {_id: '$_children.type'}}) this worked
[18:34:35] <saml> so each doc is a tree: https://gist.github.com/saml/bbc5e39e28e769f9cb43 depth is arbitrary. how can I $unwind all children?
[18:36:17] <kali> saml: you'll need map reduce there. arbitrary depth is generally a bad idea
[18:36:36] <saml> yah i just dumped a hierarchical database (JCR) into mongo for analysis
[18:37:10] <kali> not sure mongodb is the right tool for that :)
[18:37:27] <saml> would you use SQL?
[18:37:34] <kali> ho god, no
[18:37:50] <saml> what's a good tool to analyze trees?
[18:37:55] <kali> code :)
[18:38:07] <saml> https://gist.github.com/saml/a9a5391bf926c8afe19c i tried to dump to SQL but gave up on schema
[18:38:35] <kali> how big is it ?
[18:38:52] <saml> about 500k mongodb docs
[18:39:11] <kali> and in bytes ?
[18:39:21] <saml> 2.5GB in mongodump json
[18:40:00] <saml> thing is this JCR database is really slow to access.. took over two hours to dump
[18:40:26] <saml> then running mongodb queries on the dump was fast enough
[18:40:34] <saml> eventually i need to migrate JCR to mongodb
[18:40:52] <saml> just wanted to set "schema" in the mongodb application
[18:41:05] <mango_> hello
[18:41:05] <saml> or wanted to make sense out of tree structured documents
[18:41:33] <mango_> I have a mongo server with data I want to turn into an arbiter
[18:41:43] <mango_> what is the best way to delete the data?
[18:41:57] <mango_> the server is already part of a replica set.
[18:42:14] <saml> db.dropDatabase() ? or db.collectionName.remove({}) ?
[18:42:17] <kali> saml: i'm not sure it is a good idea. mongodb will not be able to query efficiently deeply nested structure. you would have to flatten the structure entirely, making other queries awkward
[18:42:24] <Derick> you need to remove it from the replicaset first
[18:43:07] <mango_> ok, so I'll remove from replica set, and then drop the databases
[18:43:09] <mango_> ok.
[18:43:49] <saml> so this is a CMS. there are components like image, video... etc
[18:44:08] <saml> for each document (web page) there can be one or more areas where editors are free to drag-drop any available component into
[18:44:11] <stefandxm> i am not sure if you can do something with the aggregation framework
[18:45:00] <saml> so, I was thinking each mongodb doc will be like {title: "foobar", body: [{ ... }, {component 2 }, ...] }
[18:45:11] <saml> where body is an array of drag-dropped components
[18:45:22] <saml> so it'll be single level
[18:46:15] <saml> wait i should actually get the maximum depth of the trees
[18:51:18] <remembertoforget> hello there, had a question about adding a new collection to an existing mongo cluster.
[18:51:45] <remembertoforget> I am seeing that the new collection, even though it says "sharded=true", is on only one of the shards.
[18:52:45] <remembertoforget> is this as expected? or should i be seeing other shards as well - when I do the db.[collectionname].stats() command
[18:53:21] <kali> remembertoforget: how big is your collection ?
[18:53:44] <remembertoforget> Its a new collection and there are no records in that yet
[18:54:11] <saml> db.pages.find({'_children._children._children._children._children._children._children._children._children._children':{$exists:1}}).count()
[18:54:25] <kali> remembertoforget: stuff will happen when the collection is big enough (64MB iirc)
[18:54:32] <saml> hahah so I get 4 docs with level-10 depth
[18:55:20] <remembertoforget> kali: alright - thanks. The reason I ask is - i remember before when i just had 2 shards in the cluster and I created a new collection, I used to see both the shards in the stats
[18:55:45] <kali> saml: you should seriously question mongodb's ability to help you with the problem before committing to a database replacement :)
[18:55:47] <remembertoforget> now, I added another shard to the cluster. I dropped the old collection and recreated it.
[18:56:03] <saml> they already started to rewrite the CMS on top of mongodb
[18:56:09] <saml> i was given a task to migrate data
[18:56:56] <kali> ho man
[18:57:08] <kali> we'll get one more of those "mongodb sucks" blog posts
[18:57:12] <mango_> once you've removed a node, how do you remove the replset config from the node? is there a way without restarting?
[18:57:19] <saml> hahah
[18:59:35] <mango_> guess not, restart did the trick, now setting up the arbiter to connect to remaining 2 nodes.
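A sketch of the full sequence mango_ went through, with a placeholder hostname; rs.remove and rs.addArb are run on the primary:

    rs.remove("node3.example.com:27017")   // drop the member from the replica set
    // stop mongod on that host, empty its dbpath, start it again, then:
    rs.addArb("node3.example.com:27017")   // re-add the host as an arbiter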
[19:03:18] <melvinram> got a complex query I'm trying to build
[19:04:07] <melvinram> https://gist.github.com/melvinram/8899426c75e2be3b4c04
[19:04:15] <melvinram> It's giving me the following error:
[19:04:41] <melvinram> exception: A pipeline stage specification object must contain exactly one field.
[19:05:15] <melvinram> I've looked around on stackoverflow and google and haven't come across something relevant yet.
[19:06:00] <kali> melvinram: that's mongoose, right ?
[19:06:08] <melvinram> mongoid
[19:06:18] <kali> melvinram: ho ! you have one more {}
[19:06:25] <kali> i mean one too many
[19:06:44] <kali> the one immediately inside the []
[19:07:13] <kali> (ho yeah, ruby =>, definitely not mongoose :) )
[19:08:23] <mango_> how do you turn off journal on the command line?
[19:08:39] <mango_> e.g. --journal=false?
[19:08:56] <melvinram> kali, I'm not seeing it.
[19:09:37] <kali> melvinram: see my comment
[19:15:54] <melvinram> kali, got it
[19:15:55] <melvinram> thanks
[19:20:48] <kali> aw, i gave him a half fix
[19:20:53] <kali> anyway, he's gone
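For anyone hitting the same error: every pipeline stage must be an object with exactly one top-level operator, so an empty or extra {} inside the stage array (kali's diagnosis) triggers it. A sketch of the valid shape, with illustrative field names:

    db.coll.aggregate([
        { $match: { state: "active" } },                   // exactly one field per stage
        { $group: { _id: "$user_id", n: { $sum: 1 } } }
    ])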
[20:54:27] <melvinrram> https://gist.github.com/melvinram/899cbd04af52ced4c9e4 I have an aggregate query that runs on 100k records. It runs fast. If I add a sort into the mix, it becomes painfully slow and doesn't return anything.
[20:55:07] <melvinrram> There is an index on created_at
[20:55:45] <melvinrram> Any ideas on what I might be doing wonky?
[20:55:56] <r1pp3rj4ck> not an expert or anything, but why is this an aggregate?
[20:56:44] <melvinrram> The next step for me is to group them
[20:56:53] <r1pp3rj4ck> oh all right
[20:57:10] <r1pp3rj4ck> in this case, as i said, i'm not an expert and really have no idea :/
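One plausible cause, offered as a sketch rather than a diagnosis: in the 2.6-era pipeline a $sort can only use an index when it comes at the start of the pipeline, optionally preceded by $match; placed after $project or $group it becomes an in-memory sort with a 100MB cap. Moving $match and $sort to the front lets the created_at index do the work (field names here are illustrative):

    db.records.aggregate([
        { $match: { created_at: { $gte: ISODate("2014-01-01") } } },
        { $sort: { created_at: -1 } },   // index-backed when at the front
        { $group: { _id: "$status", n: { $sum: 1 } } }
    ])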
[21:07:13] <prologic> Hi. Anyone shed any light on my problem described here: https://gist.github.com/therealprologic/c85697889f0f834180ce ?
[21:10:15] <melvinrram> prologic - took a look but I don't have any answer for you
[21:19:42] <prologic> nps :) I hope someone does/will
[21:19:49] <prologic> otherwise I'll have to post it on stackoverflow :)
[21:42:49] <ngoyal> anybody ever store more than 1 trillion documents in a collection?
[23:36:42] <ZogDog> Looking for assistance with schema instance methods using Mongoose. This (http://jsfiddle.net/7g1sLehr/) isn't producing the onhand value I'm looking for. Any pointers?