PMXBOT Log file Viewer


#mongodb logs for Monday the 31st of March, 2014

[04:28:34] <exwx11_ax> Hello mongodb gurus, I'm seeing high I/O on servers running mongo configsvr version 2.4.9. Want to see if you guys have any thoughts?
[07:08:12] <Soothsayer> I want to keep track of a customer's session on my mobile commerce site. Within my customer session document, I am planning to create a field called 'loc' which stores an array of sub-documents of the structure - { $lng, $lat, $time }
[07:08:50] <Soothsayer> And every 3 hours, I'll update his location by adding to the 'loc' array using $push.
[08:23:42] <bbooth> Hi! Q: What is the best way to deploy a 2 zone (geographically separate) redundant and sharded GridFS cluster?
[08:39:43] <Nodex> holy cow http://objectrocket.com/pricing_lon - that's expensive
[08:40:41] <Soothsayer> I want to keep track of a customer's session on my mobile commerce site. Within my customer session document, I am planning to create a field called 'loc' which stores an array of sub-documents of the structure - { $lng, $lat, $time } .. And every 3 hours, I'll update his location by adding to the 'loc' array using $push.
[08:40:46] <Soothsayer> Does this seem a valid strategy?
[08:41:10] <Soothsayer> Nodex: almost all cloud based mongo hosting is expensive
[08:46:27] <Nodex> your "loc" object will grow a lot, you have a 16MB limit on your document
[08:47:06] <Soothsayer> Nodex: yes, but that's a lot of locations
[08:47:35] <Soothsayer> Nodex: what would you suggest instead?
[08:48:08] <Soothsayer> Nodex: a separate collection?
[08:49:30] <Soothsayer> Nodex: I can keep a limit of 1 ping per hour or something
[08:49:33] <Soothsayer> should solve my problem
[08:49:43] <Nodex> :)
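Soothsayer's plan — a 'loc' array of { lng, lat, time } subdocuments, capped at one ping per hour to keep the array well under the 16MB document limit — can be sketched like this (the helper names are made up for illustration):

```javascript
// Sketch of a throttled location log: each session document keeps a 'loc'
// array of { lng, lat, time } subdocuments; shouldRecordPing enforces the
// "at most one ping per hour" cap Soothsayer settles on.
const HOUR_MS = 60 * 60 * 1000;

function shouldRecordPing(session, now) {
  const loc = session.loc;
  if (loc.length === 0) return true;
  return now - loc[loc.length - 1].time >= HOUR_MS;
}

function recordPing(session, lng, lat, now) {
  if (shouldRecordPing(session, now)) {
    // Against MongoDB this would be an update with { $push: { loc: {...} } }.
    session.loc.push({ lng, lat, time: now });
  }
  return session;
}
```
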
[08:56:20] <bbooth> *sigh*
[08:56:43] <Soothsayer> If I create a 2d index on a location field, and the location field has additional data (like say time, description, etc.), the index will only take the first two associative values from the field, right?
[09:02:10] <Nodex> I think it can only be lat/long data, not an external field
[10:41:08] <ilhami> Hey
[10:41:10] <ilhami> anybody here?
[10:41:31] <ilhami> how can I query values with the cursor and save them in variables? when I try to get an int it gives me a NullPointerException
[10:45:47] <Nodex> eh?
[10:52:09] <ilhami> Do I have to use DBObject?
[10:52:23] <ilhami> I want to store the cursor values in my variables?
[11:12:25] <ilhami> Hey
[11:12:30] <ilhami> how do I query an int
[11:12:31] <ilhami> ?
[11:12:36] <ilhami> I get a nullpointer all the time
[11:12:54] <ilhami> when I want to query an int from the document and store it in a var?
[11:15:43] <kali> ilhami: show us your code
[11:15:54] <kali> not here, use a pastebin
[11:16:47] <ilhami> kali stay here.
[11:18:26] <ilhami> http://pastebin.com/geqpziiV --- check it out
[11:18:33] <ilhami> the int gives a nullpointerexception
[11:18:41] <ilhami> kali
[11:19:29] <kali> ilhami: you're reading fields of "query". query is just that: the query.
[11:20:12] <kali> the result is in the cursor, you need to write something like "DBObject doc = cursor.next()" in the "while" loop
[11:20:40] <kali> ilhami: and then call all the getSomething against this doc, not the query
[11:21:30] <ilhami> ok thanks I am trying now
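kali's fix, restated: the field values live on the documents the cursor yields, not on the query object. A mocked sketch of the loop (the cursor here is an in-memory stand-in, not the driver API; in the Java driver the fix is `DBObject doc = cursor.next()` inside the while loop, then `doc.get(...)` against that doc):

```javascript
// Read each result document from the cursor and pull fields off *it*,
// not off the query object (reading fields of the query is what NPEs).
function readAges(cursor) {
  const ages = [];
  while (cursor.hasNext()) {
    const doc = cursor.next();  // the result document
    ages.push(doc.age);         // doc.get("age") in the Java driver
  }
  return ages;
}

// Minimal in-memory stand-in for a driver cursor (test scaffolding only).
function makeCursor(docs) {
  let i = 0;
  return { hasNext: () => i < docs.length, next: () => docs[i++] };
}
```
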
[11:50:08] <ilhami> hey
[11:50:23] <ilhami> my json output is giving null in all fields
[11:50:24] <ilhami> why?
[11:51:02] <Nodex> 42
[12:37:04] <MmikePoso> Hi, lads. What replset option can I use to configure different 'weights' for the replset members?
[12:41:16] <adamcom> priority: http://docs.mongodb.org/manual/reference/replica-configuration/#local.system.replset.members[n].priority
[12:41:57] <adamcom> it's all relative, priority of 1.5 beats 1 just the same as 10 beats 1
[12:42:29] <adamcom> all it does is weight them in terms of what will be chosen primary though - if you are looking to prioritise operations it doesn't do that
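adamcom's relative priorities could be set like this (a mongo shell sketch, only runnable against a live replica set; the member indexes and values are illustrative):

```javascript
// mongo shell sketch: weight members for primary election (higher wins).
cfg = rs.conf()
cfg.members[0].priority = 10   // strongly preferred primary
cfg.members[1].priority = 1.5  // beats 1, just as 10 beats 1
cfg.members[2].priority = 1
rs.reconfig(cfg)
```

As adamcom notes, this only influences which member is elected primary; it does not prioritise operations.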
[12:45:24] <grafjoo_> hi guys
[12:46:04] <grafjoo_> what's the current status of signed rpms?
[12:47:36] <kesroesweyth> I read through the O'Reilly book "MongoDB and PHP" and now I'm creating a web application and am pretty confused about how to update my documents in Mongo. I am storing an object created in Javascript on the front end using a PHP model that simply puts the entire thing into Mongo, overwriting a document if it contains the same site_id value. This is clearly wrong, but I am kind of stuck.
[12:48:12] <kesroesweyth> Basically, what is the proper way to handle updates where partial objects need to be inserted into existing documents, or is using nested objects just plain wrong in Mongo?
[12:48:46] <ehershey> grafjoo_: You can watch/vote/follow progress on it here: https://jira.mongodb.org/browse/SERVER-8770
[12:50:56] <grafjoo_> ehershey: saw that already - i'm confused because in SERVER-5455 you wrote that "Published archives are signed"
[12:51:03] <grafjoo_> :-)
[12:52:51] <grafjoo_> but i guess archives don't include packages
[12:52:53] <grafjoo_> ?
[12:53:11] <ehershey> yeah, archives are zips and tarballs
[12:53:19] <ehershey> packages are rpms and debs
[12:53:40] <ehershey> almost everything but rpms are signed!
[12:54:41] <grafjoo_> ehershey: thx for your info!
[12:55:48] <ehershey> sorry not to have more
[12:59:22] <Nodex> kesroesweyth : what are you stuck with ?
[13:00:28] <kesroesweyth> Nodex: I am stuck with coming up with a proper data structure to store in Mongo, particularly one that allows for easy updating.
[13:00:52] <kesroesweyth> Nodex: What I am storing is configuration elements for dynamic web sites, like site ID, contact info, social media URLs, etc.
[13:01:41] <Nodex> right ok but what's the problem with the structure?
[13:01:45] <kesroesweyth> Currently, when the configuration page loads, it gets the whole config document for the corresponding site_id from Mongo, the object is managed in Javascript, and the entire thing is pushed back into Mongo when the configuration is saved.
[13:01:51] <kesroesweyth> Well..
[13:02:32] <kesroesweyth> I'm managing the entire config at once like that because it contains nested objects (subdocuments) and I can't seem to find a way to update just single values within those subdocuments without overwriting the entire subdocument to contain only that new value.
[13:02:53] <kesroesweyth> Unless I use dot notation, which I then have to find a way to dynamically generate in my Mongo queries.
[13:03:02] <Nodex> so is each "site" a document with each "page" being a nested document?
[13:03:14] <kesroesweyth> Close enough, yes.
[13:03:26] <Nodex> $set will update parts of a document
[13:04:29] <kesroesweyth> It's more like { site_id: "123", name: "My Site", social_media: { facebook: { url: "http://...", enabled: true} } }
[13:04:44] <kesroesweyth> Hopefully that is clear.
[13:05:07] <Nodex> I store site information in a "settings" collection with that kind of thing
[13:05:14] <Nodex> pages have their own document in their own collection
[13:05:22] <kesroesweyth> Yeah, these are site-wide values.
[13:06:09] <kesroesweyth> But if I want to disable facebook, my query has to say {"social_media.facebook.enabled": false}
[13:06:12] <Nodex> either way you can update parts like this... db.foo.update({site_id:123},{$set:{"social_media.facebook.enabled":false}});
[13:06:25] <kesroesweyth> So now I need a query builder that can create those dynamic dot notation references.
[13:06:36] <kesroesweyth> I just assume I'm doing it way wrong.
[13:06:39] <Nodex> with php it's easy, just use an array
[13:06:47] <Nodex> or use dot notation in an array
[13:07:19] <kesroesweyth> Yeah, I can do that, but how do I make it dynamic so I don't need a separate hard-coded query for every single nested property I could possibly want to update?
[13:07:36] <kesroesweyth> I don't want to hard-code the dot notation. It should be generated dynamically based on what I pass it, right?
[13:07:44] <Nodex> I don't follow what that means
[13:07:55] <kesroesweyth> And to that end, shouldn't I not have to pass the entire object back to the query to update, but rather just the changes?
[13:08:08] <kesroesweyth> Hm. Sorry. Let me try again.
[13:08:12] <Nodex> no, you update parts of it with "$set"
[13:08:36] <kesroesweyth> Right. {$set: {"social_media.facebook.enabled": false }}
[13:08:49] <kesroesweyth> But let's look at the dot notation there.
[13:09:07] <Nodex> what about it?
[13:09:23] <Nodex> are you asking for a lazy way to create that dot notation?
[13:09:39] <kesroesweyth> That string, in quotes... I don't want to have to type that out *in the code*. Shouldn't it be built dynamically based on what I pass to the method handling the query?
[13:09:47] <kesroesweyth> Perhaps. Is that lazy? Seems smarter and more efficient to me.
[13:09:54] <kesroesweyth> Just.. too smart for me. :P
[13:09:55] <Nodex> not smarter no
[13:10:00] <kesroesweyth> Hm.
[13:10:08] <kesroesweyth> So you would have a separate query just for that?
[13:10:12] <Nodex> that assumes you -always- trust your input
[13:10:20] <kesroesweyth> True.
[13:10:27] <kesroesweyth> Hm.
[13:10:36] <Nodex> I don't just update one key at a time. If I have to then yes I write a separate query
[13:10:54] <kesroesweyth> Okay, that's important! How can I not have to?
[13:11:10] <kesroesweyth> Should I be doing it like I am? Passing around the entire config all the time and updating it wholly each time?
[13:11:11] <Nodex> not possible unless you trust your input 100% of the time
[13:11:28] <Nodex> you should do it how you want to do it, however it suits your app to do it
[13:12:07] <kesroesweyth> Okay let me explain one of my thoughts that might just be flat out wrong. Your response will help me a lot. Give me a sec to type.
[13:12:08] <Nodex> the closest I do to dynamic queries is I wrote a mapper to map $_POST names and values to database values
[13:12:32] <Nodex> which just takes a simple json object and loops it for a particular method
[13:14:14] <kesroesweyth> Let's say I have the layout I described before, but where there was facebook with its own subdocument containing "url" and "enabled", there is also flickr, twitter, googleplus, etc. beside it. If I want to only update Facebook, and pass it an entirely new subdocument, without dotted notation in the query ("social_media.facebook"), there is no way to do this that won't clobber the other social media subdocuments. Is that correct?
[13:14:20] <kesroesweyth> Hope that isn't confusing.
[13:14:34] <Nodex> yes with $set
[13:14:56] <Nodex> $set will ONLY affect what you're updating and no other keys / sub documents
[13:15:58] <Nodex> you have two options. 1. Pass it the entire document each time and update it. 2. Pass in $set with specific keys and update part of the document
[13:18:58] <kesroesweyth> Wait. $set will only affect what I'm updating *if* I tell it exactly what to update using dotted notation. Not if I pass it part of an object. For example: {$set: {"social_media.facebook": {"enabled": false}}} < This will completely remove the "url" key, yes?
[13:19:36] <Nodex> NO
[13:19:38] <Nodex> lol
[13:20:03] <Nodex> that's what "only update the keys you give it" means!
[13:20:08] <kesroesweyth> My testing suggests otherwise. Maybe I am doing it wrong.
[13:20:19] <kesroesweyth> Seems if I give it an object, it completely replaces the existing object.
[13:20:31] <Nodex> use the array in php
[13:21:13] <Nodex> $criteria = array('$set' => array('social_media' => array('facebook' => array('enabled' => false))));
[13:21:25] <kesroesweyth> I'm testing from the command line right now. {$set: {"social_media.facebook.enabled": false}} < This works fine. But.. {$set: {"social_media.facebook": {"enabled": false}}} < This clobbers the "url" key entirely.
[13:21:46] <Nodex> I don't see why you can't take the "enabled" key and add it to the first array?
[13:22:12] <kesroesweyth> Because there might be a lot of other things in the object and I don't want to specify them all independently if I don't have to.
[13:22:43] <Nodex> my suggestion is write a mapper that maps your data to keys
[13:22:56] <Nodex> that's about as dynamic as you're going to get
[13:23:23] <kesroesweyth> Okay, that is helpful. I have to step into a meeting to talk about this now lol
[13:23:28] <kesroesweyth> Thanks for your help!
[13:23:34] <Nodex> no probs
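The "mapper" Nodex suggests can be sketched as a flattener that turns a partial nested object into dot-notation paths, so $set only touches the keys that actually changed instead of clobbering siblings like "url" (the helper name is hypothetical):

```javascript
// Flatten a partial nested object into dot-notation paths for $set.
// Plain objects recurse; arrays and scalars are treated as leaf values.
function toDotNotation(obj, prefix = '', out = {}) {
  for (const [key, value] of Object.entries(obj)) {
    const path = prefix ? `${prefix}.${key}` : key;
    if (value !== null && typeof value === 'object' && !Array.isArray(value)) {
      toDotNotation(value, path, out);   // recurse into subdocuments
    } else {
      out[path] = value;                 // leaf: emit a dot path
    }
  }
  return out;
}

// { social_media: { facebook: { enabled: false } } }
// becomes { "social_media.facebook.enabled": false }, safe to pass to $set.
```

A hedged usage sketch, with the collection and field names from the conversation: `db.settings.update({site_id: "123"}, {$set: toDotNotation(changes)})` would disable facebook while leaving `social_media.facebook.url` intact. Nodex's caveat stands: only feed this trusted, whitelisted keys, since a user-controlled path could $set anything in the document.
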
[13:28:22] <ragezor> hello guys, i'm trying to implement a counter system in mongodb and i need advice about implementation
[13:28:48] <ragezor> i have a collection counters and in there a lot of documents which represent one counter each
[13:29:26] <Nodex> so far so good
[13:29:27] <ragezor> counters have field number which then i increment each time an event occurs
[13:29:42] <ragezor> my question is
[13:30:01] <ragezor> if many different processes are incrementing the counter
[13:30:39] <kali> if you're using $inc:1, it will work as you want it to work
[13:30:43] <ragezor> can it happen that counter breaks, that is some increments are not registered?
[13:31:39] <ragezor> so $inc is atomic operation?
[13:31:42] <kali> yes
[13:32:10] <ragezor> ok, cool
[13:32:43] <kali> it used to be quite obvious in the documentation, but it's no longer the case
[13:34:37] <Leech> I need to do a map reduce with results into a table that is heavily used for searches. What's the best way to do that without affecting the collection usage?
[13:34:41] <cheeser> all updates are atomic from one document to the next
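kali's answer in miniature: $inc is applied atomically on the server, so there is no read-modify-write window in which to lose increments. In the shell, ragezor's counter would be something like `db.counters.update({_id: "events"}, {$inc: {number: 1}})`. A toy model of the race that a client-side read-then-write *would* have:

```javascript
// Why the counter needs $inc: two clients that each read the value and then
// write it back can overwrite each other, losing an increment.
function lostUpdateDemo() {
  const doc = { number: 0 };
  const a = doc.number;   // client A reads 0
  const b = doc.number;   // client B reads 0 before A writes
  doc.number = a + 1;     // A writes 1
  doc.number = b + 1;     // B also writes 1 -- one increment lost
  return doc.number;      // 1, not 2
}
// $inc instead applies each increment against the current server-side value.
```
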
[14:11:21] <ernetas> Hey guys.
[14:11:40] <ernetas> What's the difference between arbiters and local.system.replset.members[n].votes?
[14:13:44] <Derick> arbiters don't contain data
[14:14:07] <ernetas> So?
[14:14:22] <Derick> well, that's a difference
[14:14:44] <ernetas> But with local.system.replset.members[n].votes you don't need additional nodes to offset the even numbers at all, do you?
[14:15:08] <Derick> true, but messing with votes is not really recommended as doing so tends to have repercussions that you didn't think of
[14:15:31] <ernetas> What kind of repercussions? (I'm curious, because I'm completely new to MongoDB)
[14:15:46] <Derick> selections and elections are complicated.
[14:15:58] <Derick> if you're new, please follow the suggestions and add an arbiter instead of changing votes
[14:16:21] <ernetas> I'll test both...
[14:16:37] <ernetas> I don't want additional 80 MB memory overhead just because of arbiters..
[14:17:14] <Derick> every node needs to run on a different machine...
[14:17:32] <ernetas> So? They do.
[14:18:19] <ernetas> Well... Okay, I'm running arbiters on another non-related MongoDB machine. But this would mean that overhead is even larger if I'd follow this rule of thumb explicitly.
[14:28:32] <adamcom> note: from 2.6 onward you will no longer be able to set votes > 1, it will be 0 or 1 only. There are also other reasons not to mess with votes, but generally, just don't do it unless you are going beyond 7 members
[14:29:57] <adamcom> if you run an arbiter, and you do it with nojournal=true, noprealloc=true, smallfiles=true, oplogsize=1 you can minimize the footprint significantly
[14:30:56] <ernetas> adamcom: thanks. What are other reasons? I was thinking of using it just for setting that backup server, which is often on high load and can be unresponsive to outside, would never participate in elections/voting (vote=0).
[14:32:03] <ernetas> Because right now I have one replica set with 4 nodes, one of which is that backup server. I would need 3 arbiters to offset the backup server and keep the significance of each server intact.
[14:32:45] <ernetas> I'll try those options, also, as, after some thinking, it seems like a good idea not to do what devs of a product don't want me to do... :D
[14:38:27] <adamcom> has to do with write concerns - particularly calculating w:majority - having a 0 vote for backups should be OK, but consider if you are using w:majority (which is the default for chunk migrations btw) with a 4 node replica set - you need 3 data bearing nodes to acknowledge the write for it to succeed, so if one of them is down and your backup node is slow or under high load, then you can have some pretty poor performance
[14:38:50] <adamcom> if you are not using sharding, not using anything higher than w:1 for a write concern, then you should be fine
[14:39:15] <adamcom> and we are working on fixing those majority calculation issues too, of course, so it will get better in future versions
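adamcom's minimal-footprint arbiter options, as a 2.4-era config-file sketch (the settings are taken from his message; note the config-file spelling is oplogSize):

```
# arbiter holds no data, so minimize its disk and memory footprint
nojournal = true
noprealloc = true
smallfiles = true
oplogSize = 1
```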
[14:59:01] <davonrails> Hi guys. I know this question is probably asked everyday, but is there a way to make queries with join ? (queries on associated table fields)
[14:59:42] <cheeser> not with mongodb. you need two queries
[14:59:59] <cheeser> some drivers/ODMs hide that for you but it's two queries.
[15:00:40] <davonrails> all right, isn't that bad for performance?
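cheeser's two-query pattern can be sketched client-side like this (the helper and field names are hypothetical; against MongoDB the second query would be something like `db.customers.find({_id: {$in: ids}})`):

```javascript
// Two queries instead of a join: fetch the orders, then fetch the referenced
// customers in one $in query, and stitch the results together in the app.
function stitch(orders, customers) {
  const byId = new Map(customers.map(c => [c._id, c]));
  return orders.map(o => ({ ...o, customer: byId.get(o.customer_id) }));
}
```

On the performance question: it is one extra round trip, and since the second lookup is on _id (which is always indexed) it is usually cheap relative to fetching the documents themselves.
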
[15:02:48] <ernetas> adamcom: but arbiter would obviously not solve this issue, right? For my sharded cluster (2 shards, 2 replica sets with 2 nodes: one backup, one production), would settings.getLastErrorDefaults = {w: "1"} work as for the balancer issue?
[15:03:28] <ekristen> morning
[15:07:36] <ekristen> anyone have experience in here on EC2 with mongodb? I’m getting io saturation near 100% and I’m using provisioned iops drives
[15:12:24] <Guest9408> hi - are there any plans for supporting Windows 2012 r2 soon?
[15:13:17] <cheeser> is that even a thing?
[15:14:13] <Guest9408> The OS ?
[15:14:16] <Guest9408> yes
[15:17:57] <aGuest> Windows 2012 r2 was released Oct of 2013
[15:27:17] <aGuest> the reason why I ask is i posted this issue on GoogleGroups: https://groups.google.com/forum/#!topic/mongodb-user/wn-LJoZb3SQ and was wondering if anyone else has some input / ideas
[16:05:36] <Nodex> I have an idea. Use a decent operating system?
[16:05:59] <ekristen> if I enable sharding that on a collection that already has 500gb of data, will mongodb shift data around to other replicasets in the shard, or will the data that already exists on the replicaset stay?
[16:06:31] <Nodex> I think it re-balances for you initially
[16:07:50] <ernetas> ekristen: if the balancer is enabled, it should rebalance.
[16:09:01] <ekristen> ernetas: ok I’ll take a look at it?
[16:09:55] <ekristen> ernetas: I’m hitting 100% disk io on my primary, I’m trying to determine if adding another replicaset shard and splitting up the data would help
[16:26:56] <Tug> Is mms backup a proprietary software ? I wonder if it is possible to set my own backup server ?
[16:28:45] <Derick> Tug: it's proprietary, but if you want it locally, it is also part of our subscription service
[16:29:29] <Tug> ok thx Derick
[16:30:14] <Tug> Is this option available from the mms console ?
[16:31:39] <Derick> Tug: Sorry, I don't know that.
[16:32:41] <Tug> ok, do you have a link about the subscription service ?
[16:33:15] <Derick> one sec
[16:33:29] <Derick> https://www.mongodb.com/products/mongodb-subscriptions
[16:33:51] <Derick> you'd need standard or enterprise
[16:33:52] <Tug> Thanks
[16:35:13] <stephenmac7> Is it possible to use a mongodb url with the java mongodb driver?
[16:38:29] <ekristen> what is the default page size for mongodb?
[17:41:07] <arunbabu> I am trying to export a mongodb to heroku and getting assertion: 18 { code: 18, ok: 0.0, errmsg: "auth fails" }. Any clues?
[17:41:18] <cheeser> probably a bad password
[17:42:10] <arunbabu> I tried printing user:pass with console.log(process.env.MONGOLAB_URI || process.env.MONGOHQ_URL); and I am using the same
[17:42:39] <arunbabu> cheeser: anything else can go wrong? I just had it setup
[17:43:01] <cheeser> no idea
[17:46:23] <synth_> is there a way to implement an auto increment value like a counter? i want to be able to assign a numerical id to a document in a collection that is easy for someone to remember versus the _id value that mongo generates
[17:46:47] <cheeser> there isn't
[17:47:16] <arunbabu> do i need to explicitly create a user for mongodb on heroku? process.env.MONGOHQ_URL has a username and password
[17:47:29] <cheeser> you could have a "sequence collection" with a number and use findAndModify() to get an atomically increasing value from it via your application
[17:47:57] <synth_> okay, i'll search around for that kind of a method
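cheeser's "sequence collection" idea, sketched. In the shell it would be something like `db.sequences.findAndModify({query: {_id: name}, update: {$inc: {seq: 1}}, new: true, upsert: true}).seq` (the collection name is made up). A toy in-memory stand-in showing the contract: each call returns the next integer for a named sequence, atomically in the real thing:

```javascript
// In-memory model of the findAndModify sequence pattern (not a driver API):
// upsert-then-$inc collapsed into one step, returning the new:true value.
const sequences = new Map();

function nextId(name) {
  const next = (sequences.get(name) || 0) + 1;
  sequences.set(name, next);
  return next;
}
```
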
[17:51:07] <aGuest> arunbabu: - using mongoexport to get it done ? if so, what is the full syntax you are using?
[17:52:18] <arunbabu> mongorestore -h oce.mongohq.com:1018 -d elix --username heroku --password lnkvccYeEYkjyjNMRex9xS4HwdBKmHPWayurvnLF_RPtTw ~/tmp/mongodump/edlix <- aGuest
[17:52:37] <arunbabu> aGuest: not mongoexport
[17:53:20] <aGuest> ahh ok
[17:53:45] <aGuest> i saw this .. <arunbabu> I am trying to export a mongodb to heroku ... must have missed the rest
[17:54:14] <aGuest> and running the restore, you get what error ?
[17:54:21] <aGuest> the same auth fails?
[17:55:10] <arunbabu> aGuest: haa.. got it. I got the db name wrong. I was using my own name whereas I should have used the one in url
[17:55:22] <stephenmac7> If I used a URI to connect to the db using the java driver, how would I go about getting the string name of the selected db (assuming only one was given)?
[17:57:50] <cheeser> ConnectionMetaData, iirc
[17:58:03] <cheeser> oh, oops. not jdbc :D (i.e., wrong channel)
[18:03:14] <aGuest> stephenmac7: when you say string name of selected db, you mean the db you are currently connected to? If so, and you say its only one, then why not add the /database name in the uri
[18:03:47] <stephenmac7> aGuest: No, I have /database in the URI, but I would like to get the name specified in later code
[18:04:02] <cheeser> you won't have a selected db. you'd have a connection to a mongo[ds] on which you'd call getDB() to get a DB reference.
[18:04:28] <cheeser> even with a database in URI, i think that's only really used for auth
[18:04:38] <cheeser> but i could be wrong...
[18:05:57] <stephenmac7> Seems I can use getDatabase, but I've looked at the driver code and determined that it discards the information when the uri is used as an argument to MongoClient
[18:06:07] <stephenmac7> So, I guess I'll have to pass around the database name
[18:06:24] <stephenmac7> It seems to just authenticate then throw it away
[18:06:32] <cheeser> yep
[18:06:54] <stephenmac7> cheeser: Thanks
[18:07:00] <cheeser> np
[18:07:04] <stephenmac7> I'll just pass around the name
[18:07:12] <stephenmac7> Only need it once anyway, because I'm using Morphia
[18:07:38] <aGuest> running just "db" in the shell gives you the current database
[18:08:02] <aGuest> not sure how that translates to the driver
[18:12:39] <stephenmac7> aGuest: I don't think the driver has a current database :|
[22:11:32] <acidjazz> hey all
[22:12:01] <acidjazz> im trying to sort by a field that only exists in some of my documents.. its a value 1-20 and i want to sort by -1, problem is all the documents w/ the field missing are coming in before ones w/ the field.. any way around this?
[22:15:49] <acidjazz> im basically looking for a "sort by if exists"
[22:16:21] <VooDooNOFX> What would you expect it to do?
[22:16:58] <VooDooNOFX> Instead of what it's currently doing?
[22:17:52] <VooDooNOFX> Because there are several ways to handle this
[22:26:00] <acidjazz> VooDooNOFX: its currently showing all documents where the field doesn't exist first
[22:26:15] <acidjazz> VooDooNOFX: i want it to sort by the docs where the field exists being 1-100, sorting by -1
[22:26:22] <VooDooNOFX> acidjazz: Yes, that's because they're None, and sorted -1, this makes them first.
[22:26:50] <VooDooNOFX> what you need is a query which limits the results to those that are set in the first place.
[22:27:06] <VooDooNOFX> i.e. Not NULL, None, nil (whatever your language calls it)
[22:31:09] <VooDooNOFX> acidjazz: Perhaps you can show me an example of your current query?
[22:32:15] <acidjazz> VooDooNOFX: exactly
[22:32:42] <acidjazz> VooDooNOFX: well i still would like the null ones in the results.. just not sorted as 0 at first
[22:32:53] <acidjazz> VooDooNOFX: its pretty simple, sorting by 'weight' => -1
[22:33:17] <acidjazz> 200 docs.. 40 of them have weight 1-40 , i want ones w/ weight sorted 1-40, then all docs w/out weight
[22:34:10] <VooDooNOFX> Then you're merging two queries into one, which can't be done. You either need to set a very high (or low) default weight for all items so they sort correctly, or need to perform 2 queries and merge the results in PHP (Guessing from your syntax)
[22:34:49] <VooDooNOFX> So, if you want your items which don't have a weight to show up last in a reverse sort, then the weight for any item should be set to a default of 0
[22:34:56] <VooDooNOFX> this will push them to the bottom of the results.
[22:36:53] <joannac> acidjazz: I can't reproduce this
[22:37:55] <joannac> acidjazz: http://pastebin.com/HR7Cx27p
[22:38:25] <acidjazz> joannac: add fields w/ no a at all
[22:38:35] <joannac> acidjazz: that's the one with b:1
[22:38:50] <acidjazz> sort by 1 not -1 maybe..
[22:38:52] <joannac> second last line
[22:38:55] <acidjazz> i want my a to go 1-10
[22:39:07] <joannac> ...okay, so you want it ascending
[22:39:12] <acidjazz> yes sorry
[22:39:13] <acidjazz> ascending
[22:39:26] <acidjazz> VooDooNOFX: i see
[22:39:54] <acidjazz> VooDooNOFX: what about the fact that i want ascending? then instead of 0 i need like 100 then eh?
[22:40:24] <joannac> yeah, give them a dummy value if you want them at the bottom
[22:40:35] <acidjazz> darn ok
[22:40:45] <acidjazz> i need to take a step back then and re-design this
[22:41:16] <VooDooNOFX> acidjazz: Yes, you need to set a default value of weight
[22:41:37] <acidjazz> the non-weighted docs will eventually be in the 100k's
[22:41:41] <acidjazz> so i think thats bad design
[22:41:51] <acidjazz> i think maybe i should reverse my weighted doc sorting
[22:42:15] <VooDooNOFX> acidjazz: that sounds reasonable then
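joannac's dummy-value advice, sketched client-side: a sentinel weight stands in for the missing field so those documents sort last in an ascending sort (the helper name is made up; in MongoDB itself the equivalent is storing a high default weight on insert, since an ascending sort puts documents without the field first):

```javascript
// Treat a missing 'weight' as a sentinel larger than any real weight,
// so weighted docs come first in ascending order and the rest trail.
const MISSING = Number.POSITIVE_INFINITY;

function sortByWeight(docs) {
  return [...docs].sort(
    (a, b) => (a.weight ?? MISSING) - (b.weight ?? MISSING)
  );
}
```
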
[22:43:24] <ismell1> hello
[22:44:54] <ismell1> I'm using the aggregation framework and I need to conditionally sum some data. I'm using {"$sum" => {"$cond" => [{"$eq" => [{"$ifNull" => ["$days_completion_delta", 'N/A']}, 'N/A']}, 0, 1]}}
[22:45:01] <ismell1> is there a better way of doing that?
[22:45:20] <ismell1> I guess its more of a conditional count
[23:03:26] <VooDooNOFX> ismell1: what exactly are you trying to do with this query>?
[23:04:42] <ismell1> I want to conditionally count some records in a $group operation
[23:04:55] <ismell1> but if the value is null i want to ignore it
[23:09:24] <VooDooNOFX> ismell1: then your query field should eliminate all fields you don't want to count. i.e. aggregate({$days_completion_delta >= 0}, etc...)
[23:10:14] <VooDooNOFX> err, aggregate($where: {$days_completion_delta >=0}, $sum....)
[23:19:10] <ekristen> I added a new index to the mongo, it finished creating the index, and then started the External Sort
[23:19:24] <ekristen> as soon as that happened, it seems that all reads have stopped working across the cluster
[23:41:21] <ismell1> VooDooNOFX: that would work if I wanted to exclude them all
[23:41:39] <ismell1> but they get used in other parts of the aggregate
[23:46:59] <ekristen> is there any way to stop an external sort?
[23:47:01] <ekristen> or pause it?
[23:55:40] <VooDooNOFX> ismell1: Unless you're willing to set a default in the group of 0 or 1, then that's the best way I can think of.
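ismell1's expression, restated: within the $group stage it counts only the documents whose days_completion_delta is present and non-null (the $ifNull/$cond pair maps null-or-missing to 0 and everything else to 1, then $sum adds them up). A plain-JS equivalent of what that stage computes, using the field name from the conversation:

```javascript
// Conditional count: documents where days_completion_delta is neither
// null nor missing contribute 1, all others contribute 0.
function countWithDelta(docs) {
  return docs.reduce(
    (n, d) => n + (d.days_completion_delta == null ? 0 : 1), 0
  );
}
```
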