[00:42:57] <joannac> jopenhagen: you should at least say which part you're stuck on
[00:47:14] <jopenhagen> when defining an EmbeddedDocument I kept getting an error that it wasn't registered. I just moved its definition above the Document it was being embedded into and the problem was solved.
[01:39:50] <MacWinner> when using an ODM's populate mechanism, like mongoose's, do they avoid a 2nd roundtrip to the database somehow to populate a subdocument? or are they just abstracting the 2 separate queries?
[01:41:43] <cheeser> there is no real concept of a "subdocument" to the server. one document is fetched; it just happens to contain this "subdocument"
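For context: an embedded subdocument arrives in the same fetch as its parent, whereas mongoose's populate() issues a second query for referenced documents. A minimal sketch of the referenced case (model names and fields are hypothetical):

    // mongoose populate: two queries under the hood, over one connection
    var mongoose = require('mongoose');

    var authorSchema = new mongoose.Schema({ name: String });
    var bookSchema = new mongoose.Schema({
      title: String,
      // a reference, not an embedded subdocument
      author: { type: mongoose.Schema.Types.ObjectId, ref: 'Author' }
    });

    var Author = mongoose.model('Author', authorSchema);
    var Book = mongoose.model('Book', bookSchema);

    // populate() first fetches the books, then runs a second find()
    // against Author to fill in the referenced documents
    Book.find().populate('author').exec(function (err, books) {
      if (err) throw err;
      console.log(books);
    });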
[02:30:51] <pyios> how do I search for the document inserted latest?
[02:34:35] <cheeser> do you use ObjectId for your _id?
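For reference, a default ObjectId embeds its creation timestamp in its leading bytes, so sorting on _id descending approximates insertion order. A minimal shell sketch, assuming auto-generated ObjectId _ids (collection name is hypothetical):

    // latest-inserted document, by ObjectId timestamp order
    db.mycoll.find().sort({ _id: -1 }).limit(1)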
[02:40:36] <tejasmanohar> hey, how do I mongodump a remote server?
[02:40:50] <tejasmanohar> passing the username, password, host, etc. for it
[02:41:20] <tejasmanohar> mongodump --host ... --port ... --username ... --password ... doesn't seem to work
[02:42:19] <joannac> tejasmanohar: what's the output?
[02:42:42] <tejasmanohar> 2015-06-08T22:40:47.866-0400 Failed: error connecting to db server: auth failed. but these are the same credentials I used to connect in my code + GUI
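A common cause of "auth failed" with mongodump is that the user is defined in a different database (often admin) than the one being dumped, in which case mongodump needs --authenticationDatabase. A sketch, with host and credentials as placeholders:

    mongodump --host example.com --port 27017 \
        --username myUser --password myPass \
        --authenticationDatabase admin --db mydb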
[05:04:06] <Boomtime> the two subdocuments i just quoted are not equal
[05:04:20] <Boomtime> {a:1,b:1} is NOT equal to {b:1,a:1}
[05:04:32] <pyios> do you mean two documents can exist with the same value?
[05:04:37] <Boomtime> your _id is sensitive to the construction order of the sub-document
[05:05:13] <Boomtime> no, the values are different; if you think that {a:1,b:1} is the "same" as {b:1,a:1} then you have a problem
[05:05:53] <Boomtime> these are not the same; the uniqueness of a field applies at the level it is described at - ANY change below that point is a difference
[05:05:56] <pyios> Boomtime: in my case, {a:1,b:1} is equal to {b:1,a:1}
[05:06:08] <Boomtime> then you cannot use that as a _id
[05:06:33] <Boomtime> because the uniqueness constraint is applied at the _id level, not the field level inside the subdocument
[05:08:31] <pyios> but every time I insert a new document, I specify the subdocument
[05:09:06] <pyios> I specify it as {a:1, b:1}; a is always first
[05:09:24] <Boomtime> and you always use the same driver?
[05:09:37] <Boomtime> because you are also at the mercy of whatever constructs that subdocument
[05:10:22] <Boomtime> like i said, it's risky because you're not in perfect control of the situation - you are depending on behavior that is not guaranteed
[05:11:12] <pyios> do you mean that if I switch to another driver it will be different, even though I specified the subdocument?
[05:12:34] <Boomtime> i don't know, JSON does not dictate preservation of order
[05:12:54] <Boomtime> this is what i mean, you are depending on behavior that is not guaranteed
[05:13:06] <Boomtime> will it be different? who knows?
[05:13:58] <pyios> Boomtime: if I use {a:1,b:1}, you mean it could be inserted as {b:1,a:1}?
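A quick shell session makes Boomtime's point concrete: subdocument comparison is order-sensitive, so a unique _id treats reordered keys as distinct values (collection name hypothetical):

    db.things.insert({ _id: { a: 1, b: 1 } })   // ok
    db.things.insert({ _id: { b: 1, a: 1 } })   // also ok: a *different* _id
    db.things.insert({ _id: { a: 1, b: 1 } })   // fails: duplicate key error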
[08:56:57] <Garito> I'm using mongoengine with flask, but when I try to delete a document, pymongo's collection delete returns None instead of a dict
[08:57:22] <Garito> I asked the mongoengine people and they think it's perhaps a matter of versions or something
[08:57:41] <Garito> do you know if pymongo 2.8 has issues with this?
[10:22:34] <rasputnik> Siamaster: it'd be a 1-node replica set. still valid (just not very useful). but you can only shard across replica sets, so sometimes they can be useful.
[10:23:27] <Siamaster> I think I'm gonna go with 3
[10:23:44] <Siamaster> but the arbiter server could be a really bad computer right?
[10:23:58] <Siamaster> i mean the cheapest I can get on aws
[10:25:59] <rasputnik> Siamaster: yeah that's how they sell it. it needs to be up if you want to survive a 'real' server failure though.
[11:17:46] <joannac> rasputnik: you can shard with standalones
[11:18:02] <joannac> (not that I would recommend it)
[11:18:17] <bogn> Hi all, can anybody confirm that it is really not possible to let MMS provision AWS machines with PIOPS? It seems they only provision general purpose volumes for you.
[11:49:15] <bogn> regarding my question on root volume size, here's the info from the MMS docs:
[11:49:15] <bogn> Select a Root Volume Size (GiB) large enough for the deployment’s needs. We recommend a root volume of at least 25 GB. The root volume stores the operating system, the downloaded MongoDB versions, and the Automation Agent log files.
[11:51:55] <Siamaster> does that mean that MMS will use only one EC2 instance with several EBS volumes?
[12:07:23] <Siamaster> If I let MMS provision my EC2 instances
[12:07:44] <Siamaster> How do I know it won't shoot a duck with a bazooka?
[12:08:05] <joannac> if that's a metaphor, i don't get it
[12:09:11] <Siamaster> how do I know it won't provision too expensive instances for what I actually need?
[12:09:30] <joannac> because you have to provision things yourself
[12:09:51] <joannac> i.e. "MMS, please provision a m3.large"
[12:10:26] <Siamaster> Then why this question: "Do you want MMS to provision EC2 instances to host your MongoDB deployment or would you like to provision the EC2 instances yourself?"
[12:11:42] <joannac> Siamaster: that's the difference between giving MMS your AWS keys, or provisioning instances yourself and telling MMS about them
[12:31:06] <bogn> how do I specify the logpath in MMS? This setting is not available in the dropdown of the cluster. Should I really do that per cluster? For the data path there is the prefix; is there no such thing for the log?
[14:47:44] <makinen> now if I want to set up a replica set and synchronize the data from the old standalone server to the new secondary node, how should I set up the set?
[14:48:09] <makinen> should I give a higher priority to the old standalone so it would be elected as a primary node?
[14:56:37] <makinen> what is going to happen if the empty node has been elected as primary?
[15:01:32] <deathanchor> did you rs.initiate on the real primary?
[15:02:42] <deathanchor> if yes, any new member you add with rs.add() would need to catch up to the primary's optime before becoming eligible to be elected.
[15:03:42] <deathanchor> you can basically rs.add() and then rs.reconfig() one to have a higher priority, or just set your primary to have a higher priority now, before you add
[15:05:50] <makinen> so if I call rs.initiate() on the standalone and add the secondary with rs.add(), the secondary will synchronize with the old standalone?
[15:07:05] <makinen> and after synchronizing, elections might occur, but there's no risk of losing data?
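A sketch of the sequence deathanchor describes, run in the mongo shell on the old standalone after restarting it with --replSet (hostnames and priority values are illustrative):

    // on the old standalone, now started with --replSet rs0
    rs.initiate()                   // this node becomes primary
    rs.add("newhost:27017")         // the empty node performs an initial sync

    // optionally pin the old node as the preferred primary
    var cfg = rs.conf()
    cfg.members[0].priority = 2
    rs.reconfig(cfg)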
[15:10:25] <Doyle> Hey. Can you use multiple drivers with a single mongodb setup? Is the driver just the connection interpreter? I know there's a separate component that does discovery now...
[16:45:58] <shlant> morning all. Question about ec2 drive options. If I have a read-heavy app with a small (~1GB) dataset, is it even worth it to go with PIOPS? or would SSD be fine? or does magnetic have better IOPS?
[16:46:23] <deathanchor> SSD and Magnetic have IOPS based on the size you pick
[16:46:59] <deathanchor> so if you want lots of IOPS, just get a bigger drive
[16:47:17] <deathanchor> but it's still a shared drive
[16:47:49] <shlant> deathanchor: so magnetic is also IOPS per GB?
[16:48:04] <deathanchor> so someone else could eat up some of your IOPS; hence with provisioned IOPS drives you get those IOPS no matter what
[16:48:24] <deathanchor> I believe so, just look at the docs on the aws ec2 info
[19:55:32] <Doyle> Does this look about right for a geo distributed replication-set with sharding? https://drive.google.com/file/d/0B5g2nsz5NekdSnB1RVoyeC1FM1k/view?usp=sharing
[20:58:25] <tubbo> hey guys, i've read the docs for mongodb aggregations (http://docs.mongodb.org/manual/core/aggregation-introduction/) but i still don't understand exactly why it behaves so differently from queries
[20:58:53] <tubbo> i'm using mongoid and i'm fairly new to mongo, so bear with me... but we're having problems with an aggregation that takes over 5 sec to respond...
[20:59:10] <tubbo> i want to either get rid of the timeout or optimize the aggregation so that it doesn't time out
[20:59:17] <tubbo> but i can't figure out a great way to do either one of these things...
[21:16:56] <cyrus_mc> Getting "FileAllocator: posix_fallocate failed: errno:28 No space left on device, falling back". Yet the disk mongodb is on has 1.5 GB free
[21:17:16] <cyrus_mc> plenty of inodes free as well
[21:19:21] <GothAlice> cyrus_mc: MongoDB allocates on-disk stripes in power of two sizes. I.e. the first allocation might be 256MB, but the next will be 512MB. Then a GB. (Not exact values, but you get the idea.) Thus: 1.5GB is too small to allocate a new full-size stripe, and it's complaining.
[21:20:30] <cyrus_mc> GothAlice: thank you for the explanation
[21:21:00] <GothAlice> If you "ls -lh" in the MongoDB data directory, you can see the individual stripe sizes. The allocation sizes typically only go up, stripe-after-stripe.
[21:21:23] <cyrus_mc> GothAlice: yep. See that. Allocating 2G files
[21:21:39] <GothAlice> If that's too large, you may be able to switch to using --smallFiles, though I'm not sure what impact that would have on the current stripes.
[21:21:51] <GothAlice> (That option steps it back a few notches on the power of two starting point.)
[21:49:39] <brotatochip> hey guys, so I've created a replica set and I'm in the process of importing a dump from production of ~9.2gb dumped (~34gb live), and it is taking a really, really long time building an index (about 2 hours), and IOPS is maxed on the volume (900) with 100% utilization
[21:49:43] <brotatochip> any ideas why this is taking so long?
[21:50:27] <brotatochip> Does mongodb indexing require a shit ton of IOPS?
[23:20:05] <brotatochip> it's also using 30gb of ram
[23:20:33] <brotatochip> so no one has any idea as to the IO demands of MongoDB when indexing?
[23:22:18] <brotatochip> It's been running for 3.5 hours now, on JUST that indexing operation
[23:25:54] <brotatochip> Even at 4000 IOPS this would have taken 47 minutes
[23:26:06] <brotatochip> How the hell does anyone even use MongoDB in production with numbers like this
[23:32:54] <joannac> brotatochip: is it progressing?
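A general mitigation, hedged since brotatochip's exact restore command isn't shown: skip index builds during the restore and create them afterwards in the background, so the build doesn't block the database while it grinds through the I/O:

    mongorestore --noIndexRestore /path/to/dump

    // then, in the mongo shell (collection and field are hypothetical):
    db.mycoll.createIndex({ field: 1 }, { background: true })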
[23:47:20] <jecran> hi guys. using node, I am just trying to do a simple query using an $or... can someone please tell me what I am doing wrong? https://gist.github.com/anonymous/150e1262ee753b7e6057 I can read the db fine without the query
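The gist is anonymous and its contents aren't shown, so this is only a general sketch of a working $or with the Node.js driver of that era; $or takes an array of complete filter clauses (connection string, collection, and fields are hypothetical):

    var MongoClient = require('mongodb').MongoClient;

    MongoClient.connect('mongodb://localhost:27017/test', function (err, db) {
      if (err) throw err;
      db.collection('items').find({
        $or: [ { status: 'A' }, { qty: { $lt: 30 } } ]   // an array of clauses
      }).toArray(function (err, docs) {
        if (err) throw err;
        console.log(docs);
        db.close();
      });
    });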