[02:51:13] <fullstack> How would someone go about using Amazon AutoScale with a replica set?
[03:37:38] <GothAlice> fullstack: Not sure about the specific technology; scaling a replica set horizontally only gives you additional read capacity for queries already being directed at secondaries. If you need to scale writes, you also need to scale in replica-set-sized multiples and configure multiple sets as shards in a sharded cluster.
[03:38:12] <fullstack> Is that what compose.io does for $$$?
[03:45:59] <GothAlice> fullstack: joannac was kind enough to remind me of this article: http://askasya.com/post/canreplicashelpscaling
[03:46:50] <GothAlice> fullstack: In general, sharding (with very well-chosen shard keys) is the way to go in terms of scaling.
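For reference, a minimal sketch of what "multiple sets in a sharded cluster" looks like from a mongos; the database, collection, and shard key names here are made up:

    sh.enableSharding("mydb")
    // pick a key that spreads writes; a monotonically increasing key
    // (like a plain ObjectId) funnels every insert to one shard
    sh.shardCollection("mydb.events", { userId: 1, created: 1 })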
[03:47:46] <GothAlice> Or: optimization. Measure things, find out what's taking up a lot of room, and refactor. http://www.devsmash.com/blog/mongodb-ad-hoc-analytics-aggregation-framework demonstrates how structuring your data can affect both query performance and on-disk size, as a dual comparison.
[03:53:46] <GothAlice> fullstack: And yes, that's what compose and other "cloud NoSQL" services do.
[05:00:45] <zhulikas> so it's easier to search without digging into the nested object structure
[05:00:52] <zhulikas> and then it matches structures by their _id field
[05:01:22] <zhulikas> okay, meaning I need to figure out a way to hook into the moment when an object in the application is passed on to the searchindex collection, and make sure my other fields for indexing are added there
[05:01:32] <zhulikas> this approach is rather brilliant
[05:16:20] <zhulikas> I'm trying to extend this application to support more search parameters, so I just need a trigger to reload the searchindex collection whenever I add a new search parameter
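A hypothetical sketch of the pattern being described, flattening the searchable fields into a side collection keyed by the source document's _id; every name here is invented:

    db.docs.find().forEach(function (doc) {
        db.searchindex.update(
            { _id: doc._id },    // match the search-index entry to the source doc by _id
            { $set: { title: doc.title, city: doc.address ? doc.address.city : null } },
            { upsert: true }     // insert the index entry if it's missing
        );
    });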
[05:38:40] <sahilsk> I've set quotaFiles to 2 and am trying to fill the database. It created one 64MB file and after a while threw an error on insert saying the quota limit was exceeded. Why? It had only created one datafile so far.
[05:40:01] <sahilsk> Second confusion: when I create a new collection under the same database, it lets me create it, and it also created a second datafile (128MB). I'm able to insert data into this new collection. Does quotaFiles have something to do with collections? One datafile per collection?
[05:44:13] <sahilsk> Now it has created a third datafile of size 256MB, even though I set quotaFiles to 2. Why on earth is mongodb not enforcing the quota here?
[05:45:25] <sahilsk> I'm using a one-liner to fill up data: var i = 0; while (i < 10000000) { db.dummyData1.insert({ a: 2, b: "sodfndufd" }); i++; }
[06:08:45] <sahilsk> Now I created another collection, and it's letting me insert there :-/
[06:13:03] <sahilsk> On the new collection it let me insert 62 records; after that, the quota exceeded error again.
[06:14:17] <sahilsk> Now setting quotaFiles to 1 and testing the same.
[06:20:03] <joannac> probably something to do with how the docs are actually stored on disk
[08:11:23] <Constg> Hello, I've upgraded MongoDB to 3.0.2 using MMS, but when I'm connecting to the shell, it still says MongoDB shell version: 2.6.9. Any idea why? Shouldn't it be 3.0.2 too?
[08:13:44] <sahilsk> Constg: make sure you're checking the version of the right thing. Mongo client or Mongo server?
[08:17:04] <Constg> well, I go on the server and type mongo (with the port). I can see I'm on the Primary, for example
[08:45:02] <sahilsk> Constg: try this as well : db.version()
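A quick way to tell the two apart: the "MongoDB shell version" banner is the version of the locally installed mongo client binary (which an MMS server upgrade likely doesn't touch), while the server reports its own version from inside the shell:

    db.version()   // the server's version, e.g. 3.0.2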
[09:54:06] <spuz> hi, I'm trying to use the mongo shell to execute a find command but I'm not having any luck
[09:56:45] <spuz> I've tried 'mongo host/database --eval 'db.Collection.find(_id:"foo")'' but it simply prints the command that I gave it rather than the output
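Two likely problems with that invocation: the query document is missing its braces, and --eval only prints the expression's value, so a cursor shows up as its string form instead of being iterated. A corrected form:

    mongo host/database --eval 'printjson(db.Collection.find({ _id: "foo" }).toArray())'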
[11:15:03] <Nomikos> does this help? I don't know what powershell is exactly, but http://superuser.com/questions/176624/linux-top-command-for-windows-powershell
[11:16:27] <den21> I'm looking for something like the InsertFlags.ContinueOnError for InsertBatch in the previous C# mongo driver. Does anyone here know?
[11:16:37] <den21> Using the C# driver, I want to insert multiple documents (a unique index is set on a key). I'm using InsertManyAsync, but when a duplicate on the index occurs, I get the error "A bulk write operation resulted in one or more errors."
[11:16:43] <den21> I can't find anything in the documentation that will just continue inserting the rest of the documents, ignoring the duplicate/error.
[11:18:31] <den21> await collection.Indexes.CreateOneAsync(Builders<BsonDocument>.IndexKeys.Ascending("u"), new CreateIndexOptions { Unique = true});
[11:26:40] <pamp> Does anyone know how many cores are used for bulk write operations?
[12:08:19] <amitprakash> Hi, is it possible to see document counts vs aging buckets using aggregations?
[12:08:36] <amitprakash> i.e. x documents < 1 day old, < 3 days old, 3+ days old
[12:28:24] <deathanchor> amitprakash: as a single aggregation command? I don't think so, but if you indexed by some time you can just do a match on the buckets of time you want.
[12:41:49] <amitprakash> deathanchor, I've indexed on a field containing an ISODate. However, I don't see how I can group it into buckets of (now-24h, now], (now-72h, now-24h], and everything older than 72h
[12:42:52] <amitprakash> deathanchor, you mean 3 separate queries for each bucket?
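For what it's worth, this does seem doable in one aggregation: compute each document's bucket with nested $cond expressions, then group on it. A sketch, with `created` as the assumed date field:

    var now = new Date();
    var day = 24 * 60 * 60 * 1000;
    db.docs.aggregate([
        { $project: { bucket: { $cond: [
            { $gte: [ "$created", new Date(now - day) ] }, "< 1 day",
            { $cond: [
                { $gte: [ "$created", new Date(now - 3 * day) ] }, "1-3 days",
                "3+ days"
            ] }
        ] } } },
        { $group: { _id: "$bucket", count: { $sum: 1 } } }
    ])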
[12:47:46] <XThief> hi, I'm new to mongodb and I'm configuring a server for production. Do I get a big benefit from using an SSD instead of a regular 7200 RPM hard drive when all my queries are really simple and indexed?
[12:57:39] <deathanchor> hey is there a char for comments in the mongo shell?
[14:24:30] <fxmulder> so I started a new replica member and it's pulling over data; when it first started it was doing about 2000 objects per minute, but looking at it a couple of days later it's doing about 100 objects per second. Any idea why it would slow down so much?
[14:24:37] <fxmulder> 100 objects per minute that is
[14:28:48] <pamp> first I query a collection, then insert all the docs into another
[14:29:57] <GothAlice> pamp: You may be dogpiling your primary. Unacknowledged write concern also means you're ignoring all possible errors that could arise.
[14:30:27] <GothAlice> pamp: By "dog pile" I'm referring to the accumulation of operations as the disk IO can't keep up with network IO.
[14:32:26] <likarish> I'm working on designing a schema and running into a problem. Here's what a document currently looks like: {_id: xxx, entries: [{_id: 1, ver: 2}, {_id: 2, ver: 10}]}
[14:33:15] <likarish> I'd like to be able to update entries so that if entries._id is not found, then it will insert an element. If the element exists, then it would set the new ver.
[14:33:54] <likarish> From what I've read, that doesn't sound possible. Any suggestions on how I could change this schema for this operation?
[14:36:12] <likarish> The way it's designed now, I'd need to be able to upsert an element in the entries array.
[14:44:03] <likarish> Think I'll end up doing something like the answer G-Wiz gives. http://stackoverflow.com/questions/8871363/upsert-array-elements-matching-criteria-in-a-mongodb-document/21347039#21347039
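The gist of that answer as a sketch: try the positional update first, and $push only when nothing matched. docId, entryId, and newVer are placeholders:

    var res = db.things.update(
        { _id: docId, "entries._id": entryId },
        { $set: { "entries.$.ver": newVer } }     // update the existing element
    );
    if (res.nMatched === 0) {
        db.things.update(
            { _id: docId, "entries._id": { $ne: entryId } },   // guards against a race
            { $push: { entries: { _id: entryId, ver: newVer } } }
        );
    }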
[14:46:12] <GothAlice> likarish: As a note, if you do that, you lose any effective ability to use indexes on that nested data.
[14:46:24] <GothAlice> That can be a bit of a downer.
[14:47:15] <pamp> GothAlice, yes, I think my problem is disk IO
[14:47:50] <likarish> Still in the design phase, so are there any alternatives I could look at?
[14:47:51] <GothAlice> pamp: Using a higher write concern (i.e. confirmed-journal) will make your insertions operate more lock-step with disk IO.
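In shell terms, a journaled write concern looks like this; each insert then waits for the journal commit, keeping the client in step with what the disk can sustain:

    db.target.insert(
        { a: 2, b: "payload" },
        { writeConcern: { w: 1, j: true } }    // acknowledge only after the journal commit
    )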
[15:08:33] <juliofreitas> Hi everyone! I saw the "Time Series Data" webinar at MongoDB Webinars, but I don't know how to create the collection with times, how to index by time, or how to update the field once I have the value. Could anyone send me a tutorial?
[15:08:33] <juliofreitas> I have a file that is updated every 15 minutes. I'll parse it and update my collection. The schema is: 2015-04-15T00:15:00Z (time), city, state...
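A minimal sketch of the preallocated-document pattern those webinars usually show: one document per location per day, with one slot per 15-minute interval. Collection and field names here are invented:

    db.readings.createIndex({ city: 1, state: 1, day: 1 })
    db.readings.update(
        { city: "Recife", state: "PE", day: ISODate("2015-04-15T00:00:00Z") },
        { $set: { "slots.1": 42 } },    // slot 1 covers 00:15, value 42
        { upsert: true }                // create the day document on first touch
    )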
[16:14:58] <deathanchor> is there a mongo command for checking if a db exists without creating it?
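One way to check, since merely referencing a database in the shell doesn't create it (a db only comes into existence on first write):

    // true if the database already exists on the server
    db.getMongo().getDBNames().indexOf("mydb") !== -1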
[16:18:51] <fxmulder> found the problem, elasticsearch was using a large amount of memory and causing things to swap on one of the machines
[16:20:09] <fxmulder> back to around 2000 objects per second
[16:27:14] <derbie> Hey guys! A little mongo / mongoose issue: considering there's a collection cars and another one logs, I have an array of unique carIDs, and I need to query the logs collection to get the latest entry for each carID
[16:27:25] <derbie> What's a better way than to loop through each ID and query the DB for each car?
[16:32:56] <pamp> Is it possible to separate indexes and DBs onto different drives?
[16:36:27] <derbie> Considering an array of UserIDs and a collection `logs` with structure (_id, userid, date, aValue): what's the most efficient way to query the database for the most recent (date) entry for each user in the array?
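A single aggregation can do this: filter to the ids, sort newest-first, and take the first document per user. A sketch, where `userIds` is the assumed array variable; an index on { userid: 1, date: -1 } keeps the first two stages cheap:

    db.logs.aggregate([
        { $match: { userid: { $in: userIds } } },
        { $sort: { date: -1 } },                               // newest first
        { $group: { _id: "$userid", latest: { $first: "$$ROOT" } } }
    ])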
[16:59:20] <bdrewery> I realize there are no transactions. I want to run an update that may damage data. Is there a good way to do this and verify the data before it is finally committed, without just shutting down mongod and "copying" the db files first?
[17:00:43] <cheeser> do a mongodump, mongorestore to a test db. test there.
[17:48:51] <pamp> Is it possible to separate indexes and DBs onto different drives?
[17:58:49] <cheeser> pamp: there's this http://docs.mongodb.org/manual/reference/configuration-options/#storage.directoryPerDB sort of along those lines
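With that option each database gets its own subdirectory under dbPath, which can then be symlinked or mounted onto a separate drive. A sketch of the mongod.conf (YAML):

    storage:
      dbPath: /var/lib/mongodb
      directoryPerDB: true    # one subdirectory per database

For splitting indexes specifically from data, WiredTiger (3.0+) also has storage.wiredTiger.engineConfig.directoryForIndexes.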
[19:21:16] <ngl> I am told this is "done!", but it is not in my "templates" gfs bucket. What is wrong?
[19:21:20] <ngl> mongofiles -d svm -c templates put master-flash.sh
[19:23:12] <ngl> Please? I looked up and pointed at the damn Canadian geese that are pecking cars here and my neck snapped and I can't look right so it would be really nice if somebody could help a brother out so I can go try to get fixed.
[19:37:46] <Siecje> What is the recommended way to add attributes when a relationship is added?
[19:46:47] <ToeSnacks> I have been draining a shard in my cluster for 3 days and the chunks count has not gone down at all, how can I verify Mongo is actually draining and not in a hung state?
[20:11:24] <yhager> I've dropped a collection, but it's still there - taking up disk space in /var/lib/mongodb, and counted in dbStats - but not shown in 'show collections'. How can I truly get rid of it?
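If this is MMAPv1, dropping a collection frees space inside the data files but never shrinks the files themselves, which would explain the dbStats numbers. The heavyweight way to actually reclaim the disk space (it blocks the database and needs free space on the order of the data size) is:

    db.repairDatabase()   // rewrites the database's files compactly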
[20:25:11] <gswallow> I'm running through some tests because our lead dev said to do it, but I can't find any reason *why* I'm testing this other than anecdotes in Google Groups. Why does mongoexport --eval '{ $and: [ { "_id": { $exists: true } }, ….' appear to run so much faster than mongoexport using indexes?
[20:48:26] <tejasmanohar> anyone familiar w/ connecting to mongo from console?
[20:48:51] <tejasmanohar> i have the URI mongodb://something:somethingDEV@ip/something-dev
[20:50:21] <tejasmanohar> when i do `mongo {{ that URI }}` it says invalid port number something:somethingDEV@ip in connection string {{ whole string }}
[20:52:36] <cheeser> mongo --host ip -u something -p something something-dev
[20:52:58] <tejasmanohar> oh you cant use the URI, cheeser ?
[20:53:40] <cheeser> doesn't look like you can from the cli
[22:34:03] <pjammer> evening. Is there a method equivalent to rs.uninitiate() for those of us who have run rs.initiate() more than once in a replica set?
[22:52:14] <blizzow> Can I use mongodump to backup all of my collections/dbs via a mongos instance for a replicated sharded cluster? (I don't particularly care about the local DBs on each replica set.)
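That generally works: mongodump connects to a mongos like any other client and dumps every database it can see (local is per-node and isn't exposed through mongos anyway). A placeholder invocation, with the usual caveat that the balancer should be stopped first (sh.stopBalancer()) if a consistent snapshot matters:

    mongodump --host mongos.example.com --port 27017 --out /backup/cluster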
[23:53:37] <ToeSnacks> I have been draining a shard in my cluster for 3 days and the remaining chunks count has not gone down at all, how can I verify Mongo is actually draining and not in a hung state?
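Two ways to check whether the drain is actually progressing, run from a mongos; "shard0003" is a placeholder shard name:

    // re-running removeShard reports the current draining status,
    // including the number of chunks remaining
    db.adminCommand({ removeShard: "shard0003" })
    // or count chunks per shard directly from the config database
    db.getSiblingDB("config").chunks.aggregate([
        { $group: { _id: "$shard", chunks: { $sum: 1 } } }
    ])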