#mongodb logs for Monday the 15th of December, 2014

[00:33:49] <jiffe> man is this ever gonna finish.. "secs_running" : 118108
[04:09:51] <linocisco> http://pastebin.ubuntu.com/9523887/
[04:51:49] <linocisco> http://pastebin.ubuntu.com/9523887/
[04:53:57] <joannac> linocisco: yes. is there a question?
[05:09:12] <linocisco> joannac, what is wrong and why?
[05:14:08] <joannac> linocisco: there is a line in your log starting with "ERROR" which quite clearly states mongod can't start because the directory it puts its files in by default (\data\db) does not exist
[05:14:14] <joannac> create that directory
[05:14:29] <linocisco> joannac, ok
[05:14:37] <joannac> alternatively, start mongod with the --dbpath parameter
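In concrete terms, joannac's two fixes look like this (assuming Windows, given the \data\db path in the error; the alternative data path is a placeholder):

    rem Fix 1: create the default data directory mongod expects
    mkdir \data\db

    rem Fix 2: or point mongod at a directory that already exists
    mongod --dbpath C:\mongodb\data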
[05:51:41] <devians> is there any way to list all users of all dbs? i'm trying to create my first admin user by utilising ansible but it seems to create the db but not the user
[05:52:09] <joannac> other than cycling through them, no
[05:52:15] <joannac> (assuming mongodb 2.4.x)
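A sketch of that cycling in the shell, assuming 2.4's per-database system.users layout and a connection allowed to read every db:

    // Print every user of every database (MongoDB 2.4 layout)
    db.getMongo().getDBNames().forEach(function (name) {
        db.getSiblingDB(name).system.users.find().forEach(function (u) {
            print(name + ": " + u.user);
        });
    });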
[05:57:49] <devians> hmm. ok, well the thing that's really perplexing me at the moment is that I cannot seem to log in with users I create
[06:00:47] <joannac> devians: huh. gives you "auth failed"?
[06:00:56] <joannac> devians: what version?
[06:01:20] <devians> 2.6.6, i think it's my error, not mongo's.
[06:01:42] <devians> i'm just trying to get my first steps going, ie a user admin and then a root account and then a db specific account
[06:01:53] <devians> i've managed the first and logged into the admin db only with it.
[06:04:47] <joannac> okay. then what?
[06:06:05] <devians> I guess I'm just trying to get my head around how to administrate mongo. I'm deploying the server via ansible and ansible provides a mongo user module but as far as I can make out, for whatever reason it doesn't work. so I'm trying to get my head around the scripting stuff so I can deploy my users via a javascript script.
[06:06:55] <joannac> right. might I suggest doing it by hand in the mongo shell, and then trying via the javascript script?
[06:11:21] <devians> yeah i've gotten that far more or less
[06:11:59] <joannac> okay
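For anyone following along, the by-hand version in a 2.6 shell is roughly this (role names per the 2.6 docs; the user name and password are placeholders):

    use admin
    db.createUser({
        user: "useradmin",
        pwd: "changeme",  // placeholder
        roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
    })
    db.auth("useradmin", "changeme")  // verify the login works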
[08:36:38] <kali> hey, can somebody point me to a reference doc to help interpret explain() and profiling data? i'm specifically curious about stuff like COLLSCAN, KEEP_MUTATIONS and so on...
[08:39:22] <reese> hi
[08:42:41] <Guest44126> i try to remove shard01 which is the primary shard, but i still have 5 chunks which are not draining, could i move the primary shard to another one or not? http://pastie.org/9781256
[10:11:35] <Guest44126> is there anyone?
[10:15:36] <sirfilip> morning
[12:33:47] <Guest44126> i have a problem with removing a shard, could u help me?
[12:34:07] <flyingkiwi> Guest44126, state the problem
[12:36:26] <Guest44126> i cannot remove shard because i have move chunk error: http://pastie.org/9781623
[12:38:02] <Guest44126> flyingkiwi, do you know whats wrong?
[12:43:40] <arussel> If I understand properly, a mongo driver will know the type of a field depending on a code byte for the field. What would it be for a field with value NaN?
[13:18:18] <kali> arussel: i'm not sure what you're asking
[13:18:22] <kali> NaN has a type
[13:18:46] <arussel> kali: what is its type ?
[13:19:20] <arussel> my driver (reactive mongo) tries to deserialize it as a Number and throws an exception
[13:21:00] <kali> arussel: NaN is a special value for the floating points types. so in mongo context my guess would be it's a double
[13:21:28] <arussel> so reactive mongo is right to think it is a Number but shouldn't call .toDouble on it.
[13:21:41] <arussel> kali: thanks
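kali's guess is easy to check in the shell; a throwaway collection (name made up here) shows NaN round-tripping as BSON type 1, i.e. double:

    db.nantest.insert({ x: NaN })
    db.nantest.find({ x: { $type: 1 } }).count()  // 1 -- NaN matches type "double"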
[13:31:23] <Guest44126> how to fix this problem: http://pastie.org/9781623 ?
[13:58:30] <Fractalien> Hi all! I am surprised that given an index on (a,b) and a fixed choice for a, $or:[{b:1},{b:2}] doesn't use the index whereas b:{$in:[1,2]} does. Should I not be? I couldn't find an open ticket about this although this behaviour is still present in 2.6.6...
[14:24:18] <remonvv> NaN.toDouble() should work
[14:24:48] <remonvv> Fractalien: Paste your index and query
[14:25:24] <Fractalien> use test;
[14:25:24] <Fractalien> db.createCollection('indexTest')
[14:25:24] <Fractalien> for (var i=1; i<=10; i++) { for (var j=1; j<=10; j++) { db.indexTest.insert( { a:i, b:j } ) } }
[14:25:24] <Fractalien> db.indexTest.ensureIndex({ a:1, b:1 })
[14:25:24] <Fractalien> db.indexTest.find({a:4, $or:[{b:1},{b:2}]}).explain() // uses index(a)
[14:25:24] <Fractalien> db.indexTest.find({a:4, b:{$in:[1,2]}}).explain() // uses index(a,b)
[14:25:41] <cheeser> use a pastebin
[14:26:49] <Fractalien> Sorry... http://pastebin.com/mRDcjqVe
[14:31:55] <jiffe> well the process has changed from checkShardingIndex to splitVector
[14:39:12] <cheeser> Fractalien: i get this for both queries:
[14:39:12] <cheeser> "cursor": "BtreeCursor a_1_b_1",
[14:40:18] <Fractalien> Yes, I misworded this a bit. The index is used in both cases, but only in the $in case is it used for the condition on b.
[14:40:30] <Guest44126> how to fix this problem: http://pastie.org/9781623 ?
[14:43:02] <remonvv> You'd have to rewrite it to a top level $or I think, I've run into this before, so: {$or:[{a:4, b:1}, {a:4, b:2}]}
[14:43:40] <remonvv> Not sure if it technically cannot use the "b" part of the index or if MongoDB simply doesn't do it (yet).
[14:44:15] <remonvv> Pretty sure $in is the better implementation though.
[14:44:26] <remonvv> Oh, he's gone.
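remonvv's rewrite, applied to Fractalien's own test collection from the paste above:

    // Each $or branch is a full (a,b) equality, so each can use the compound index
    db.indexTest.find({ $or: [ { a: 4, b: 1 }, { a: 4, b: 2 } ] }).explain()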
[15:38:38] <jiffe> is splitVector the last operation in an sh.shardCollection call?
[15:39:13] <jiffe> from the sound of it splitVector takes as long as checkShardingIndex so it probably won't complete until tomorrow night
[16:04:48] <yzhao> what is the proper operator that updates a document based on the existence of some field in the document?
[16:12:52] <jiffe> http://docs.mongodb.org/manual/reference/operator/query/exists/ ?
[16:16:59] <yzhao> thanks!
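A minimal sketch of an existence-gated update with $exists (collection and field names are invented):

    // Flag every document that still carries a "legacyId" field
    db.things.update(
        { legacyId: { $exists: true } },
        { $set: { migrated: true } },
        { multi: true }
    )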
[16:17:14] <yzhao> ps if mongodb ever plans on supporting multi-document transaction semantics, i’d give you guys a cookie
[16:17:18] <yzhao> i know we can roll our own
[16:17:21] <yzhao> but it’s painful.
[16:39:57] <yzhao> also, I need to remove some fields from a document, but the values of these fields are determined by values from another field in the same document
[16:40:07] <yzhao> how do I make such references?
[16:40:20] <cheeser> say what now?
[16:41:37] <yzhao> cheeser: http://pastebin.com/yf4z0e6t I have a graph structure, and if I remove node "a", I want to remove edges that point to a from node "b"
[16:41:43] <yzhao> but I want to do it atomically
[16:41:45] <yzhao> in a single query
[16:42:04] <cheeser> i don't think you can in mongo
[16:42:07] <wayne> hi.
[16:42:27] <cheeser> hrm.
[16:42:28] <wayne> if i have docs indexed by a datetime field, and the datetimes are exactly the same
[16:42:49] <wayne> does sort guarantee any order?
[16:42:49] <cheeser> your index is useless
[16:42:58] <cheeser> depends on your sort key
[16:43:08] <wayne> on the datetime field
[16:43:17] <wayne> i suppose i could use a secondary sort key
[16:43:27] <cheeser> you'd have to
[16:43:28] <wayne> if that's unique, then my hunch is that the order will be preserved?
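That hunch matches the standard advice: add _id (or any unique field) as a tiebreaker so equal datetimes still sort deterministically (field name assumed):

    // Ties on createdAt are broken by the unique _id
    db.events.find().sort({ createdAt: 1, _id: 1 })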
[16:43:50] <yzhao> http://stackoverflow.com/questions/3788256/mongodb-updating-documents-using-data-from-the-same-document/3792958#3792958
[16:45:46] <yzhao> ugh if i need to implement a mutex using mongodb i’m going to kill myself
[16:49:25] <wayne> is it safe to assume constant time $max query on an indexed field?
[16:56:09] <alexi5> hello
[18:23:38] <GothAlice> yzhao: Sad to say, but I suspect your issue boils down to using a non-graph database as a graph database. It is never recommended to do this; similar to needing multi-document transactional support right now, there are better tools for the job. :/
[18:24:40] <yzhao> GothAlice: lol SQL?
[18:24:47] <GothAlice> (Storing graph data in a non-graph DB killed my last project after we got hackernews'd and lifehacker'd the same week. Geometric slowdown with the increase in users and some unwise choices about how to bulk process those connections.)
[18:24:55] <GothAlice> No, use a graph database. A real graph database.
[18:25:19] <GothAlice> http://readwrite.com/2011/04/20/5-graph-databases-to-consider < neo4j is popular, we use it at work now.
[18:26:20] <GothAlice> "I want to store a graph" should always resolve to "I need a graph database" in one's brain. "I need strong transactional and relational guarantees" should always resolve to "I need a relational database that supports transactions", etc.
[18:27:37] <yzhao> my graphs are pretty small though
[18:27:39] <GothAlice> As for storing extremely light graphs (i.e. where you never query more than one edge away) MongoDB can be acceptable, but instead of ensuring atomic "all of these references are gone" to ensure queries always get back good data, flip it. Handle the "eventually consistent" data gracefully.
[18:27:45] <yzhao> maybe on the order of ~1000 nodes, ~10k edges tops
[18:28:25] <GothAlice> Do you ever need to query more than one association deep?
[18:28:56] <yzhao> Well I’m storing the whole graph in a single document
[18:28:59] <yzhao> 16MB is more than enough
[18:29:05] <yzhao> so I get atomicity that way.
[18:29:15] <GothAlice> So you're not using any MongoDB-side queries at all, then? :/
[18:29:30] <yzhao> But it doesn’t solve the problem of me needing to update a field in the graph by using another field
[18:29:37] <yzhao> (within the same document)
[18:30:04] <yzhao> not really no
[18:30:08] <GothAlice> You have to load, then update. You can limit the load to just the field you need, and you can also use aggregate queries or map/reduce to output a new, modified collection.
[18:30:09] <yzhao> i don’t even need DFS/BFS
[18:30:47] <GothAlice> Obviously, though, if you're storing a complex nested structure, you might not even be able to load just the field you need. Your choice to ignore the fact that MongoDB is a data storage engine is somewhat limiting, here.
[18:31:06] <yzhao> well alternatively I can use eval()
[18:31:07] <yzhao> which is a DB lock
[18:33:07] <GothAlice> The next major version of MongoDB is planned to have a new storage engine capable of per-document locks. In your setup (monolithic document) you'd gain no benefit from this. :/ Surely you don't need the entire graph every time you intend to ask the graph _any_ question?
[18:34:27] <yzhao> per-document locks for what?
[18:34:30] <yzhao> for eval()?
[18:34:31] <GothAlice> Write.
[18:35:37] <yzhao> oh
[18:35:40] <yzhao> I see
[18:35:43] <yzhao> never mind I can’t use eval either
[18:35:56] <yzhao> because evals can die half way through and i’m boned
[18:36:28] <GothAlice> Thus the general approach: instead of worrying overly about ensuring your data is always consistent, handle the case where the data *isn't* consistent gracefully.
[18:36:46] <GothAlice> This is also the general approach because replication is an eventually-consistent process. There is replication lag.
[18:37:08] <yzhao> that’s harder
[18:37:08] <yzhao> i’d argue
[18:38:43] <yzhao> I wonder how cayley does it
[18:38:48] <yzhao> since it supports MongoDB as a backend
[18:39:39] <GothAlice> Most graph implementations on MongoDB have a collection storing edges, or embed in node documents the connections they have (depends somewhat on if your edges are bidirectional or unidirectional).
[18:39:50] <GothAlice> AFAIK none go the "monolithic" approach, since that approach is unmaintainable.
[18:42:37] <GothAlice> http://www.slideshare.net/fehguy/building-a-directed-graph-with-mongodb is one potential resource.
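The edge-collection layout GothAlice describes might look like this for yzhao's example (document shapes are illustrative, not taken from the pastebin):

    // One document per directed edge, indexed on both endpoints
    db.edges.insert({ from: "b", to: "a" })
    db.edges.ensureIndex({ from: 1 })
    db.edges.ensureIndex({ to: 1 })

    // Removing node "a" means deleting its node doc plus any edges touching it
    db.nodes.remove({ _id: "a" })
    db.edges.remove({ $or: [ { from: "a" }, { to: "a" } ] })

The two removes are not atomic, which is exactly the eventual-consistency trade-off discussed above.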
[18:51:34] <jhonkola> What privileges are needed in mongo2.4 (tokumx 2.0) for mongorestore to restore single collection without dropping?
[18:51:43] <jhonkola> the dump is from mongo 2.6
[18:54:51] <yzhao> is there a smart way to unset all fields X whose sub-field Y, i.e. X.Y, is equal to Z?
[19:00:24] <cheeser> db.coll.update({"x.y" : z}, { $unset : { x : "" } }, false, true);
[19:01:21] <yzhao> but I don’t know x a priori
[19:01:25] <yzhao> like x is a wildcard
[19:05:24] <cheeser> um. what?
[19:05:32] <cheeser> you build your update statement when you know x
[19:05:57] <yzhao> i only want to update the X’s where X.y is equal to some Z
[19:06:24] <cheeser> i get that
[19:06:33] <yzhao> X is a field
[19:06:41] <yzhao> Also, this is all within a single document
[19:06:46] <cheeser> ok. how does that update statement not apply?
[19:07:52] <jhonkola> What privileges are needed in mongo2.4 (tokumx 2.0) for mongorestore to restore single collection without dropping?
[19:08:04] <tjbiddle> How can I repair a database that has journaling enabled? (At least I think it has journaling enabled - how can I check as well? I see a file in the journal/ folder)
[19:08:08] <jhonkola> I have tried using admin but no luck
[19:09:31] <yzhao> cheeser: here’s a more concrete example. http://pastebin.com/scHYnTAG When I remove the node “a”, I also want to unset the field b.to.a
[19:09:43] <yzhao> but I don’t know the identity of b a priori
[19:11:01] <cheeser> like i said this morning, i don't think you can do that in one statement in mongo
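With no server-side wildcard field operator, the workaround is a client-side read-modify-write along these lines (a sketch only; it is not atomic, and the document shape and someId are guessed from yzhao's description):

    var doc = db.graph.findOne({ _id: someId });  // someId: placeholder
    var unset = {};
    Object.keys(doc).forEach(function (k) {
        if (doc[k] && doc[k].y === "Z") {  // field X whose X.y equals Z
            unset[k] = "";
        }
    });
    if (Object.keys(unset).length > 0) {
        db.graph.update({ _id: someId }, { $unset: unset });
    }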
[19:13:58] <diamonds> how to disable mongo starting automatically on Ubuntu?
[19:14:47] <diamonds> apparently the startup script isn't "Linux Standard Base" (it doesn't have start/stop tasks) so `sudo update-rc.d mongod` fails: http://hastebin.com/raw/aqotutasus
[19:15:28] <jhonkola> What privileges are needed in mongo2.4 (tokumx 2.0) for mongorestore to restore single collection without dropping?
[19:16:13] <jhonkola> I have tried using admin but no luck. Admin has readWriteAnyDatabase and UserAdminAnyDatabase and ClusterAdmin which are enough for everything I do on shell
[19:17:38] <cheeser> jhonkola: please stop repeating. if someone could've answered (s)he would have.
[19:18:08] <jhonkola> ok
[19:20:46] <diamonds> nm found it: http://upstart.ubuntu.com/cookbook/#disabling-a-job-from-automatically-starting
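For upstart-managed jobs like Ubuntu's mongod, that cookbook recipe boils down to a one-line override file:

    # Disable automatic start at boot without uninstalling mongod
    echo manual | sudo tee /etc/init/mongod.override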
[19:22:49] <tjbiddle> Will deleting a 16kb journal file be okay? I really don't care about the data - it's a load test instance. As long as it doesn't render the machine even worse off then I'm okay
[19:23:41] <tjbiddle> Current issue is "Deleted record list corrupted in bucket .... throwing fatal assertion"
[19:26:25] <jmar777> Would anyone in here mind taking a peek at my question on SO? http://stackoverflow.com/questions/27489973/how-to-simply-this-aggregation-framework-query-with-date-formatting-compariso
[19:26:55] <jmar777> I'm feeling like the query is excessively complex relative to what I'm trying to accomplish. Am I missing something obvious?
[19:51:31] <jhonkola> mongorestore is not finding a user when trying to restore
[19:52:29] <jhonkola> I have verified that the user exists in db.system.users
[19:52:47] <jhonkola> The user is in 2.2 format (with readOnly=false), with "user" field set to username
[19:53:09] <jhonkola> mongorestore is, according to the log, looking for username@Database and not finding it
[19:53:44] <jhonkola> could this be a mismatch between a 2.2 user and 2.4 mongorestore (actually tokuMX 2.0)?
[20:40:21] <zamnuts> using $ne on an indexed string field, explain says it uses BtreeCursor, but millis are still high; 15s with $ne vs 0.189s w/o $ne; i'm wondering if I can do a $lt:'value' || $gt:'value' instead of $ne - has anybody tried this before?
[21:01:01] <zamnuts> for the record, $or:[{field:{$lt:'value'}},{field:{$gt:'value'}}] behaves like field:'value' but functions like field:{$ne:'value'}
[21:01:16] <zamnuts> in terms of performance w/ an index!
[21:02:00] <zamnuts> i'm still curious if anybody has used a similar work-around in a production env and how is it performing? also, i'm inquiring about any possible caveats i'm not seeing?
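Spelled out, the workaround zamnuts describes is (collection, field, and value are placeholders):

    // Behaves like field != 'value', but as two index-friendly range scans
    db.coll.find({ $or: [ { field: { $lt: "value" } },
                          { field: { $gt: "value" } } ] })

One caveat worth checking: $ne also matches documents where the field is missing or holds a non-string value, while the string range form matches neither, so the two are only equivalent when the field is always present and always a string.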
[21:10:04] <engirth> how can I insert documents into a collection and put them always on top? I understand that I can do a sort({$natural: -1}) but curious if I can insert them always at the top
[21:14:33] <zamnuts> engirth, you can't really control how documents are sorted when inserted... mongodb will put them wherever there is available storage on disk, and when you retrieve them, it'll be in that order
[21:15:08] <zamnuts> engirth, so the only way to control order is via sort({_id:-1}) or similar...
[21:16:28] <engirth> zamnuts: thank you
[21:46:17] <boko123> hello
[21:47:03] <boko123> I’ve got a question about adding an element to an array while simultaneously modifying another element in that array. namely, is this possible in one operation?
[21:47:55] <GothAlice> boko123: Aye; you can. You'd need to be very careful about the order in which operations are applied, however.
[21:50:38] <boko123> @GothAlice. I’m trying to do this without overwriting the entire array. I’m getting an error if the operation is split into {$set: {"doc.0.title": "new name"}, $push: {doc: {0: {title: "another name"}}}}
[21:51:14] <boko123> I get a “Cannot update ‘…’ and '...' at the same time”
[21:51:28] <GothAlice> That is a highly confusing thing.
[21:51:38] <GothAlice> Is "0" a valid field name in your structure?
[21:52:09] <boko123> @GothAlice: sorry, I’m transliterating my query from a log statement with a different format
[21:53:56] <boko123> that’s the gist of my query though: 2 separate parts, one in the $set to modify an existing element, the other in a $push or a $pushAll or an $addToSet to add the other element
[21:54:23] <boko123> GothAlice, I’d love any tips on how to do this operation in one query
[21:57:28] <GothAlice> Hmm. Could have sworn that was possible. In testing, it appears not.
[21:57:45] <GothAlice> http://cl.ly/image/2f3s3L2R3h0w
[22:00:58] <boko123> GothAlice, yep, that’s what I’m seeing. Seems like there should be a way to do this
[22:02:00] <GothAlice> I would have thought that the update runner would evaluate the changes in (SON) order, allowing such potentially ambiguous updates to be expressed unambiguously. :/
[22:02:19] <GothAlice> Sounds like something to suggest on JIRA, though, boko123. ;)
[22:03:06] <boko123> GothAlice, will do. Thanks for your time investigating it.
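A condensed reproduction of the conflict (field names follow boko123's description; the error text is as quoted above):

    db.coll.update(
        { _id: 1 },
        { $set:  { "doc.0.title": "new name" },
          $push: { doc: { title: "another name" } } }
    )
    // => error: cannot update 'doc' and 'doc.0.title' at the same time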
[22:07:14] <dvz> what could be the reason if mongodb is working properly most of the time, but under heavy load it returns nothing to the web app (node.js), while the local command line mongo client works fine and i am able to query/insert data
[22:07:27] <dvz> the issues start after the heavy load
[22:08:13] <dvz> there are several user facing apps that use the same mongodb server, and all of them get empty result sets from mongodb and can’t insert anything
[22:08:20] <dvz> what could be causing this? any ideas?
[22:46:55] <zamnuts> dvz, are these find operations? are you working with the cursor? are you sure it is not a race condition?
[23:02:16] <Starcraftmazter> hi
[23:02:25] <Starcraftmazter> what are some aspects of a project that would make it a good fit for mongo?
[23:09:58] <zamnuts> question: with data structure {arr:[{prop:'value'}]} and index {'arr.prop':1}, why doesn't the selector 'arr.0.prop' use the index?
[23:21:48] <delinquentme> so I've got an output .json object ... and mongoimport fails to separate out the top level items ... everything is jammed into a single item in the collection
[23:24:19] <Boomtime> zamnuts: I assume because the query does not match the index - the index does not know the ordinals involved, so at best the index could only be used to reduce the number of matches to those documents which have a match (not necessarily at index zero)
[23:25:51] <zamnuts> Boomtime, that is what i figured; assuming the query planner could determine the correct index based on the ordinal, is there reason to believe using the multikey index could be possible easily? I'm not familiar with mongodb's index internals...
[23:26:31] <zamnuts> i managed to find this improvement request, open since 2011 :( https://jira.mongodb.org/browse/SERVER-3368
[23:28:15] <Boomtime> zamnuts: regarding "arr.0.prop" do you want it to match this document: { arr: { "0": { prop: "value" } } }
[23:28:57] <zamnuts> Boomtime, that document is not much different than {arr:[{prop:'value'}]}, no?
[23:29:14] <Boomtime> my question is: do you think it should match?
[23:29:21] <zamnuts> i think it should match
[23:29:33] <Boomtime> but it is not in the index { arr: 1 }
[23:29:41] <zamnuts> oh snap... i see what you did there.
[23:29:45] <Boomtime> :D
[23:30:11] <Boomtime> you can create an index just for "arr.0.prop" btw
[23:30:30] <zamnuts> Boomtime, but {'arr.prop'} shouldn't match the "0" document in your example either!
[23:30:57] <zamnuts> Boomtime, i did that, but now i have one for BOTH arr.prop and arr.0.prop, which seems redundant
[23:31:11] <Boomtime> yeah, that's annoying
[23:31:25] <zamnuts> s/{'arr.prop'}/index 'arr.prop'/
[23:31:56] <zamnuts> unnecessary RAM usage IMO.
[23:32:19] <Boomtime> you know the problem, what is the solution?
[23:32:30] <Boomtime> arr.0.prop is not the same thing
[23:33:09] <Boomtime> it sounds like you are using index zero for something special, you should consider the problem might be your schema
[23:33:36] <zamnuts> with {arr:{0:{prop:'value'}}} it IS different, i agree there, but with that, the index arr.prop is invalid too, since arr has no property 'prop'
[23:34:18] <zamnuts> i agree, the schema may be the problem, i track a timestamp in 'arr' and $sort:{timestamp:-1} on $push
[23:34:29] <zamnuts> so index 0 always has the latest iteration
[23:35:23] <zamnuts> this is specifically a running log-type collection... but the "cached"/"rendered" version, the raw "log" is just a series of partial documents in a different collection
[23:35:42] <zamnuts> trying to avoid unnecessary aggregation in most cases
[23:35:50] <zamnuts> ...if that makes sense?
[23:38:55] <Boomtime> I have several points of interest in the above:
[23:39:29] <Boomtime> "the index arr.prop is invalid too, since arr has no property 'prop'" <- the index is perfectly "valid", it just has a null entry for that document
[23:40:02] <zamnuts> delinquentme, is your json file an array, or a series of json objects? see http://docs.mongodb.org/manual/reference/program/mongoimport/#cmdoption--jsonArray
[23:40:19] <Boomtime> "timestamp in 'arr' and $sort:{timestamp:-1} on $push" + "so index 0 always has the latest iteration" <- you $push the latest?
[23:40:33] <Boomtime> $push appends at the end of an array
[23:40:34] <zamnuts> Boomtime, correct on validity of the index - my bad on wording there, i completely understand that
[23:41:49] <zamnuts> Boomtime, correct, i $push the latest: $push:{arr:{$each:[...],$sort:{timestamp:-1}}}
[23:42:15] <zamnuts> since arr.$last.prop (or similar) does not exist in mongodb, the latest is always 0
[23:42:30] <Boomtime> ah, $sort
[23:42:49] <Boomtime> Apparently I skimmed over that bit
[23:42:53] <zamnuts> ;)
[23:43:16] <zamnuts> work-around for lack of $unshift
[23:43:41] <Boomtime> $pop?
[23:43:58] <zamnuts> $pop is removal, no?
[23:44:01] <Boomtime> yes
[23:44:09] <delinquentme> zamnuts, $ mongoimport --jsonArray --db hackernews --collection keywords --file hn_kw_data.json
[23:44:10] <Boomtime> what is "unshift" then?
[23:44:47] <zamnuts> push -> add to the end, pop -> remove from the end, unshift -> add to the beginning, shift -> remove from the beginning
[23:44:53] <delinquentme> I was under the impression that I could literally just dump .json files into mongodb
[23:45:17] <delinquentme> and I've verified that it's correctly formatted json (using python's json.load(filename.json))
[23:45:19] <Boomtime> $pop can remove from the beginning
[23:45:23] <zamnuts> delinquentme, you can, but it depends on the format! mongodb expects {}\n{}\n{}... by default
[23:45:49] <Boomtime> "The $pop operator removes the first or last element of an array." -> http://docs.mongodb.org/manual/reference/operator/update/pop/
[23:46:32] <zamnuts> Boomtime, yea $pop can with a -1, i'm ADDING tho, which would be an unshift
[23:46:39] <delinquentme> zamnuts, Oh top level items are only separated by newlines?
[23:47:46] <zamnuts> TBH, i'm not sure about \n _exactly_, but top level items (or documents if you will...) are separate, however you may be working with a file like [{toplevel},{toplevel},{toplevel}] in which case --jsonArray is required
[23:47:52] <zamnuts> delinquentme, ^
[23:48:16] <delinquentme> maybs im on an old version ...
[23:48:17] <Boomtime> zamnuts: if you are using 2.6 then you can $push at the start
[23:48:35] <delinquentme> MongoDB shell version: 2.6.5
[23:48:51] <Boomtime> probably then, what is the server version?
[23:49:31] <delinquentme> 2.6.5
[23:49:53] <zamnuts> Boomtime, $position:0 will insert at position 0 and not overwrite? orly....
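The 2.6 feature Boomtime means is $push with $position, which inserts rather than overwrites (a sketch; collection and values are placeholders):

    // An "unshift": insert at the head of the array (MongoDB 2.6+)
    db.coll.update(
        { _id: 1 },
        { $push: { arr: { $each: [ { prop: "newest" } ], $position: 0 } } }
    )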
[23:49:59] <delinquentme> and its only a 4.1M .json file ... so that shouldn't be an issue
[23:52:24] <zamnuts> delinquentme, is your json file just a big array? [...] ?
[23:53:21] <delinquentme> zamnuts, {"2011": {"February": {"comreal": 1, "limited": 1, "yammerjoe": 1,
[23:53:41] <delinquentme> would it be better if I wrap it in square brackets?
[23:58:56] <zamnuts> delinquentme, probably not, then you'll have to remove the extra commas :)
[23:59:03] <zamnuts> does --jsonArray not work?
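For reference, the two input shapes mongoimport accepts (contents here are placeholders). The default is newline-delimited documents, with no commas between them:

    { "year": 2011, "month": "February" }
    { "year": 2012, "month": "March" }

With --jsonArray, the whole file is one top-level JSON array:

    [ { "year": 2011 }, { "year": 2012 } ]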