PMXBOT Log file Viewer


#mongodb logs for Tuesday the 22nd of July, 2014

[02:46:37] <bchgys> Hi folks. Trying to do this: db.collection.update({phrase: phr}, {{$addToSet: {posts: fred._id}},{$inc: {count: 1}}},{upsert: true}); in mongo shell. Getting 'unexpected token {' error. Can I not do $addToSet and $inc in the same query, or is there something else that I'm missing?
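The 'unexpected token {' comes from the extra pair of braces wrapping the two operators: `$addToSet` and `$inc` can sit side by side in a single update document. A minimal sketch, using `phr` and `fred` as hypothetical stand-ins for the values from the question:

```javascript
// $addToSet and $inc are siblings inside ONE update document -- no extra
// object wrapping them. `phr` and `fred` are made-up placeholder values.
const phr = "hello world";
const fred = { _id: "53ce0000aabbccddeeff0011" };

const filter = { phrase: phr };
const update = {
  $addToSet: { posts: fred._id },
  $inc: { count: 1 },
};

// In the mongo shell:
// db.collection.update(filter, update, { upsert: true });
```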
[03:19:37] <UnNaturalHigh> I was wondering if anyone here can help me figure out how I might make it such that I have two fields, index_name and date, that together must be unique, but can individually be the same?
[03:20:01] <UnNaturalHigh> *can be the same?
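What UnNaturalHigh describes is exactly a compound unique index: uniqueness is enforced on the pair of values, while each field on its own may repeat. A sketch using the field names from the question (the collection name is made up):

```javascript
// A compound unique index: the (index_name, date) PAIR must be unique,
// but either field alone may repeat across documents.
const keys = { index_name: 1, date: 1 };
const options = { unique: true };

// In the mongo shell (2.6-era helper):
// db.mycollection.ensureIndex(keys, options);
```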
[03:20:38] <lethjakman> why is mongo considered so fast?
[03:20:55] <lethjakman> I see a lot of hype about it, but I can't understand how it'd magically be faster than something like postgres?
[03:21:03] <lethjakman> is it the lack of foreign keys?
[03:21:07] <lethjakman> or something to that effect?
[03:38:16] <nodejs> hello
[03:39:09] <nodejs> i have a question
[03:40:58] <nodejs> can i distribute mongodb binaries in my proprietary program?
[05:30:30] <FurqanZafarIQ> hello
[05:43:06] <FurqanZafarIQ> nodejs mongodb driver drops connection when idle
[06:06:43] <joannac> FurqanZafarIQ: even after turning on keepalive?
[06:07:41] <FurqanZafarIQ> no i didnt try keepalive. i am connecting to the db and storing a reference to it in a global variable
[06:07:50] <FurqanZafarIQ> every future request then uses this variable
[06:08:29] <FurqanZafarIQ> but when the server receives a request after 3 or more hours, the insert request fails silently or sometimes throws an error (unable to connect)
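joannac's keepalive suggestion is the usual fix for long-idle connections getting cut by firewalls or load balancers. A sketch of what that looks like in the 1.x node.js driver; the exact option shape varies by driver version, so treat this as an assumption to check against your driver's docs:

```javascript
// Enable TCP keepalive on the pooled sockets so idle connections are not
// silently dropped. Option shape assumed from the 1.x node.js driver.
const options = {
  server: { socketOptions: { keepAlive: 1, connectTimeoutMS: 30000 } },
};

// const { MongoClient } = require("mongodb");
// MongoClient.connect("mongodb://localhost:27017/mydb", options, (err, db) => {
//   /* store db in the module-level variable as before */
// });
```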
[07:36:32] <Faisal_> hello all
[07:36:53] <Faisal_> needed an advice from you guys
[07:37:37] <Faisal_> I have been using Mongodb with django and have been saving data in the form of embedded documents
[07:37:47] <Faisal_> But I am facing this issue
[07:37:49] <Faisal_> http://stackoverflow.com/questions/24866507/non-field-errors-expected-a-list-of-items-when-saving-data-in-django
[07:37:59] <Faisal_> Can anyone tell me what could be wrong ?
[07:49:04] <Faisal_> any suggestions ?
[07:49:08] <Faisal_> for me :(
[07:51:55] <lqez> Faisal_: If you could make a http://runnable.com/ for this problem, it's much better to examine together
[07:52:55] <lqez> sample is here : http://runnable.com/U1_O0Lim2HpDJ9XV/using-summernoteinplacewidget-with-bootstrap3-for-python-and-django
[07:55:43] <lqez> Faisal_: it was caused by https://github.com/tomchristie/django-rest-framework/blob/master/rest_framework/serializers.py#L524-543 but it's a very complicated mix: django + django-restframework + django-mongodb-engine
[07:55:50] <lqez> so it's very hard to reproduce same error
[07:57:46] <lqez> Additionally, the problem was not up to django-mongodb-engine, just related with django-restframework while serializing.
[07:57:56] <lqez> (IMHO)
[07:58:33] <Faisal_> ohh
[07:59:09] <Faisal_> lqueq_ means problem in my serializers ?
[07:59:48] <lqez> Or your request. I'm not sure. But the fact is : error message was from django-restframework
[07:59:52] <lqez> (that i mentioned above)
[08:00:51] <Faisal_> Alright
[08:04:52] <lqez> Faisal_: I think django-restframework cannot handle multiple data creation on POST.
[08:04:57] <lqez> ref: https://groups.google.com/forum/#!topic/django-rest-framework/uJA1kuUO9gc/discussion
[08:05:07] <lqez> and there is an extension for this : https://github.com/miki725/django-rest-framework-bulk
[08:05:51] <lqez> but I'm not mature at django-restframework :)
[08:06:18] <Faisal_> lquez_ thanks man :)
[08:39:49] <Faisal_> _
[08:50:26] <backSlasher> Anyone has any experience with database defragmentation using `compact`?
[08:56:07] <rspijker> backSlasher: just ask your question :)
[08:57:16] <backSlasher> rspijker, How can I tell if my DB needs compacting without baselining? I'm looking for an actual indicator of "whitespaces in your file"
[08:58:28] <kali> backSlasher: look at db.collection.stats()
[08:58:57] <backSlasher> kali, I know about the difference between the data size and file size, but that could be whitespaces at the end of the file , no?
[08:59:32] <rspijker> yes
[09:00:15] <backSlasher> so it's not a good indicator
[09:01:32] <kali> what about the paddingFactor ?
[09:01:57] <kali> with 2.6 and the default allocator being powerOf2 anyway, i'm not sure defragmenting is still that relevant
[09:02:31] <rspijker> there is no realistic way of checking *actual* fragmentation that I know of
[09:02:50] <rspijker> that is, checking an extent and determining the degree to which it is fragmented
[09:07:43] <backSlasher> I figured as much, but wanted to check anyway
[09:09:25] <rspijker> anyway, kali is fairly correct. With the powersOf2 especially, the need for defragmentation will be very minimal
[09:13:01] <Faisal_> @lquez_ I could see support for bulk update here - http://www.django-rest-framework.org/api-guide/serializers#dealing-with-multiple-objects
[09:13:06] <Faisal_> not sure though
[09:13:22] <backSlasher> I have powerof2, and we thought so as well
[09:13:34] <backSlasher> but I had terrible update performance
[09:13:42] <backSlasher> and after compacting, the worked great
[09:14:49] <backSlasher> *the db
[09:14:58] <backSlasher> so I'm not so sure anymore
[09:17:50] <newbsduser> hello, i have a database named: HTTP. example data: http://pastebin.com/ykkV52qS ... how can i query group by count of HttpHost field..?
[09:17:59] <newbsduser> iam trying to do it on mongo shell
[09:18:40] <newbsduser> i want to see a list like: domainname.com-10, domainname2.com-9...
[09:30:14] <Gargoyle> Anyone use instance storage (not ebs) for mongo servers on AWS ?
[09:32:24] <rspijker> newbsduser: use aggregate
[09:33:06] <rspijker> db.collection.aggregate({$group:{"_id":"HttpHost", "count":{$sum:1}}})
[09:33:20] <rspijker> that should be "$HttpHost"
[09:34:51] <newbsduser> db.HTTP.aggregate(
[09:34:51] <newbsduser> { $group : {
[09:34:51] <newbsduser> _id : "$HttpHost",
[09:34:51] <newbsduser> docsPerHttphost : { $sum : 1 },
[09:34:51] <newbsduser> viewsHttphost : { $sum : "$pageViews" }
[09:34:52] <newbsduser> }}
[09:34:54] <newbsduser> )
[09:34:58] <newbsduser> it works is that ok?
[09:35:38] <newbsduser> how can sort it
[09:35:44] <newbsduser> rspijker,
[09:36:41] <rspijker> aggregate({$group:{…}}, {$sort:{viewsHttphost:-1}})
[09:36:54] <rspijker> would sort descending on viewsHttphost
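Putting newbsduser's group stage and rspijker's sort together, a corrected sketch of the whole pipeline (note the `$sort` key is the plain output field name, with no leading `$`, and the grouping key does take the `$` prefix):

```javascript
// Group by HttpHost, count documents per host, sort descending by count.
const pipeline = [
  { $group: { _id: "$HttpHost", docsPerHttphost: { $sum: 1 } } },
  { $sort: { docsPerHttphost: -1 } },
];

// In the mongo shell:
// db.HTTP.aggregate(pipeline);
```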
[09:42:35] <mrfraggs> hoi, quick question: so im building a setup using mongo db and was wondering if it's acceptable to use a replica set cluster type setup and failover to the "new" primary if the old one dies or would this undermine some ideas of mongodb ?
[09:46:46] <rspijker> mrfraggs: that sounds like exactly the point of replica sets...
[09:55:03] <lqez> Faisal_: my ID is lqez, not lquez_ :)
[09:55:12] <lqez> So I missed your query ;;
[09:55:42] <lqez> Faisal_: multiple object deserializer needs iterator-able object. So,
[09:56:05] <lqez> as you can see at Deserializing multiple objects for creation
[09:56:30] <lqez> It received python list object.
[09:56:48] <lqez> But in POST request, it will be treated as JSON string.
[09:57:51] <lqez> Faisal_: so, it must be parsed from json to python list first
[09:59:01] <lqez> But I don't know why we should parse it explicitly. :(
[09:59:23] <Derick> lqez: because the database doesn't speak json
[09:59:56] <lqez> Derick: yeap, but Faisal_ uses django-restframework and it serialize / deserialize objects itself.
[10:01:35] <Faisal_> yes I am using django rest framework to serialize / deserialize objects
[10:12:06] <lqez> Faisal_: how about to preprocess other_info before deserilizing it?
[10:12:13] <lqez> deserializing*
[10:14:11] <lqez> and checkout the type of request.DATA['other_info']
[10:14:25] <lqez> I think it should be list, not just string
[10:15:55] <Faisal_> preprocess as in ?
[10:40:47] <Faisal_> lquez_ type(request.DATA['other_info']) returns nothing :P
[10:41:29] <Faisal_> lquez type(request.DATA['other_info']) returns nothing :P
[10:42:04] <lqez> Faisal_: OMG
[10:42:29] <lqez> does it have any data in it?
[10:44:39] <Faisal_> I am sending it via post
[10:45:30] <lqez> Could you show me the contents of request.POST and request.DATA?
[10:46:28] <nfroidure> we've got potentially 1TB of images per year to store, would MongoDB grid fs suit for this task?
[11:04:16] <gameboy__> hi i have a question
[11:04:56] <gameboy__> does somebody know if (or when) there will be polish language for 'Text Search Languages '
[11:04:59] <gameboy__> ?
[11:15:25] <lqez> gameboy__: How about using https://translate.google.com/#ru/pl/Text%20Search%20Languages
[11:27:53] <gameboy__> lqez: i want to have polis full text search
[11:28:02] <lqez> aha
[11:28:06] <lqez> so sorry -
[11:30:13] <gameboy__> in documentation in Text Search Languages there isn't polish and i can't find information when or if the polish language will be supported by mongo
[11:30:47] <gameboy__> mayby you know some way how to add another languages ?
[11:34:24] <cheeser> there's no open jira for it. you might consider filing one.
[11:34:32] <cheeser> https://jira.mongodb.org/
[11:40:51] <Derick> seems like the main issue would be stemming polish
[12:08:03] <talbott> hello mongo ers
[12:08:07] <talbott> quick q
[12:08:27] <talbott> i have a large-ish db (18 million docs and growing) on my SSD..
[12:08:31] <talbott> that's getting kinda pricey
[12:08:40] <talbott> and i dont really need 99% of those docs at hand
[12:08:45] <talbott> so i want to archive them
[12:09:03] <talbott> they can go off to a slower mongo somewhere else
[12:09:16] <talbott> but i cant really see a standard way to do this. Am i missing somrthing in the docs?
[12:14:55] <Gargoyle> talbott: I don't think there's an internal option within mongodb
[12:15:32] <Gargoyle> You could create another host with HDD's, but your app would have to do the archiving.
[12:15:36] <rspijker> you can do sharding on something date-based
[12:24:44] <talbott> ah ok
[12:24:49] <talbott> maybe i should use something like this
[12:24:53] <talbott> https://github.com/bploetz/mongodb-archive
[12:25:22] <Gargoyle> rspijker: You could pick a shard key which is date based, but that will result in a lot of block shuffling by the sharding process.
[12:27:30] <Gargoyle> talbott: You could probably do something in the background using the oplog
[12:28:21] <Gargoyle> talbott: SSD = Capped collection or TTL. Then have a process which reads the oplog and writes to a normal collection on the archive server.
[12:30:11] <rspijker> Gargoyle: you can do pre-splitting/distribution
[12:30:36] <rspijker> as well as tagging to ensure that latest chunks go to one host and others to another host
[12:30:43] <Gargoyle> rspijker: Yes, but as time ticks by, blocks will have to be moved from one shard to the other.
[12:31:19] <rspijker> I’m not saying it’s maintenance free, but you can have the mongo do all of the work
[12:31:30] <rspijker> just update tagging boundaries once a month or so
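rspijker's tag-range idea, sketched out: pin recent chunks to the SSD shard and everything older to the HDD shard, then shift the boundary periodically. Shard names, the collection name, and the `ts` shard key are all hypothetical here:

```javascript
// Tag-aware sharding sketch: two tags, one boundary to move each month.
// All names below are made up for illustration.
const boundary = new Date("2014-07-01");

// In the mongo shell, against a mongos:
// sh.addShardTag("shardSSD", "recent");
// sh.addShardTag("shardHDD", "archive");
// sh.addTagRange("mydb.events", { ts: MinKey }, { ts: boundary }, "archive");
// sh.addTagRange("mydb.events", { ts: boundary }, { ts: MaxKey }, "recent");
```

As Gargoyle notes above, the balancer still has to migrate chunks each time the boundary moves, so this trades app-level archiving code for periodic migration load.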
[12:42:28] <Gargoyle> Are drivers backwards compatible? Eg, Is it best to update the language driver first, and then update the DB (2.4 to 2.6), or do the DB first and then update the driver?
[12:44:40] <rspijker> don’t think that’s something we can say in general. Should be clear from release notes most of the time
[12:48:54] <cheeser> the drivers tend to be fairly backward compatible. the java driver gets tested back to 2.2 and only dropped testing against 2.0 when 2.6 shipped.
[12:50:13] <Gargoyle> Thanks guys. Derick: Anything specific about the PHP driver that might be an issue you can think of off the top of your head?
[12:57:17] <talbott> ok thanks
[12:57:24] <talbott> i wonder if mongo will ever build in archiving
[12:57:36] <talbott> i wonder how most companies are doing it if they have HUGE amounts of data
[12:58:01] <cheeser> https://jira.mongodb.org/browse/SERVER-8482
[12:59:22] <cheeser> well, this is the active one: https://jira.mongodb.org/browse/SERVER-6895
[13:16:16] <Derick> Gargoyle: related to what?
[13:17:03] <Gargoyle> Derick: Compatible versions. Would I run into any issues using the latest driver with 2.4.x servers?
[13:17:09] <Derick> no
[13:17:24] <Derick> we support (I think) 2.0 and up
[13:17:32] <Derick> perhaps 2.2 and up, but pretty sure it.s 2.0 and up
[13:18:57] <noqqe> hi guys - has anyone experience with the newrelic mongodb plugin?
[13:19:44] <noqqe> i can nowhere see what it actually monitors. the whole cluster like the mms agent? or more like every mongos itself
[13:21:00] <noqqe> documentation is very .. "limited" in newrelic and even in the pypi package..
[13:25:15] <lqez> Faisal: I sent a query message.
[13:25:16] <mrfraggs> rspijker: cool, and the documentation mostly talks about 3 nodes with 1 arbiter but a 2 node setup would work perfectly fine as well from what i see and have tested. or is it a recommended "need" to use 3 ?
[13:25:31] <rspijker> mrfraggs: you need 3
[13:25:35] <lqez> You wrapped up an array with another array. It may cause problem.
[13:25:41] <rspijker> one of them can be an arbiter, but you need 3
[13:26:21] <rspijker> due to how the voting works
[13:26:44] <FurqanZafarIQ> is it safe to terminate a nodejs app that inserts records to a mongodb? will all the records be flushed from memory to disk?
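On FurqanZafarIQ's question: the usual approach is an acknowledged write concern, so the insert callback only fires once the server has the data, plus closing the connection before exiting. A sketch, with option names assumed from the 1.x node.js driver:

```javascript
// Acknowledged writes: w:1 waits for the server, j:true additionally waits
// for the journal, so an acknowledged insert survives a process exit.
const writeOptions = { w: 1, j: true };

// collection.insert(doc, writeOptions, (err) => {
//   // only exit once the write is acknowledged and the connection is closed
//   db.close(() => process.exit(err ? 1 : 0));
// });
```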
[13:27:32] <rocahenry> hello
[13:28:06] <mrfraggs> ok so in a construct of having 2 boxes and from the understanding i have the arbiter is not a "full" node it would make sense to run 4 nodes? 1 primary & 1 arbiter on one box and 1 secondary & 1 arbiter on one box?
[13:30:22] <mrfraggs> ok just reading up on the doc ... the doc kinda killed all my assumptions *OH NOES*
[13:30:23] <kali> no !
[13:30:46] <Derick> mrfraggs: that's why you read the docs first :-)
[13:30:51] <Derick> mrfraggs: you need an odd numbre of nodes
[13:30:54] <Derick> each on a physical box
[13:31:02] <Derick> so two data nodes, and one arbiter
[13:31:22] <Derick> you can't really say "a primary" or "a secondary", as they might switch roles at any point
[13:31:37] <rocahenry> I want to export individual pictures from mongodb(bson)
[13:32:09] <mrfraggs> correct - ill need to read into it more ;)
[13:35:16] <og01> hi, when i use a keyFile as described here: http://docs.mongodb.org/manual/tutorial/generate-key-file/ my mongod process does not start and exits with 1, no other output and no logs
[13:36:29] <og01> everything works fine if i dont specify a keyFile
[13:39:23] <og01> any ideas?
[13:39:41] <og01> db version v2.6.0
[13:40:48] <rspijker> are you using a cfg file or a command line switch
[13:40:53] <rspijker> og01: ^^
[13:41:24] <og01> rspijker: i've tried both, with the same result
[13:41:53] <rspijker> pastebin your cfg?
[13:42:04] <og01> if i dont mention the --keyFile or the security.keyfile option then it starts fine
[13:42:11] <og01> rspijker: sure one moment
[13:42:46] <rspijker> my guess would be permission issue on the keyfile...
[13:46:18] <og01> rspijker: https://pastee.org/tv6cs
[13:46:26] <og01> rspijker: i checked that, i'll update the paste
[13:47:29] <og01> rspijker: ok i thought i had anyway... one sec
[13:47:45] <og01> still nothing, i'll send a new paste
[13:48:05] <og01> https://pastee.org/34ers
[13:48:17] <og01> can see 600 on mongod-keyfile
[13:48:39] <rspijker> SELinux?
[13:48:44] <og01> no
[13:49:08] <rspijker> can you test after chmod a+r mongod-keyfile ?
[13:49:36] <og01> nope no effect
[13:50:42] <og01> just to double check, the x509 auth mode needs to be recompiled with SSL support, but keyFile doesnt?
[13:52:01] <rspijker> don’t know for sure about x509, but keyfile definitely doesn't
[13:52:16] <og01> ok, just checking, this was my understanding too
[13:53:14] <rspijker> and if you remove the security part from your cfg and leave the rest exactly as-is, it works?
[13:53:45] <og01> yes
[13:54:18] <og01> (And just confirmed again by testing)
[13:54:40] <rspijker> that’s really weird… :/
[13:55:06] <og01> for reference im using the 10gen package pinned to 2.6.0
[13:55:23] <og01> i might try upping to 2.6.1 or so
[13:55:34] <rspijker> “the 10gen packaged pinned to 10.6”?
[13:55:44] <rspijker> 2.6*
[13:55:54] <rspijker> there should not be any 10gen 2.6 packages
[13:56:02] <rspijker> should have been renamed to mongodb-org
[13:56:12] <rspijker> or something like that
[13:56:12] <og01> my mistake, yes that package
[13:56:17] <rspijker> ok :)
[13:56:48] <og01> ii mongodb-org 2.6.0
[13:56:51] <og01> ^ confirmed
[13:57:08] <rspijker> well, I’m at a loss...
[13:57:24] <rspijker> if you run it from cmd line, as root and the file is owned by root with 600 perms
[13:57:27] <rspijker> it should work
[13:57:42] <rspijker> also weird that it’s not logging anything....
[13:57:51] <rspijker> can you check your system log?
[13:57:54] <og01> i've copied and pasted the commands as i've typed them, im also at a loss
[13:58:04] <rspijker> which linux flavour is this?
[13:58:17] <og01> rspijker: ubuntu in this case
[13:58:24] <og01> err
[13:58:39] <og01> whichever version, can find out if you like, one of the LTS ones
[13:58:47] <Derick> cat /etc/motd
[13:59:02] <og01> Ubuntu 12.04.2 LTS
[13:59:08] <rspijker> and mongod --version gives?
[13:59:22] <og01> db version v2.6.0
[13:59:25] <rspijker> just to make sure the packaged thing is actually the binary being run
[13:59:58] <rspijker> anything in the system log?
[14:00:34] <og01> only that mongod failed in the init process (while i was testing with /etc/init.d/mongod restart)
[14:01:22] <og01> (ie nothing interesting)
[14:01:33] <rspijker> and it’s not currently running by accident?
[14:02:09] <og01> i did check that, already, but yes confirmed again, not running currently
[14:02:49] <og01> there is no reason i have to stick with 2.6.0, shall i try moving to 2.6.somethingelse
[14:03:08] <rspijker> yeah, that might be a good idea
[14:03:25] <rspijker> give it a go and let me know if it works :)
[14:10:30] <og01> rspijker: still no luck on 2.6.1
[14:10:45] <og01> rspijker: it'll see if i can strace it
[14:11:34] <rspijker> that would have been my next suggestion
[14:14:33] <G808> hello
[14:17:53] <rspijker> hi
[14:19:33] <og01> rspijker: its working now, but i dont know why
[14:19:56] <rspijker> MAGIC
[14:20:07] <og01> rspijker: every time, its just working....
[14:20:19] <og01> rspijker: i straced it 3 times, 3rd time it started working
[14:20:28] <og01> afaik i didnt change anything
[14:21:09] <og01> it was exiting after trying to write to the logfile
[14:21:25] <rspijker> that’s very strange…
[14:21:30] <og01> (in the previous strace attempts)
[14:21:33] <rspijker> sure
[14:22:20] <og01> well I can't make it fail now, its working...
[14:25:38] <og01> rspijker: thanks for your help
[14:25:44] <og01> rspijker: still a mystery
[14:25:57] <rspijker> didn’t really do very much :)
[14:26:02] <rspijker> glad to see it’s working now
[14:45:56] <tadasZ> hi everyone, guys, i'm little confused, all articles i read i see schemas like { params: [{ name: "Brand", value: "Brandisimo" }, {name: "Height", value:"10 meters"}] }, does that mean that for example in brands collection _id of document is like "Brandisimo" or for heights _id:"10 meters" ?
[14:49:58] <ckozak> I'm having a spot of trouble with mongodb in java. Is there anything like BSON.addDecodingHook for keys? I'm trying to store some json data, unfortunately much of it contains periods in keys.
[15:21:07] <rspijker> tadasZ: no. The _id is generated by mongo if it’s not specified explicitly
[15:23:32] <tadasZ> rspijker: i know that, but i mean imagine i have 3 collections - brands, heights and products, and products has to have dynamic params like in example, but in all artiles i see people embeding like {"name": "Brand", value:"Brand name"} shouldn't it be {"name": "Brand", value:ObjectId(...) } ?
[15:24:38] <rspijker> no… mongo is not sql… It’s, by nature, not relational
[15:26:53] <rspijker> you should keep that in mind already when you are defining your schemas
[15:30:36] <FurqanZafarIQ> is it safe to terminate a nodejs app that inserts records to a mongodb? will all the records be flushed from memory to disk?
[15:53:18] <tadasZ> rspijker: i understand that, but do you say that everytime i want to set parameters to product i need to enter brand name by hand ?
[15:54:21] <rspijker> tadasZ: as opposed to…. entering the ObjectId by hand?
[15:54:28] <rspijker> maybe I’m not understanding your usecase here
[15:55:48] <tadasZ> sec i'll try to describe a usecase in more detail
[15:59:30] <Nodex> tadasZ : the general rule of thumb is the path of least resistance should win, providing of course that's good for your app as an entirity
[16:19:31] <uehtesham90> i'm exporting my mongodb collections to a csv file....is there a way i can rename the fields into a more readable format like username instead of _id.username
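mongoexport itself cannot rename fields, but one workaround (on 2.6+) is to `$project` the flattened names into a temporary collection with `$out` and export that instead. Collection names here are made up; the field path follows the question:

```javascript
// Flatten _id.username into a top-level "username" field, materialize the
// result with $out, then run mongoexport against the temporary collection.
const pipeline = [
  { $project: { _id: 0, username: "$_id.username" } },
  { $out: "export_tmp" }, // $out must be the final stage
];

// db.mycollection.aggregate(pipeline);
// then: mongoexport -d mydb -c export_tmp --csv -f username -o out.csv
```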
[17:17:18] <og01> how can i get secondaries to accept read requests?
[17:17:23] <og01> (and write requests)
[17:17:37] <og01> from the cli i can type rs.setSlaveOk()
[17:17:44] <og01> but no such command exists in my driver
[17:26:03] <og01> shouldnt reads be forwarded to the primary?
[17:26:09] <og01> when executed on the secondary?
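On og01's question: secondaries never accept writes, those always go to the primary; reads from secondaries are allowed via a read preference, the driver-level equivalent of `rs.slaveOk()`. The option spelling below is the common form in the node.js driver, but check your own driver's docs:

```javascript
// Read from a secondary when one is available, fall back to the primary.
// Other modes: "primary", "primaryPreferred", "secondary", "nearest".
const options = { readPreference: "secondaryPreferred" };

// MongoClient.connect("mongodb://h1,h2,h3/mydb?replicaSet=rs0", options, ...);
```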
[17:32:15] <uf6667> hello
[17:32:20] <uf6667> can someone have a look at this please? http://pastie.org/9412706
[17:33:29] <uf6667> I always get {amount: 0, total: 0}
[17:34:38] <uf6667> anybody there?
[17:54:54] <uf6667> anybody here who can help me with aggregation please?
[18:30:40] <revoohc> in MongoDB 2.2, is there anyway to find out where a specific long running connection is coming from? I.e in my log i have conn1234, I want to find out where the client is.
[19:05:03] <Chaos_zero> trying to save some bandwidth, suppose I make a query for a document which contains, among other fields, one rather large array. Can my projection start to only return the length of the array but not its contents?
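For Chaos_zero's question: a plain projection can't do it, but on 2.6+ the aggregation `$size` operator inside a `$project` returns just the array's length, so the big array never crosses the wire. `bigArray` and the `_id` value are hypothetical:

```javascript
// Return only the length of a large embedded array, not its contents.
const someId = "53ce0000aabbccddeeff0011"; // hypothetical document id

const pipeline = [
  { $match: { _id: someId } },
  { $project: { arrayLen: { $size: "$bigArray" } } },
];

// db.mycollection.aggregate(pipeline);
```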
[19:41:23] <jonyfive> hello
[19:41:49] <jonyfive> does anyone know, do you have to register/pay for a course before you can take a free online course from university.mongodb.com
[19:42:06] <jonyfive> argh, that came out wrong
[19:42:38] <jonyfive> on this page, https://www.mongodb.com/products/training/certification ...under "Resources" it says "Free online courses from MongoDB University"
[19:43:00] <jonyfive> but i've signed up for an account there and it seems i only see options to pay for an exam
[19:43:48] <jonyfive> sorry i see now, only some courses are free
[19:44:24] <jonyfive> guess i just needed to talk that one out ;)
[19:59:26] <huleo> hi
[20:00:32] <huleo> I'm wondering if I'm trying to do something impossible. Array of objects inside document, "places" let's call them, each object in this array containing user id (which user) and number of visits.
[20:01:00] <huleo> now, just to add new user to this array with "1" as number of visits - easy
[20:01:24] <huleo> and to get visits to increase by 1 - easy as well
[20:02:03] <huleo> but is this behavior possible: create new doc in this array /if/ one with specified "user_id" does not exist...and if it exists, just increase number of visits by one
[20:02:10] <huleo> of course dreaming of one-query solution
[20:02:49] <huleo> place: { name: "Park", visitors: [ { user_id: String, noOfVisits: Number } ] }
[20:39:28] <huleo> err, seconds...this still breaks my brain
[20:40:10] <huleo> place: { name: "Park", visitors: [ { user_id: String, noOfVisits: Number } ] }
[20:40:47] <huleo> updating it nicely, so in "visitors" array we always have only one doc with specific user_id, and noOfVisits increases by 1 with each update
[20:41:08] <huleo> basically, if it exists: create, if it does not: just increase one field by 1
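There is no single update operator for huleo's "increment if present, insert if absent" inside an array, but the common two-step pattern covers it: try to `$inc` the matching visitor entry, and only if nothing matched, `$push` a fresh one. A sketch with the schema from the question (`userId` is a placeholder):

```javascript
// Step 1: increment the existing visitor entry, if there is one.
// Step 2: only when step 1 matched nothing, push a new entry.
const userId = "u123"; // hypothetical

const incExisting = {
  filter: { name: "Park", "visitors.user_id": userId },
  update: { $inc: { "visitors.$.noOfVisits": 1 } },
};
const pushNew = {
  filter: { name: "Park", "visitors.user_id": { $ne: userId } },
  update: { $push: { visitors: { user_id: userId, noOfVisits: 1 } } },
};

// const res = db.places.update(incExisting.filter, incExisting.update);
// if (res.nMatched === 0) db.places.update(pushNew.filter, pushNew.update);
```

The `$ne` in the second filter makes the pair safe to run in either order: the push can never create a duplicate entry for the same user_id.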
[21:34:30] <s2013> can someone guide me to importing db into mongo? im new to mongo
[21:35:01] <s2013> we exported a db from parse.com its a 1.3gig db with bunch of json files (each tables). i installed mongodb do i use mongoimport?
[23:45:17] <s2013> exception:JSONArray file too large .. i used mongoimport -d db -c collection file.json --jsonArray
[23:45:54] <s2013> how do i import a large db into mongo? it has about ad ozen collections/tables