#mongodb logs for Tuesday the 9th of June, 2015

[00:37:11] <jopenhagen> Hello, i'm having an issue with the tutorial "write-a-tumblelog-application-with-flask-mongoengine" - anyone familiar?
[00:40:35] <StephenLynx> that is not very related to mongo.
[00:40:41] <StephenLynx> its more related to flask and mongoengine.
[00:41:06] <StephenLynx> because those are the tools you are working with. one of them just happens to work with mongo.
[00:41:56] <jopenhagen> c, thanks anyway
[00:42:57] <joannac> jopenhagen: you should at least say which part you're stuck on
[00:47:14] <jopenhagen> when defining an EmbeddedDocument I kept getting an error that it wasn't registered. I just moved it above the Document it was being Embedded into and the problem was solved.
[01:39:50] <MacWinner> when using an ODM's populate mechanism like mongoose, do they avoid a 2nd roundtrip to the database somehow to populate a subdocument? or are they just abstracting the 2 separate connections?
[01:41:43] <cheeser> there is no real concept of a "subdocument" to the server. one document is fetched; it just happens to contain this "subdocument"
[02:30:17] <pyios> how do I search for the document inserted latese?
[02:30:51] <pyios> how do I search for the document inserted latest?
[02:34:35] <cheeser> do you use ObjectId for your _id?
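
What cheeser is driving at: a default ObjectId _id embeds its creation time in the leading bytes, so the most recently inserted document can be approximated by sorting on _id. A minimal shell sketch (collection name is hypothetical):

```javascript
// Only meaningful when _id is an auto-generated ObjectId, whose first
// four bytes encode the creation timestamp:
db.items.find().sort({ _id: -1 }).limit(1)
```
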
[02:40:36] <tejasmanohar> hey how do i mongodump a remote server
[02:40:42] <tejasmanohar> i have a connection URI
[02:40:50] <tejasmanohar> as well as username, password, host, etc. extracted from it
[02:41:20] <tejasmanohar> mongodump --host ... --port ... --username ... --password ... doesn't seem to work
[02:42:19] <joannac> tejasmanohar: what's the output?
[02:42:42] <tejasmanohar> "2015-06-08T22:40:47.866-0400 Failed: error connecting to db server: auth failed" but this is the same stuff i used to connect in my code + gui
[02:43:01] <tejasmanohar> joannac: ^
[02:43:10] <joannac> tejasmanohar: --authenticationDatabase
[02:43:12] <tejasmanohar> can i put a connection URI instead
[02:43:14] <tejasmanohar> hm
[02:43:16] <tejasmanohar> ok
[02:44:40] <tejasmanohar> mongodump --authenticationDatabase <dbname> --host <ip> --port <port> --username <user> --password <pass> joannac ?
[02:44:44] <tejasmanohar> 2015-06-08T22:42:25.239-0400 Failed: error getting database names: not authorized on admin to execute command { listDatabases: 1 }
[02:44:47] <tejasmanohar> eek
[02:45:14] <joannac> tejasmanohar: your user doesn't have the right permissions
[02:45:27] <tejasmanohar> joannac: oh, to dump?
[02:45:34] <tejasmanohar> is there a separate perm to dump than just connect?
[02:45:40] <tejasmanohar> i can edit stuff
[02:45:41] <tejasmanohar> delete documents
[02:45:43] <joannac> um, yes
[02:45:44] <tejasmanohar> etc
[02:45:52] <tejasmanohar> joannac: i can view all documents in the cli
[02:45:54] <tejasmanohar> just not dump
[02:46:24] <tejasmanohar> mongodump just gives me a file of all the data right?
[02:46:25] <joannac> http://docs.mongodb.org/master/reference/built-in-roles/#backup
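
The page joannac links describes the built-in backup role; granting it would look roughly like this — a sketch run by a user administrator in the mongo shell, with a hypothetical user name:

```javascript
use admin
// The "backup" role gives exactly the access mongodump needs:
db.grantRolesToUser("tejas", [ { role: "backup", db: "admin" } ])
```
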
[02:46:34] <tejasmanohar> hm can i just do this if i have ssh permissions in the server?
[02:47:02] <joannac> ...no
[02:47:10] <joannac> this is a mongodb user roles issue
[02:47:25] <joannac> your server access is irrelevant
[02:47:48] <tejasmanohar> oh ok but there's nowhere i can back it up in the filesystem itself?
[02:47:48] <joannac> you have a user for the mongod instance, right? what roles does it have?
[02:48:01] <tejasmanohar> joannac: i didnt create / setup this server, let me look
[02:48:16] <tejasmanohar> joannac: i just have this user role hm
[02:48:45] <joannac> if you can take filesystem snapshots, you can try http://docs.mongodb.org/manual/tutorial/backup-with-filesystem-snapshots/
[02:48:58] <tejasmanohar> ok
[02:50:40] <tejasmanohar> joannac: is there also a different perm for exporting all mongo data
[02:50:42] <tejasmanohar> mongoexport
[02:51:34] <joannac> tejasmanohar: http://docs.mongodb.org/manual/reference/program/mongoexport/#required-access
[02:54:46] <tejasmanohar> lol so i have the permissions to mongoexport collection by collection but not all at once via mongodump
[02:54:49] <tejasmanohar> that doesnt sound right joannac ?
[02:54:53] <tejasmanohar> mongoexport works
[02:54:59] <tejasmanohar> i just have to specify collection by collection
[02:55:11] <joannac> sure it does
[02:55:36] <joannac> exporting collection by collection means you're providing the list of collections and databases
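
The collection-by-collection export being described would look roughly like this — a sketch reusing the placeholders from the commands quoted above:

```sh
mongoexport --host <ip> --port <port> --username <user> --password <pass> \
  --authenticationDatabase <db> --db <db> --collection <collection> \
  --out <collection>.json
```
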
[02:56:44] <tejasmanohar> alright sure
[02:56:48] <tejasmanohar> i got it anyways like this
[02:56:58] <tejasmanohar> just kinda weird, since i can effectively do the same thing via mongoexport
[02:57:03] <tejasmanohar> w/o the permissions to do mongodump
[02:57:05] <tejasmanohar> joannac:
[02:58:53] <joannac> except they're 2 different commands
[02:59:45] <joannac> I suspect mongodump, specifying a database, would have worked
[03:00:13] <tejasmanohar> oh
[03:00:15] <joannac> in any case
[03:00:29] <joannac> you're comparing mongodump of everything, vs mongoexport of a single collection
[03:00:50] <joannac> try mongodumping a single collection for an actual comparison?
[03:01:03] <tejasmanohar> oh
[03:01:07] <tejasmanohar> mongodump --authenticationDatabase <db> --host <ip> --port <port> --username <user> --password <pass>
[03:01:10] <tejasmanohar> i did this
[03:01:23] <tejasmanohar> oh u want me to try a specific collection
[03:01:26] <joannac> yeah, there's no database there
[03:01:30] <tejasmanohar> <dB>
[03:01:31] <tejasmanohar> <db>
[03:01:34] <tejasmanohar> that's the database ? joannac
[03:01:45] <joannac> no, that's the database you need to authenticate against
[03:01:58] <tejasmanohar> oh -d
[03:02:49] <tejasmanohar> ok that worked
[03:02:51] <tejasmanohar> thx
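
Putting the thread together, the invocation that finally worked would have been along these lines — same placeholders as above, with the target database added via -d:

```sh
mongodump --host <ip> --port <port> --username <user> --password <pass> \
  --authenticationDatabase <db> -d <db>
```
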
[03:06:44] <tejasmanohar> um
[03:06:51] <tejasmanohar> after mongodumping i can no longer access my db
[03:06:53] <tejasmanohar> joannac:
[03:07:24] <tejasmanohar> in gui or code
[03:08:49] <tejasmanohar> weird
[03:08:57] <tejasmanohar> after mongodump i can no longer access my database
[03:09:01] <tejasmanohar> via the user used
[03:10:04] <joannac> tejasmanohar: pastebin connecting via the mongo shell
[03:11:30] <tejasmanohar> i dont usually use shell but mongo -h ...:.... -u ... -p ... -d ... should work right? joannac
[03:12:31] <joannac> is that what you tried before?
[03:12:47] <tejasmanohar> joannac: i use gui to connect
[03:12:56] <tejasmanohar> but thats the same syntax i used for mongodump yeah
[03:13:02] <tejasmanohar> just w/ mongodump not mongo
[03:13:10] <joannac> and the mongodump worked?
[03:13:31] <tejasmanohar> yes
[03:13:33] <tejasmanohar> i have all the files
[03:13:35] <joannac> then sure
[03:13:39] <tejasmanohar> errs one sec
[03:14:29] <tejasmanohar> im in the shell joannac
[03:14:32] <tejasmanohar> weird
[03:14:42] <tejasmanohar> but gui stopped hm
[03:15:52] <joannac> sounds like a gui problem
[03:17:13] <tejasmanohar> yeah
[03:17:15] <tejasmanohar> it is
[03:32:49] <pyios> cheeser: yes, I have an _id
[03:33:26] <cheeser> you have to have one. the question was, "do you use ObjectId for your _id?"
[03:34:17] <tejasmanohar> joannac: does mongodump delete existing data?
[03:34:22] <cheeser> no
[03:34:45] <cheeser> at least, not from the database. it probably overwrites existing output files.
[03:35:05] <tejasmanohar> okok got it
[03:35:08] <tejasmanohar> thx
[04:52:52] <pyios> cheeser: I use an item no. and the current time as _id
[04:53:11] <pyios> _id:{itemno ,datetime}
[05:00:18] <Boomtime> pyios: that is slightly risky, because {itemno:1,datetime:1} and {datetime:1,itemno:1} are unique
[05:01:47] <pyios> Boomtime: so where is the risk?
[05:02:01] <Boomtime> as i said
[05:02:09] <Boomtime> "because {itemno:1,datetime:1} and {datetime:1,itemno:1} are unique"
[05:03:07] <pyios> Boomtime: I do not understand what you mean, could you please explain more?
[05:03:21] <Boomtime> you use a sub-document as an _id right?
[05:03:48] <pyios> yeah
[05:04:06] <Boomtime> the two subdocuments i just quoted are not equal
[05:04:20] <Boomtime> {a:1,b:1} is NOT equal to {b:1,a:1}
[05:04:32] <pyios> do you mean two documents can exist for the same value?
[05:04:37] <Boomtime> your _id is sensitive to the construction order of the sub-document
[05:05:13] <Boomtime> no, the values are different, if you think that {a:1,b:1} is the "same" as {b:1,a:1} then you have a problem
[05:05:53] <Boomtime> these are not the same, the uniqueness of a field applies at the level it is described at - ANY change below that point is a difference
[05:05:56] <pyios> Boomtime: in my case, {a:1,b:1} is equal to {b:1,a:1}
[05:06:08] <Boomtime> then you cannot use that as a _id
[05:06:33] <Boomtime> because the uniqueness constraint is applied at the _id level, not the field level inside the subdocument
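
Boomtime's point can be demonstrated in the shell — a sketch with a hypothetical collection:

```javascript
// Field order is significant in a document-valued _id:
db.test.insert({ _id: { a: 1, b: 1 } })
db.test.insert({ _id: { b: 1, a: 1 } })  // also succeeds: different key order, so a "different" _id
db.test.find({ _id: { a: 1, b: 1 } })    // matches only the first document
```
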
[05:07:13] <pyios> ok
[05:07:25] <pyios> i know
[05:08:31] <pyios> but every time I insert a new document, I have specified the subdocument
[05:09:06] <pyios> I specify it as {a:1, b:1}, a is always first
[05:09:24] <Boomtime> and you always use the same driver?
[05:09:37] <Boomtime> because you are also at the mercy of whatever constructs that subdocument
[05:10:22] <Boomtime> like i said, it's risky because you're not in perfect control of the situation - you are depending on behavior that is not guaranteed
[05:11:12] <pyios> do you mean if I change to another driver, it will be different, although I have specified the subdocument?
[05:12:34] <Boomtime> i don't know, JSON does not dictate preservation of order
[05:12:54] <Boomtime> this is what i mean, you are depending on behavior that is not guaranteed
[05:13:06] <Boomtime> will it be different? who knows?
[05:13:58] <pyios> Boomtime: if I use {a:1,b:2}, you mean it would be inserted as {b:1,a:1}?
[05:14:15] <Boomtime> who knows? maybe, why not?
[05:14:24] <Boomtime> there is no rule
[05:14:31] <Boomtime> there is no requirement to preserve order
[05:15:02] <pyios> Boomtime: you mean json does not guarantee the order?
[05:15:08] <Boomtime> right!
[05:15:19] <pyios> oh
[05:16:00] <Boomtime> yeah, it sucks eh? nothing mongodb can do about it but warn you not to depend on the preservation of field order
[05:16:31] <Boomtime> unfortunately that means that _id cannot be given subdocuments without risk
[05:16:54] <Boomtime> maybe it will work, maybe it will work for awhile, maybe it will break later, who knows?
[05:17:36] <pyios> Boomtime ,thanks for remind !
[05:18:08] <pyios> Boomtime ,thanks for warn me for this !
[05:18:53] <pyios> my english is not good :(
[05:25:49] <Boomtime> no worries, cheers
[05:25:57] <Boomtime> *sigh*
[05:53:43] <pyios> does mongodb have foreign keys?
[05:54:10] <pyios> could i reference the id of other collections?
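
MongoDB has no server-enforced foreign keys; the usual pattern is a manual reference that the application resolves itself. A sketch with hypothetical collections:

```javascript
// Store the other document's _id; nothing enforces the link server-side:
var author = db.authors.findOne({ name: "pyios" })
db.posts.insert({ title: "hello", author_id: author._id })
// Resolving the reference costs a second query:
db.authors.findOne({ _id: db.posts.findOne({ title: "hello" }).author_id })
```
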
[08:51:48] <vakz> Hi. I was wondering if anyone is aware of any known issues when installing mongodb from the debian wheezy repo on jessie?
[08:56:18] <Garito> hi!
[08:56:57] <Garito> I'm using mongoengine with flask but when I try to delete a document the pymongo's collection.delete returns None instead of a dict
[08:57:22] <Garito> I was asking the mongoengine people and they think perhaps it's a matter of versions or something
[08:57:41] <Garito> do you know if pymongo 2.8 has issues with this topic
[08:57:42] <Garito> ?
[08:57:49] <Garito> if not: could be a pymongo bug?
[08:58:05] <Garito> thanks1
[08:58:06] <Garito> !
[09:03:31] <sterichards> How can I update a collection setting a field to a value where that field is null?
[09:23:26] <sterichards> Mongodb doesn’t seem to support an UPDATE documents WHERE condition? :S
[09:27:09] <Folkol> sterichards: http://docs.mongodb.org/manual/reference/method/db.collection.update/#multi-parameter
[09:27:23] <Folkol> A normal update is "UPDATE doc WHERE"...
[09:27:45] <cofeineSunshine> multi=true
[09:27:59] <sterichards> I’ve constructed: db.object.product.update( {“vendorID”: null }, { $set : { “vendorID” : "54d3a1444a63c3545f8b456b"}}); but that returns SyntaxError: Unexpected token ILLEGAL
[09:28:40] <cofeineSunshine> sterichards: http://docs.mongodb.org/manual/reference/method/db.collection.update/#multi-parameter
[09:29:56] <einyx> hello
[09:30:08] <einyx> is anyone having trouble using the upstart script with mongo 3.0.3 on ubuntu 14.04?
[09:30:21] <einyx> status and stop just don't work properly :\
[09:37:12] <Folkol> sterichards: Replace your curly quotes with straight ones.
[09:37:34] <Folkol> Also, here is an example of multi-update: http://pastebin.com/dhk0PiNb
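
Combining both corrections — straight quotes and the multi flag — sterichards' update would look roughly like this (collection and value exactly as quoted in the log):

```javascript
db.object.product.update(
  { vendorID: null },                                  // WHERE vendorID is null (also matches missing field)
  { $set: { vendorID: "54d3a1444a63c3545f8b456b" } },  // SET the new value
  { multi: true }                                      // apply to every matching document
)
```
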
[10:19:43] <Siamaster> For a replicaset, I would need 3 different servers right?
[10:19:58] <Siamaster> even for the arbiter?`
[10:20:12] <Siamaster> I mean even one server for the arbiter*
[10:21:37] <rasputnik> Siamaster: odd numbers are good, yeah - 1,3 or 5
[10:21:51] <Siamaster> how would it work with 1?
[10:22:34] <rasputnik> Siamaster: it'd be a 1-node replica set. still valid (just not very useful). but you can only shard across replica sets, so sometimes they can be useful.
[10:23:22] <Siamaster> hmm
[10:23:27] <Siamaster> I think I'm gonna go with 3
[10:23:44] <Siamaster> but the arbiter server could be a really bad computer right?
[10:23:58] <Siamaster> i mean the cheapest I can get on aws
[10:25:59] <rasputnik> Siamaster: yeah that's how they sell it. it needs to be up if you want to survive a 'real' server failure though.
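
A minimal sketch of the three-member layout being discussed — two data-bearing nodes plus an arbiter on a cheap instance; hostnames are hypothetical:

```javascript
// Run on the first data node after starting all three mongods with --replSet:
rs.initiate()
rs.add("mongo2.example.com:27017")      // second data-bearing member
rs.addArb("arbiter.example.com:27017")  // votes in elections, holds no data
```
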
[11:17:46] <joannac> rasputnik: you can shard with standalones
[11:18:02] <joannac> (not that I would recommend it)
[11:18:17] <bogn> Hi all, can anybody confirm that it is really not possible to let MMS provision AWS machines with PIOPS? It seems they only provision general purpose volumes for you.
[11:21:14] <joannac> bogn: erm, yes you can
[11:21:41] <bogn> when I use the "Create Deployment" or "let MMS provision" wizard
[11:22:02] <bogn> and configuring the volume size, they explicitly state that they'll create general purpose volumes
[11:22:10] <rasputnik> @joannac oh, is that a new feature? think we're on 2.6.3 or something and it refused to add a single node shard
[11:22:12] <bogn> and that was also the result, when I ran it last time
[11:23:28] <joannac> bogn: oh, not in the wizard. go to advanced deployment
[11:24:13] <joannac> rasputnik: i don't think so?
[11:24:19] <Siamaster> omg bogn thank you ! I didn't know about MMS
[11:24:45] <rasputnik> joannac: oh ok, we're probably doing it wrong then :/
[11:25:15] <joannac> rasputnik: :/
[11:25:49] <bogn> joannac: are you talking about "Provision EC2 Machine"?
[11:26:37] <joannac> bogn: correct
[11:26:45] <bogn> ok, thanks
[11:27:02] <joannac> bogn: I'm going to close the ticket you just opened, unless you have more questions?
[11:27:42] <Siamaster> so MMS is like elastic beanstalk for MongoDB?
[11:29:31] <Siamaster> $50 / additional server / month or $600 / additional server / year prepaid
[11:29:32] <Siamaster> lol?
[11:29:41] <Siamaster> why prepay for same price
[11:29:55] <bogn> joannac: I'm not getting how to create multiple volumes for the journal and logs
[11:30:24] <bogn> I mean single separate volumes for journal and logs
[11:30:40] <bogn> as outlined here: https://aws.amazon.com/marketplace/pp/B00CO7AVMY
[11:31:47] <Siamaster> but in the end, isn't mms way more expensive than manually handling stuff ?
[11:32:42] <joannac> bogn: https://jira.mongodb.org/browse/MMS-2166
[11:32:51] <rasputnik> Siamaster: depends what your time is worth i guess - you get monitoring and full sharded backup for that too
[11:33:18] <joannac> bogn: vote / comment :)
[11:33:38] <joannac> rasputnik: backup is not included
[11:34:04] <rasputnik> joannac: maybe we're talking about 2 different tiers then
[11:34:42] <Siamaster> what do you mean by sharded backup?
[11:34:43] <joannac> rasputnik: MMS backup is not included with any tier. Unless you're talking about tiny backups
[11:34:55] <joannac> (in which case it's free :P)
[11:34:56] <Siamaster> And when does it start sharding?
[11:35:06] <rasputnik> joannac: i don't know what a tiny backup is sorry
[11:35:18] <rasputnik> ah hang on we have on-prem. sorry.
[11:35:26] <joannac> rasputnik: oh. well then
[11:35:38] <bogn> joannac: what is a recommended size for the root volume not including the data
[11:35:59] <rasputnik> ENOCOFFEE
[11:36:26] <joannac> bogn: shrug. size of binaries?
[11:36:39] <Siamaster> when they say $50 per /additional server, do they mean inclusive the cost you get from aws?
[11:36:52] <Siamaster> or you pay $50 just for the mms service?
[11:37:03] <joannac> rasputnik: if you're paying enough to get OpsManager you're not going to be quibbling over $50/server
[11:37:14] <joannac> Siamaster: MMS only.
[11:37:45] <Siamaster> hmm ok
[11:37:47] <rasputnik> joannac: client site, not me. migration path to opsmanager is going to be a hoot.
[11:38:12] <joannac> rasputnik: enjoy
[11:39:00] <Siamaster> I guess if Ill ever need more than 8 servers for my startup, I will not care about additional $50/server
[11:39:38] <Siamaster> 1337 lets celebrate
[11:40:12] <Siamaster> ofc.. if you live in sweden..
[11:40:30] <Siamaster> and if you are a geek
[11:46:14] <bogn> joanna.c I resolved the ticket
[11:49:15] <bogn> regarding my question on root volume size, here's the info from the MMS docs:
[11:49:15] <bogn> Select a Root Volume Size (GiB) large enough for the deployment’s needs. We recommend a root volume of at least 25 GB. The root volume stores the operating system, the downloaded MongoDB versions, and the Automation Agent log files.
[11:51:55] <Siamaster> does that mean that MMS will use only one EC2 will several EBS?
[11:52:15] <Siamaster> with*
[11:52:48] <Siamaster> I'm trying to sign up but it says Company must not be blank.
[11:52:54] <Siamaster> there is no company filed
[11:53:06] <Siamaster> field*
[11:54:13] <joannac> Siamaster: screenshot?
[11:54:15] <bogn> Siamaster: no, I'm just configuring PIOPS volumes because of better performance
[11:54:31] <Siamaster> joannac how do you want it sent?
[11:55:48] <joannac> Siamaster: just on imgur or something like that?
[11:56:01] <joannac> Siamaster: if that's not feasible, I can IM you my email
[11:57:16] <Siamaster> ill post on imgur
[11:57:26] <Siamaster> joannac are you in the mms team?
[11:58:13] <Siamaster> http://i.imgur.com/0Fz1chT.jpg
[11:58:57] <Siamaster> :D
[11:59:32] <cheeser> so it wants a company but has no box for it. nice.
[11:59:51] <cheeser> i'll prod the team in a few minutes about that. no one's in at the moment.
[12:00:23] <joannac> Siamaster: weird.
[12:00:30] <Siamaster> you already lost a customer!! <-- :'(
[12:00:36] <joannac> I can't reproduce it... it takes me to a new page
[12:00:48] <Siamaster> ok Ill just try again
[12:04:17] <Siamaster> joannac, what I did was I first entered an invalid password then corrected it
[12:04:30] <Siamaster> try to see if that will reproduce the bug
[12:05:25] <Siamaster> Can I trust what MMS does for me is cost-effective?
[12:05:37] <joannac> Siamaster: just tried. still can't reproduce
[12:05:44] <Siamaster> hmm ok
[12:05:48] <joannac> Siamaster: also, I don't understand the question
[12:06:09] <Siamaster> it is autoscaling for me right?
[12:06:26] <joannac> No
[12:07:23] <Siamaster> If I let MMS provision my EC2 instances
[12:07:44] <Siamaster> How do I know it wont shoot a duck with bazooka?
[12:08:05] <joannac> if that's a metaphor, i don't get it
[12:09:11] <Siamaster> how do I know it won't provision too expensive instances for what I actually need?
[12:09:30] <joannac> because you have to provision things yourself
[12:09:51] <joannac> i.e. "MMS, please provision a m3.large"
[12:10:26] <Siamaster> Then why this question: "Do you want MMS to provision EC2 instances to host your MongoDB deployment or would you like to provision the EC2 instances yourself?"
[12:11:42] <joannac> Siamaster: that's the difference between giving MMS your AWS keys, or provisioning instances yourself and telling MMS about them
[12:11:59] <Siamaster> ah got it, thanks!
[12:27:56] <Siamaster> I can't use t2 instances ?
[12:31:06] <bogn> how do I specify logpath in MMS? This setting is not available in the dropdown of the cluster. Should I really do that per cluster? For the data path there is the prefix, is there no such thing for the log?
[12:34:16] <cheeser> i don't think you can
[12:37:16] <bogn> also if I have the journal on another volume, I need to mount it to e.g. /data/EBc01_sh_0_103/journal right?
[12:37:39] <bogn> because the journal path is not configurable at all
[12:56:32] <bogn> it's also not so nice, because you can't automate that because of the incremented dir count part under /data
[12:58:26] <bogn> or more precisely it's made harder.
[13:01:16] <Siamaster> So I have to launch at least an m2 instance for MMS
[13:01:34] <Siamaster> in my case a m2.medium costs $0.083 per Hour
[13:02:29] <Siamaster> but will I get charged every hour for that instance because MMS is making health checks to my instance?
[13:03:49] <cheeser> well, the agents typically live on your hosts so as long as the host is up, the agents will do their thing
[13:04:04] <cheeser> if you power it down, then obviously nothing will happen
[13:04:41] <Siamaster> hmm ok
[13:05:10] <Siamaster> perhaps MMS isn't suitable for an alpha version that can be hosted on a single t2.micro instance
[13:10:58] <cheeser> yeah. probably not super great if you're just building a POC
[13:11:16] <cheeser> it's doable but it'll definitely add some steps as you try to pivot around shifting requirements.
[13:12:25] <joannac> a t2.micro has 1GB of memory. I'm not sure mongodb will run at all well on that
[13:14:32] <leporello> Hi. I'm using mongo-express as a db manager, but it doesn't show me all databases.
[13:14:37] <leporello> How can I fix it?
[13:14:46] <cheeser> which ones aren't you seeing?
[13:15:20] <leporello> cheeser, i see admin and firstApp database
[13:15:32] <leporello> but can't see secondApp, though show dbs show ist
[13:15:42] <leporello> *shows it
[13:17:06] <leporello> Access system is quite complicated :)
[13:18:26] <cheeser> might be an auth issue
[13:18:48] <cheeser> denniskuczynski: ping
[13:19:23] <cheeser> denniskuczynski: http://i.imgur.com/0Fz1chT.jpg requiring a company name without providing a company field.
[13:19:25] <leporello> cheeser, db.getUsers() returns me empty array for all dbs except admin. There i gave all roles to admin.
[13:20:02] <cheeser> hrm. dunno then. might be a mongo-express bug. i've never used it.
[13:22:31] <jr3> https://gist.github.com/anonymous/b03e79d6f43feddd7c4c
[13:22:52] <jr3> ^ is this the correct way I'm getting to the native driver?
[13:24:28] <StephenLynx> line 18
[13:24:30] <StephenLynx> thats wrong.
[13:24:43] <StephenLynx> find returns a cursor, update is a function of collection.
[13:24:49] <StephenLynx> update works like this:
[13:24:59] <StephenLynx> update({query block},{update block})
[13:25:05] <StephenLynx> you don't find beforehand.
[13:25:33] <StephenLynx> just remove the find function and put its parameter in the first argument of update.
[13:25:41] <jr3> doh
[13:25:56] <StephenLynx> plus you need a callback in the last argument.
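
StephenLynx's correction as a sketch against the 2.0-era Node.js driver — query and update documents are hypothetical, and `collection` is assumed to come from an open connection:

```javascript
// update({query block}, {update block}, callback) on the collection itself;
// no prior find() is needed:
collection.update(
  { status: 'pending' },
  { $set: { status: 'done' } },
  function (err, result) {
    if (err) return console.error(err);
    console.log('update complete');
  }
);
```
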
[13:26:23] <jr3> I can promisify
[13:26:28] <StephenLynx> ah, promises.
[13:26:33] <StephenLynx> yeah, I wouldn't use them.
[13:26:41] <jr3> why not?
[13:26:49] <StephenLynx> 1: they are not final on ES6
[13:26:54] <StephenLynx> 2: performance issues.
[13:27:26] <StephenLynx> 3: the driver is not built around them
[13:51:50] <jr3> StephenLynx: http://docs.mongodb.org/manual/reference/method/Bulk.find.update/#Bulk.find.update
[13:52:02] <jr3> I'm using the native driver
[13:53:01] <cheeser> i resent that name. :)
[13:53:55] <StephenLynx> you are using the official driver.
[13:53:59] <StephenLynx> and that is not its documentation.
[13:54:20] <StephenLynx> you are on node/io, right?
[13:54:26] <jr3> correct, node
[13:54:29] <StephenLynx> http://mongodb.github.io/node-mongodb-native/2.0/api/
[13:55:05] <jr3> I'm actually using mongoose
[13:55:09] <StephenLynx> yeah, the driver is not native by itself, but it does use native code in its foundation, given the chance.
[13:55:13] <StephenLynx> I suggest not using mongoose.
[13:55:25] <jr3> and trying to get into the "native driver"
[13:55:40] <StephenLynx> it's slow, badly designed, and handles _id poorly.
[13:55:59] <StephenLynx> I just call it "the driver".
[13:56:10] <StephenLynx> because its officially supported by 10gen.
[13:56:18] <jr3> so let me get this straight
[13:56:56] <jr3> mongoose sits on top of the driver, which isn't created by the mongo team?
[13:57:05] <StephenLynx> mongoose isn't.
[13:57:08] <StephenLynx> the driver is.
[13:57:26] <StephenLynx> and I don't know what it sits on top of, but I assume it's the driver.
[13:57:32] <jr3> ok, so this docs http://docs.mongodb.org/manual/reference/method/Bulk.find.update/#Bulk.find.update
[13:57:48] <jr3> cannot be applied to the driver?
[13:58:16] <StephenLynx> it can.
[13:58:23] <StephenLynx> there is a bulk something
[13:58:38] <StephenLynx> http://mongodb.github.io/node-mongodb-native/2.0/api/Collection.html#bulkWrite
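
The bulk equivalent in the official Node.js driver looks roughly like this — a sketch; filters and update documents are hypothetical:

```javascript
collection.bulkWrite([
  { updateOne:  { filter: { name: 'a' },     update: { $set: { flag: true } } } },
  { updateMany: { filter: { active: false }, update: { $set: { archived: true } } } }
], function (err, result) {
  if (err) return console.error(err);
  console.log('modified:', result.modifiedCount);
});
```
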
[14:03:02] <OhMyGuru> hi
[14:04:11] <OhMyGuru> i need to restore data from mongo 2.6 to mongo 3.0, it is about 60GB. I did it, but it's very very slow... about 10.5 hours
[14:04:23] <OhMyGuru> how can i improve this ?
[14:08:08] <deathanchor> how to speed up the restore?
[14:08:16] <deathanchor> use faster disks?
[14:08:54] <cheeser> basically
[14:09:31] <deathanchor> OhMyGuru: are you using --dbpath or connecting to a mongod?
[14:09:39] <deathanchor> --dbpath is always faster for me
[14:19:59] <makinen> ROLLBACK>
[14:20:04] <makinen> what does this mean?
[14:20:55] <makinen> oh and is it possible to convert a replica set back to a standalone server?
[14:21:02] <cheeser> makinen: http://docs.mongodb.org/manual/core/replica-set-rollbacks/
[14:21:23] <cheeser> yes. remove the replSet config from you config file/options and restart
[14:23:37] <deathanchor> can mongodb make a birthday cake?
[14:24:11] <makinen> did it but after restarting it says "error: { "$err" : "not master and slaveok=false", "code" : 13435 }"
[14:41:33] <deathanchor> you are still running it with a replSet option somewhere
[14:43:43] <makinen> ah there was a secondary node still running
[14:45:42] <makinen> I was a bit afraid that I lost the data on the standalone server :)
[14:46:00] <makinen> but still there
[14:47:44] <makinen> now If I want to set up a replica set and synchronize the data from the old standalone server to the new secondary node how should I set up the set?
[14:48:09] <makinen> should I give a higher priority to the old standalone so it would be elected as a primary node?
[14:56:37] <makinen> what is going to happen if the empty node has been elected as primary?
[15:01:32] <deathanchor> did you rs.initiate on the real primary?
[15:02:30] <makinen> yes I did
[15:02:42] <deathanchor> if yes, any new member you add with rs.add() would need to catch up to the primary opid before becoming eligible to be elected.
[15:03:42] <deathanchor> you basically can rs.add() then rs.reconfig one to have a higher priority, or just set your primary to have a higher priority now before you add
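
deathanchor's suggestion in shell form — a sketch, assuming member 0 is the old standalone:

```javascript
var cfg = rs.conf()
cfg.members[0].priority = 2   // higher than the default 1, so this node is preferred as primary
rs.reconfig(cfg)
```
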
[15:05:50] <makinen> so if I call rs.initiate() on the standalone and add the secondary with rs.add(), the secondary will synchronize with the old standalone?
[15:07:05] <makinen> and after synchronizing the elections might occur but there's no risk to lose data
[15:10:25] <Doyle> Hey. Can you use multiple drivers with a single mongodb setup? Is the driver just the connection interpreter? I know there's a separate component that does discovery now...
[15:12:40] <cheeser> what?
[15:14:23] <Doyle> Can you use the scala and php drivers at the same time for example
[15:14:33] <cheeser> from different apps? of course.
[15:14:59] <Doyle> I thought so, but wanted to be sure. I don't like assuming things.
[15:15:11] <Doyle> tyvm
[15:16:23] <cheeser> np
[15:40:35] <jr3> if I use the underlying native mongo api __v doesn't seem to update, is __v a mongoose thing?
[15:40:46] <cheeser> yes
[15:44:22] <Doyle> LOL "Never set journal.enabled to false on a data-bearing node." Best warning in the docs yet.
[16:07:18] <deathanchor> tokumx has no journaling. but always has crash protection (somehow).
[16:14:01] <sppadic> ok dumb query here - a user with readWrite privileges to db should be able to restore a dump??
[16:14:34] <sppadic> odd message about not knowing what to do with dump file..
[16:15:15] <sppadic> db is version 2.6.9
[16:45:58] <shlant> morning all. Question about ec2 drive options. If I have a read heavy app with a small >1GB dataset, is it even worth it to go with PIOPS? or would SSD be fine? or does magnetic have better IOPS?
[16:46:23] <deathanchor> SSD and Magnetic have IOPS based on the size you pick
[16:46:59] <deathanchor> so if you want lots of IOPS, just get a bigger drive
[16:47:17] <deathanchor> but it's still a shared drive
[16:47:49] <shlant> deathanchor: so magnetic is also IOPS per GB?
[16:48:04] <deathanchor> so someone else could eat up some IOPS from you, hence the provisioned iop drives you get those IOPS no matter what
[16:48:24] <deathanchor> I believe so, just look at the docs on the aws ec2 info
[16:50:09] <deathanchor> http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html
[16:51:24] <shlant> deathanchor: very helpful, thanks!
[16:51:28] <deathanchor> this has pretty stuff http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
[16:51:37] <shlant> and are RAM requirements something I have to test?
[16:51:48] <shlant> or sort of based on data size?
[16:52:26] <deathanchor> depends on your use-case, but you minimally want to have your indexes in-memory
[16:53:36] <shlant> yea I had a m3.medium (3.75GB) and it was just starting to swap which plateaued my RPS
[16:53:50] <shlant> so maybe a m3.xlarge
[17:53:30] <makinen> what else should I do on a secondary node besides starting mongod with replSet?
[17:53:58] <makinen> on the primary I expanded the replica set with rs.add()
[17:55:10] <cheeser> to init a replSet? that should do it.
[17:56:37] <makinen> > rs.conf()
[17:56:37] <makinen> null
[17:57:12] <makinen> doesn't seem to work
[17:58:24] <cheeser> are you on the primary?
[17:59:09] <makinen> no that was from the secondary
[17:59:39] <cheeser> run rs.status() on the primary and see if it knows of the secondaries yet.
[17:59:57] <makinen> oh, the primary found the secondary
[18:00:26] <makinen> though some error occurred on the secondary "errmsg" : "still initializing"
[18:00:41] <makinen> I called rs.initiate() on secondary once
[18:00:46] <deathanchor> no!
[18:00:58] <makinen> :(
[18:01:08] <deathanchor> rs.initiate initiates the replica set; you only do that on the first machine.
[18:01:22] <makinen> alright how can I revert the secondary node?
[18:01:35] <deathanchor> second machine you just start the mongod with the replSet option and the primary will do all the work
[18:01:55] <makinen> okay :)
[18:02:04] <deathanchor> stop mongod, clear the dbpath directory of contents and start up mongod again
[18:02:14] <cheeser> stop that secondary, remove the data dir (leave the config file if you have it) and bring it up again
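
The recipe cheeser and deathanchor describe, as shell steps — a sketch; the dbpath and service name are hypothetical:

```sh
sudo service mongod stop
rm -rf /opt/mongodata/*    # wipe only the secondary's data directory
sudo service mongod start  # must still be started with the --replSet option
```
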
[18:02:19] <arussel> anyone knows of a script that sends metrics to graphite ?
[18:02:41] <arussel> I've found plenty for newrelic, but can't find anything for graphite
[18:03:13] <x1337807x> Sounds like an excellent reason to use New Relic
[18:03:31] <arussel> too expensive
[18:03:41] <cheeser> old relic then?
[18:03:45] <arussel> :-)
[18:04:47] <arussel> I'm going to try that one: https://gist.github.com/thpham/9060170
[18:07:53] <deathanchor> makinen: I would highly recommend doing the M102 mongo course (free online).
[18:07:54] <arussel> but I don't even know if this is a daemon or if I should cron it
[18:09:31] <deathanchor> arussel: looks to run once, so a cron I believe
[18:12:46] <arussel> how often do you want to do it ?
[18:14:09] <arussel> would every 10s be an overkill ?
[18:18:18] <joeyjones> Probably
[18:19:15] <joeyjones> crons only run every minute
[18:22:32] <deathanchor> this is going to closely mimic mms
[18:22:54] <deathanchor> just from glancing at what the script does
[18:25:53] <makinen> why are there two mongod instances on different ports
[18:26:02] <makinen> tcp 0 0 0.0.0.0:27017 0.0.0.0:* LISTEN 17247/mongod
[18:26:05] <makinen> tcp 0 0 0.0.0.0:28017 0.0.0.0:* LISTEN 17247/mongod
[18:26:11] <makinen> which one should I use with rs.add?
[18:26:41] <deathanchor> you tell us... ps -ef |grep mongod
[18:27:28] <makinen> root 17247 16789 2 21:21 pts/3 00:00:05 ./mongod --dbpath=/opt/mongodata --logpath=/var/log/log --replSet=rs0
[18:28:32] <saml> makinen, 28017 is HTTP
[18:28:48] <makinen> ah alright
[18:30:39] <makinen> "stateStr" : "UNKNOWN"
[18:30:45] <makinen> "errmsg" : "still initializing"
[18:30:57] <makinen> does that mean the secondary is synchronizing with the primary?
[18:36:12] <deathanchor> check logs on secondary
[18:36:19] <makinen> well I guess not since the data directory isn't growing
[18:37:13] <makinen> ah, the secondary can't resolve the primary's dns address
[18:38:14] <makinen> or Tue Jun 9 21:35:41 [rsStart] getaddrinfo("safetesti.pointsit.fi") failed: Name or service not known
[18:38:17] <makinen> Tue Jun 9 21:35:41 [rsStart] replSet can't get local.system.replset config from self or any seed (yet)
[18:38:52] <makinen> the host name and dns name on the primary differs from each other
[18:41:03] <makinen> state changed to RECOVERING
[18:42:24] <makinen> nice it seems to be synchronizing =)
[18:50:33] <deathanchor> makinen: again, I'll say you should do this course if you haven't: https://university.mongodb.com/courses/M102/about
[18:51:35] <deathanchor> hmm.. seems to have been changed a bit since I took it a few years ago
[18:52:32] <deathanchor> I kinda remember sharding being in there, but I don't see it there now
[18:54:19] <makinen> hmm looks useful course
[18:54:31] <deathanchor> very
[18:55:52] <deathanchor> sharding question: is the number of chunks a good indicator if that shardkey is growing or not?
[18:58:41] <deathanchor> counting chunks is less expensive (time/dblock/etc) than running a .count()
[19:08:30] <bigsky> hi all
[19:08:34] <deathanchor> or do shard chunks also have the padding?
[19:08:37] <bigsky> what mongo xxxname means?
[19:09:25] <deathanchor> bigsky: we don't understand what you are talking about.
[19:10:00] <bigsky> deathanchor: i mean the command with argument xxxname
[19:10:06] <bigsky> deathanchor: mongo xxxname
[19:14:44] <deathanchor> bigsky: mongo command line takes 2 positional arguments: [database] [scriptfile.js]
[19:17:15] <bigsky> deathanchor: which database it will connect when using mongo without any argument?
[19:17:25] <deathanchor> test
[19:55:32] <Doyle> Does this look about right for a geo distributed replication-set with sharding? https://drive.google.com/file/d/0B5g2nsz5NekdSnB1RVoyeC1FM1k/view?usp=sharing
[20:58:25] <tubbo> hey guys, i've read the docs for mongodb aggregations (http://docs.mongodb.org/manual/core/aggregation-introduction/) but i still don't understand exactly why it behaves so differently from queries
[20:58:53] <tubbo> i'm using mongoid and i'm fairly new to mongo all day long, so bear with me...but we're having problems with an aggregation that takes over 5 sec to respond...
[20:59:10] <tubbo> i want to either get rid of the timeout or optimize the aggregation so that it doesn't time out
[20:59:17] <tubbo> but i can't figure out a great way to do either one of these things...
[21:16:56] <cyrus_mc> Getting FileAllocator: posix_fallocate failed: errno:28 No space left on device falling back . Yet the disk mongodb is on has 1.5 G free
[21:17:16] <cyrus_mc> plenty of inodes free as well
[21:19:21] <GothAlice> cyrus_mc: MongoDB allocates on-disk stripes in power of two sizes. I.e. the first allocation might be 256MB, but the next will be 512MB. Then a GB. (Not exact values, but you get the idea.) Thus: 1.5GB is too small to allocate a new full-size stripe, and it's complaining.
[21:20:30] <cyrus_mc> GothAlice: thank you for the explanation
[21:21:00] <GothAlice> If you "ls -lh" in the MongoDB data directory, you can see the individual stripe sizes. The allocation sizes typically only go up, stripe-after-stripe.
[21:21:23] <cyrus_mc> GothAlice: yep. See that. Allocating 2G files
[21:21:39] <GothAlice> If that's too large, you may be able to switch to using --smallFiles, though I'm not sure what impact that would have on the current stripes.
[21:21:48] <cyrus_mc> GothAlice: thanks again
[21:21:51] <GothAlice> (That option steps it back a few notches on the power of two starting point.)
[21:49:39] <brotatochip> hey guys, so I've created a replica set and I'm in the process of importing a dump from production of ~9.2gb dumped (~34gb live) and it is taking a really, really long time building an index (about 2 hours) and the IOPS is maxed on the volume (900) with 100% utilization
[21:49:43] <brotatochip> any ideas why this is taking so long?
[21:50:27] <brotatochip> Does mongodb indexing require a shit ton of IOPS?
[23:20:05] <brotatochip> it's also using 30gb of ram
[23:20:33] <brotatochip> so no one has any idea as to the IO demands of MongoDB when indexing?
[23:21:16] <cheeser> i don't, no.
[23:21:47] <brotatochip> This is a bit of a show stopper because I can't rely on mongorestore for disaster recovery
[23:21:56] <brotatochip> :/
[23:22:18] <brotatochip> It's been running for 3.5 hours now, on JUST that indexing operation
[23:25:54] <brotatochip> Even at 4000 IOPS this would have taken 47 minutes
[23:26:06] <brotatochip> How the hell does anyone even use MongoDB in production with numbers like this
[23:32:54] <joannac> brotatochip: is it progressing?
[23:47:20] <jecran> hi guys. using node I am just trying to do a simple query using an $or... can someone please tell me what I am doing wrong? https://gist.github.com/anonymous/150e1262ee753b7e6057 I can read the db fine without the query
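
For reference, a $or query with the Node.js driver looks roughly like this — a sketch with hypothetical field names, assuming `collection` is an open Collection handle:

```javascript
// $or takes an array of full query clauses:
collection.find({ $or: [ { status: 'new' }, { retries: { $lt: 3 } } ] })
  .toArray(function (err, docs) {
    if (err) return console.error(err);
    console.log(docs);
  });
```
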
[23:51:08] <joannac> sigh