PMXBOT Log file Viewer


#mongodb logs for Wednesday the 15th of October, 2014

[00:11:34] <hydrajump> so I'm working on writing a script that will determine which SECONDARY in a replica set is the most in sync with the PRIMARY.
[00:11:58] <hydrajump> looking through the docs I run `use admin` and `db.runCommand( { replSetGetStatus : 1 } )`
[00:12:06] <hydrajump> this returns the same output as rs.conf()
[00:13:21] <hydrajump> can I get at each member, see if they are a SECONDARY and then save its optime in a variable for comparison using the mongo shell?
[00:16:01] <SixEcho> anyone have an issue with slow mongoimport from a pipe/import? using $proc | mongoimport or $mongoimport < file only processes 400/s, whereas $mongoimport --file=file processes 2000/s
[00:16:10] <Boomtime> hydrajump: db.runCommand( { replSetGetStatus : 1 } ) != rs.conf()
[00:17:27] <Boomtime> hydrajump: your other question is largely a javascript question, you just need to manipulate the JSON object that is returned by the replSetGetStatus command
[00:19:27] <hydrajump> Boomtime: sorry I meant to say same as rs.status()
[00:21:28] <hydrajump> ok I will look at some javascripting. Would I need to determine which member is primary first and run the optime comparison from the primary as each member will show slightly different result?
[00:22:15] <Boomtime> yes
[00:22:27] <Boomtime> also, you should try typing rs.status at a shell
[00:22:42] <Boomtime> no brackets, omit the () part
[00:23:12] <Boomtime> it will print how the method is implemented, and you'll see that it is defined as: function () { return db._adminCommand("replSetGetStatus"); }
[00:58:33] <hydrajump> Boomtime: thanks I've got the following for now https://gist.github.com/anonymous/8befd36fc34abe97e182
[01:01:36] <Boomtime> strictly speaking, _adminCommand is internal and might change in a new shell version, you can db.adminCommand({replSetGetStatus:1}) to avoid this possibility
[01:02:26] <hydrajump> oh
[01:02:59] <Boomtime> what you have will work fine, i'm just pointing out you would be better off using the .adminCommand variant for assured forwards compatibility
[01:04:01] <Boomtime> also, you will want to consider what to do if there is no primary
[01:05:42] <hydrajump> Boomtime: no I appreciate the feedback I've made the change. I'm figuring out how this script will be run and then, if it is not the primary, how it will connect to the primary
[01:07:07] <hydrajump> From a non-mongodb instance I would run `mongo <a repl member>:27017 myjsfile.js`
[01:09:39] <Boomtime> in the shell, you can change targets with either connect() or new Mongo() if you want a connection separate from 'db'
[01:13:48] <hydrajump> ok new update https://gist.github.com/anonymous/8e695ea317d07eec94c6
[01:13:56] <hydrajump> this is my first javascript btw ;)
[01:15:52] <Boomtime> yeah, I can tell :D
[01:15:59] <Boomtime> your first line has invalid syntax
[01:16:28] <hydrajump> :(
[01:16:41] <Boomtime> I think you are trying to add a method to the "rs" object, which can be done thusly: rs.whois_boss = function () {
[01:17:11] <Boomtime> the syntax you have used is the declarative syntax for the global namespace
[01:18:06] <Boomtime> i.e. this one is fine: function connectToPrimary(primaryMember) {
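The two syntaxes Boomtime contrasts, side by side; `rs` here is just a plain object standing in for the shell's built-in replica-set helper:

```javascript
var rs = {};

// Assignment syntax: attaches a method to the rs object.
rs.whois_boss = function () {
  return "the primary";
};

// Declarative syntax: defines a function in the global namespace.
function connectToPrimary(primaryMember) {
  return "connecting to " + primaryMember;
}
```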
[01:18:19] <hydrajump> Should I not try to add a method to "rs" and instead name it with my own namespace, e.g. hydrajump.whois_boss ?
[01:18:37] <Boomtime> it's your shell, you should do whatever works for you
[01:18:58] <Boomtime> from the name and the intention it seems perfectly fine to belong in "rs"
[01:19:06] <hydrajump> ok
[01:19:36] <Boomtime> also, i think "connect" shell method sets "db" for you
[01:20:13] <Boomtime> i.e. the connect shell helper is a wrapper that does a graceful version of "db = new Mongo(x)"
[01:20:18] <hydrajump> ok I've changed the two functions rs.whois_boss and rs.repl_lag as you explained
[01:21:39] <hydrajump> for the connectPrimary can I just do conn = new Mongo(primaryMember); I mean I don't need to enter a specific db or should I just set it to admin?
[01:22:41] <Boomtime> yes, but having a connection doesn't let you do anything, all commands are executed against a database
[01:23:23] <Boomtime> so you would have to do conn.getDatabase(...).runCommand (...)
[01:23:30] <Boomtime> or whatever the method is
[01:23:40] <Boomtime> might just be .getDB actually
[01:23:57] <Boomtime> this is also something that connect helper does for you
[01:28:13] <hydrajump> ok I tested just running db = connect("xxx:27017"); and it didn't work unless I specified a db as you said like db = connect("xxx:27017/admin");
[01:30:10] <Boomtime> no, i don't think you need the "db =" part at all
[01:30:41] <hydrajump> ok I thought I needed that to close the connection when i'm done
[01:30:44] <hydrajump> db.close()
[01:31:00] <hydrajump> or does it close when I exit that connection?
[01:31:02] <Boomtime> there is no db.close()
[01:31:12] <Boomtime> db points to a database not a connection
[01:32:25] <Boomtime> also, you should look at the implementation of connect
[01:32:31] <Boomtime> just type "connect" at a shell
[01:32:42] <Boomtime> you'll see the last line says "return db"
[01:33:01] <Boomtime> thus, db = connect(...) amounts to db = db if successful
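The pattern Boomtime describes, mocked in plain JS: connect() sets the global `db` and then returns it, which is why `db = connect(...)` is redundant. The real helper lives in the shell; this stand-in only imitates its shape:

```javascript
var db = null;

// Hypothetical stand-in for the shell's connect() helper; the real one
// does roughly db = new Mongo(host).getDB(name) with error handling.
function connect(uri) {
  db = { uri: uri };
  return db;        // last line is "return db", as the shell's source shows
}
```

So after `connect("host:27017/admin")`, `db` already points at the admin database and the return value is the same object.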
[01:39:27] <hydrajump> ok I changed that connect function https://gist.github.com/anonymous/394c13d3b6ca984c768e#file-gistfile1-js
[01:39:51] <hydrajump> I hope I'm on the right track now and only missing the optime comparison
[02:07:54] <Boomtime> hydrajump: I just remembered (I'm so sorry), db.isMaster() tells you who the primary is directly
[02:08:15] <Boomtime> well, db.isMaster().primary
[02:18:42] <hydrajump> Boomtime: I would still need to connect to any one member to check that, right?
[02:28:06] <Boomtime> hydrajump: yes
[02:28:36] <Boomtime> it saves you the javascript iteration of the array looking for the server status of PRIMARY
[02:28:57] <Boomtime> a single document tells you definitively if that member can see a primary
[02:29:04] <Boomtime> and if so, who it is
[02:36:39] <hydrajump> Boomtime: thanks for that. https://gist.github.com/anonymous/8fd94be4f720e8529928
[02:37:41] <hydrajump> I think I've got all the logic for finding who is primary, if not connecting to the primary, getting the optimes of the primary and all secondary members, computing the optime diff and choosing the lowest optime diff.
[02:37:56] <Boomtime> yep
[02:38:21] <Boomtime> be aware that your code now has a concurrency problem by calling db.isMaster twice
[02:38:42] <Boomtime> it could have changed and the second call might be blank
[02:39:09] <Boomtime> store the result instead: var im = db.isMaster()
[02:39:32] <Boomtime> then if( im.primary ) <- check if not an empty string
[02:40:14] <hydrajump> fixed thank you!
[02:40:47] <hydrajump> does the code look ok otherwise..maybe not the most efficient or prettiest..
[02:41:13] <Boomtime> i did not check your algorithm, that is debugging territory - your approach looks right
[02:41:54] <Boomtime> i.e map/reduce use or timediff comparisons etc. those are yours to get right - again, the approach sounds solid
[02:45:53] <hydrajump> ok I'm unsure of my lowestoptimeDiff. I will have to look at that. in my manual testing the diff is the same for each secondary, but as you said that might not always be the case.
[03:14:17] <SixEcho> anyone have an issue with slow mongoimport from a pipe/import?   using $proc | mongoimport   or  $mongoimport < file   only processes 400/s, whereas  $mongoimport --file=file  processes 2000/s
[03:16:27] <Boomtime> SixEcho: apparently no-one has (I saw you asked earlier but forgot sorry), certainly I have not heard of that before
[03:17:37] <SixEcho> Boomtime: it's pretty easy to test if you have a larger import file, just time piping it in vs using the -f option
[03:18:23] <SixEcho> Boomtime: I saw complaints about the stdin read buffer size back in 2011, but didn't see any links past that to an open ticket/etc.
[03:23:45] <Boomtime> SixEcho: with your pipe command, what is "file" - is it an existing file on disk or output from some process?
[03:24:59] <SixEcho> either… can be process | mongoimport… or mongoimport < file… or cat file | mongoimport ~ all are substantially slower than mongoimport -f
[03:26:04] <Boomtime> curious, that is not what i would expect
[03:26:13] <SixEcho> Boomtime: perhaps there is some buffering problem?
[03:27:33] <Boomtime> could be
[03:31:06] <SixEcho> Boomtime: https://jira.mongodb.org/browse/SERVER-2641
[03:32:48] <Boomtime> which leads to this: https://jira.mongodb.org/browse/TOOLS-71
[03:34:23] <Boomtime> SixEcho: looks like you independently solved the known issue, too bad there is no solution other than --file
[03:34:41] <Boomtime> solved = discovered
[03:35:34] <SixEcho> Boomtime: i'll have to live with it, importing multi-GB files which are highly compressed, but must uncompress them to a file instead of piping… #lame
[03:36:59] <Boomtime> indeed
[03:38:51] <hydrajump> Apart from the optime logic/algorithm. Boomtime can you please take another look at my script and how I think it should be invoked https://gist.github.com/anonymous/7b95635ff1b0ca95b740 I've added the snapshotting for the EBS volumes. I think all the logic is there now. I will need to refactor the comparison and aws part.
[03:44:01] <Boomtime> i can't speak for most of your additions, although i have one question: any reason to lock before querying for EBS volumes? seems like something you can do without the lock and would reduce your time spent inside the lock
[03:44:29] <Boomtime> i.e. createEBSSnapshots() seems like the only thing that should be inside the lock/unlock section
[03:45:08] <Boomtime> also, i really couldn't tell if any of those commands are correct
[03:45:43] <hydrajump> Boomtime: you're absolutely right. I've put the snapshots function between the lock/unlock section. Thanks for spotting that!
[03:46:52] <hydrajump> yeah the AWS commands just need some more work, but I was interested in feedback on the mongo specific stuff. If I run this javascript from a remote machine using `mongo remote_host:27017/admin myjsfile.js`
[03:48:46] <Boomtime> give it a try, you can always do something innocuous (debug print?) in place of the snapshots as a test
[03:50:51] <hydrajump> sure. If my script has runProgram then, if I understand how that works, it will try to run the aws command line tool on the mongodb host. This is probably not a good idea and it would be better if it was run on the remote admin machine.
[03:51:41] <Boomtime> runProgram can only run it from wherever the script is executing - i.e. wherever the mongo shell is running
[03:51:54] <Boomtime> is that what you meant?
[03:52:39] <hydrajump> oh I thought that `mongo remote_host:27017/admin myjsfile.js` connects to the remote_host and runs the myjsfile on that mongodb instance.
[03:53:05] <hydrajump> so runProgram would look for aws tools on the mongodb instance.
[03:53:56] <Boomtime> that would be an enormous security risk
[03:54:09] <Boomtime> the js file executes in the shell
[03:55:08] <Boomtime> consider the js file exactly the same as if you pasted the entire contents into the shell prompt - it's basically the same
[04:22:08] <hydrajump> Boomtime: I just ran a quick test to understand and I ran runCommand("touch", "testing") and that file was created on the admin machine not on the database. I'll test each part of my script tomorrow and hopefully it all goes well. Thanks for all your help Boomtime!
[08:15:14] <quattr8> i’m using the $text search, i’m trying to do a search for #Again, how would I match something starting with a hashtag?
[08:15:26] <quattr8> searching for “Again” won’t get any results
[09:31:21] <zell> hello there, I put javascriptEnabled = false on my mongodb configuration to avoid injection. Is There a way for me to still use javascript somehow (with a cron task for instance) ?
[10:25:57] <dainis> Hi I have a question about replication, we are seeing up to 10s lag when executing rs.printSlaveReplicationInfo() command, ping using rs.status() is 0, disk utilization is at most 5%, when we turn on any application which performs writes we are seeing 100ms up to 5000ms delays, iostat still reports only up to 5% disk utilization, mongodb is running on 16GB 4 core instances
[11:00:31] <Siyfion> is there a way to update all documents in a collection, such that I can set a value on a subdocument in an array?
[11:01:26] <Siyfion> ie. { "fields.data": "TEST" } where fields is an array of subdocuments?
[11:13:15] <joannac> Siyfion: nope. script it
[11:13:45] <joannac> if you're needing to do this a lot, it might be a sign that those subdocuments should be top level documents
[11:14:31] <cheeser> http://docs.mongodb.org/manual/reference/operator/update/positional/#up._S_
[11:17:13] <Siyfion> joannac: Okay thanks for that
[11:17:48] <Siyfion> joannac: So basically, iterate through each document, update the subdocument, save. Rinse & repeat?
[11:23:38] <cheeser> or use the $ positional operator.
[11:25:29] <dzelzs> hi everyone, I have encounter strange problem with mongo. My program sending to mongo data roughly 1MB/s, and 92 seconds of 100 is taken by that process. I'm using mongo-cxx-driver, legacy branch. Mongo and program is placed on the same PC. Could anyone say, why is it so slow?
[11:32:52] <Siyfion> cheeser: I don't think that will work as I need to update more than one subdocument
[11:33:05] <Siyfion> cheeser: the positional only works with the first occurrence
[11:33:21] <cheeser> ah. yeah. then you're stuck iterating.
[11:33:36] <Siyfion> cheeser: Yuuuurrrp :(
[11:34:11] <cheeser> time to go catch a train home. good luck. :)
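The iteration cheeser and joannac point Siyfion at, in plain JS over a mocked collection (an array standing in for a cursor). In the shell this would be `db.coll.find().forEach(...)` followed by a save per document; the `$` positional operator only reaches the first matching array element, which is why every subdocument has to be touched by hand:

```javascript
// Set a field on every subdocument in the "fields" array of every doc.
function setAllSubdocs(docs, value) {
  docs.forEach(function (doc) {
    doc.fields.forEach(function (f) {
      f.data = value;
    });
    // in the shell: db.coll.save(doc) would go here
  });
  return docs;
}

// Mocked collection matching Siyfion's { "fields.data": "TEST" } shape.
var docs = [
  { _id: 1, fields: [{ data: "TEST" }, { data: "TEST" }] },
  { _id: 2, fields: [{ data: "old" }] }
];
```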
[11:36:35] <Bobsumo> Hey, wondering if anyone could share some advice about streamlining our aggregations which currently run on a three node replica set
[11:37:33] <Bobsumo> We've got about 120million documents stored and, cough, our little setup's run out of juice. Queries taking toooooo long.
[11:38:42] <Bobsumo> What's the generally recommended solution to this other than running nightly mr job
[12:44:59] <edrocks> is it bad to have an id for each item in an array?
[14:34:40] <spenguin> I'm not sure if this is the right place to ask about the node mongodb driver, but I'm trying to basically ensure that when inserting into a collection, no two documents will share the same values for a pair of fields. Here's what I have so far https://gist.github.com/rimunroe/1cfd6353378a2b977bde
[14:35:07] <Derick> spenguin:
[14:35:09] <Derick> erm
[14:35:34] <spenguin> is it possible to do this during the insert?
[14:35:59] <spenguin> (or do I have to call ensureIndex with the unique: true option?)
[14:36:02] <Derick> no, uniqueness can only be guaranteed with (unique) indexes on a per-document level
[14:39:16] <spenguin> ah okay, thanks
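What Derick's answer amounts to: uniqueness over a field pair comes from a unique compound index, not from the insert call itself. In the 2.x shell that is `db.things.ensureIndex({ firstField: 1, secondField: 1 }, { unique: true })` (collection and field names hypothetical). The invariant the index enforces, sketched in plain JS:

```javascript
// Reject inserts whose (firstField, secondField) pair was seen before,
// the way a unique compound index does server-side (via an E11000 error).
function insertUnique(coll, seen, doc) {
  var key = doc.firstField + "\u0000" + doc.secondField;
  if (seen.has(key)) throw new Error("duplicate key");
  seen.add(key);
  coll.push(doc);
}
```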
[15:26:59] <julianhlam> Hi all, I have a question about my mongo install. I have two upstart scripts in /etc/init, one is mongod.conf, and the other is mongodb.conf
[15:27:11] <julianhlam> is one left over from when I accidentally installed Mongo 2.4?
[15:27:46] <Derick> yes
[15:28:27] <julianhlam> thanks Derick -- am I correct in assuming that Upstart started mongodb.conf first, and then errored out when it tried to start mongod.conf?
[15:28:35] <julianhlam> and if so, is there a log of that somewhere...
[15:29:10] <Derick> julianhlam: sorry, I don't know which one is right. It confuses me too ;-)
[15:29:31] <Derick> in any case, I only have "mongodb.conf" in /etc/init
[15:30:10] <julianhlam> Derick, ok, good to know.
[15:33:21] <julianhlam> Derick, looking at the mongodb.org guide, which seems to suggest mongod.conf is the one they use now
[15:33:21] <julianhlam> http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/
[15:33:26] <julianhlam> aaaanyhow...
[15:44:23] <bMalum> hey - i’m new to MongoDB and i just realized there is no Master-Master replication … so a “non master” node can't be read from or written to?
[15:44:49] <Derick> you can read from them, but not write to them
[16:11:54] <bMalum> Derick, can i write to only one but read from every?
[16:12:34] <Derick> bMalum: yes
[16:12:46] <bMalum> so i have to implement a layer above to create a sync between DBs, or use CouchDB which has Master-Master replication, but i have never used it :/
[16:13:10] <Derick> why do you need multi master writes?
[17:03:17] <kakashiAL> I am using restClient and sending a POST, it works, if I send a GET I get no error at all, restClient only gives me a progress bar with "sending data..." and after a while I get my normal restClient screen back with no response or error
[17:09:51] <ngl> Is there a flag in the options of createCollection to say, "Force overwrite?" I am finding there is a problem if the collection I want to simply overwrite fresh has any documents. I get a WriteConcern error. I want to say, "Really, I want you to wipe that collection out/overwrite it."
[17:16:06] <ngl> I took this: if(errorOptions.w && errorOptions.w != 0 && callback == null) {
[17:16:40] <ngl> And tried to add: .createCollection('templates.files', {w:0}, function(err, result) {
[17:18:08] <ngl> Alas... Error: writeConcern requires callback
[17:20:58] <bMalum> Derick - i have to build a database which is shared to all nodes - and they have different or other frontend layers above - but every change should be shared to all DBs …
[18:59:42] <mrmccrac> i know about sh.status(), but is there any command that will return output thats better to parse?
[18:59:49] <mrmccrac> particularly for chunks-per-shard
[19:00:30] <kakashiAL> I am using mongoose and its find() method but if I try to execute it with restClient, it sends data but I get no response until it gives up, in my terminal I get only "GET /person - - ms - -"
[19:10:12] <spenguin> is there a way to use the node driver to insert a document into a collection without generating an _id field?
[19:10:56] <spenguin> I have a different indexed field
[19:11:56] <cheeser> you have to have an _id field
[19:15:04] <spenguin> oh okay
[19:17:25] <kakashiAL> anyone?
[19:51:19] <mrmccrac> i know about sh.status(), but is there any command that will return output thats better to parse?
[19:51:19] <mrmccrac> particularly for chunks-per-shard
[19:59:24] <cheeser> sh.status().someField ?
[20:04:13] <mrmccrac> no it does a print so returns undefined
[20:04:27] <mrmccrac> why it doesn't return an object i dont know
[20:05:20] <cheeser> no, it returns a document. it just gets printed out by default.
[20:05:32] <cheeser> rs.status() and db.status() behave the same way.
[20:06:21] <mrmccrac> sh.status() does not return a document
[20:06:52] <mrmccrac> it returns undefined.
[20:07:48] <mrmccrac> db.stats() returns an object
[20:10:43] <joannac> mrmccrac: you're right. "The db.printShardingStatus() in the mongo shell does not return JSON. Use db.printShardingStatus() for manual inspection, and Config Database in scripts."
[20:12:00] <cheeser> consistency!
[20:12:05] <mrmccrac> =\
[20:12:14] <cheeser> oh, well.
[20:13:52] <joannac> rs.status() is a document though
[20:15:08] <mrmccrac> i think i found how to get what im trying to determine at least, using db.chunks in the config database
[20:15:11] <joannac> the chunks / shard thing is a one line aggregation
[20:15:29] <joannac> I had to do it yesterday, it's super short
[20:16:02] <cheeser> i'll have to bug someone about that tomorrow
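joannac's "one line aggregation" is a `$group` over the config database's chunks collection, along the lines of `db.getSiblingDB("config").chunks.aggregate([{ $group: { _id: "$shard", chunks: { $sum: 1 } } }])`. The same reduction over mocked chunk documents in plain JS:

```javascript
// Count chunks per shard, the way the $group stage would server-side.
function chunksPerShard(chunks) {
  return chunks.reduce(function (acc, c) {
    acc[c.shard] = (acc[c.shard] || 0) + 1;
    return acc;
  }, {});
}

// Mocked config.chunks documents, reduced to the one field that matters here.
var mockChunks = [{ shard: "rs0" }, { shard: "rs0" }, { shard: "rs1" }];
```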
[20:21:40] <bpap> how do I add an EXISTING mongodb replica set to have its metrics monitored by MMS?
[21:06:48] <KurtisJ> Hello, I'm having trouble sharding across a cluster and was wondering if someone could help. When I attempt to enable sharding, I receive an error saying "error creating initial database config information :: caused by :: can't find a shard to put new db on". When I attempt to add a shard, however, I receive an error saying "could not verify conf
[21:06:48] <KurtisJ> ig servers were active and reachable before write". I've searched around and found nothing about this!
[21:13:24] <KurtisJ> anyone?
[21:17:39] <KurtisJ> surely somebody can help :D
[21:20:38] <mrmccrac> is this on linux? are they across different hosts? is iptables opened to allow them to communicate on the necessary hosts?
[21:20:40] <mrmccrac> just guessing
[21:20:53] <mrmccrac> necessary ports*
[21:21:58] <KurtisJ_> mrmccrac: hello! yes this is CentOS, and I'm trying to shard across 3 separate nodes
[21:22:24] <mrmccrac> in order for them to be able to communicate you'll need your iptables config to allow them to talk to each other i think
[21:22:49] <mrmccrac> if you're behind an enterprise firewall or something and are comfortable actually doing this you can shut off iptables and try it again
[21:22:57] <mrmccrac> and see if thats indeed the problem
[21:23:16] <mrmccrac> turning off iptables is only recommended if you know what you're doing :)
[21:25:26] <KurtisJ_> ahhh okay. I may have my iptables incorrectly configured. I think I'm allowing the daemon but not the config server. one moment please!
[21:25:34] <KurtisJ_> by the way thank you for your help!
[21:32:51] <KurtisJ_> mrmccrac: it worked!!!!!!!!! you have NO idea how much this has helped. The ansible roles I was using hard-coded the iptable Port for the config servers (incorrectly), so it was opening the wrong port!
[21:41:07] <mrmccrac> KurtisJ: its a common problem for many things other than mongo
[21:42:26] <mrmccrac> sometimes my iptables has been wrong when i was almost positive it wasn't
[21:48:17] <KurtisJ_> mrmccrac: that's really good to know. I've never had to look very closely at them before, but I certainly will now. Thanks again!
[21:48:23] <mrmccrac> no problem
[22:06:01] <WynterCove> I'm pretty new to mongodb, but was using it to store a lot of data for a small project. Somehow, all my data disappeared, and I don't know why. No backups, but journaling was on. Any way I can try and recover my data?
[22:06:58] <WynterCove> All the databases were still there, but there were no collections. Using a single server installed on Ubuntu using default config.
[22:09:24] <joannac> WynterCove: logs?
[22:18:35] <krawek> hello, I have a document without _id I don’t know why. how can I fix that?
[22:20:43] <WynterCove> joannac: http://paste2.org/y5Z2aa9M I know the data was there at 10am this morning, so I only pasted the logs after that.
[22:21:02] <WynterCove> I don't see anything in there that might indicate a server failure, but I've only skimmed it.
[22:21:21] <daidoji> krawek: it means that some other key has been set as your _id
[22:22:05] <daidoji> krawek: or this maybe http://stackoverflow.com/questions/12378320/mongodb-inserting-doc-without-id-field
[22:30:15] <joannac> WynterCove: repairDatabase
[22:31:58] <WynterCove> Tried that. Didn't help.
[22:32:49] <joannac> WynterCove: no, i mean, the repair database has the chance to lose data
[22:32:56] <joannac> also, the bunch of drops in the logs?
[22:33:08] <joannac> search for "CMD: drop"
[22:34:09] <WynterCove> Found CMD: drop in the logs. Somehow, it dropped every single collection in every database. I don't know where that came from.
[22:35:08] <joannac> all your connections are from localhost
[22:35:38] <WynterCove> Yes, they are. It's running on my local machine, and only being used by applications running on the same machine.
[22:36:34] <joannac> okay, well that's not really helpful for backtracking :(
[22:37:00] <joannac> anyway, there you go.
[22:37:34] <krawek> joannac: do you know something about my case? it’s a document without _id (out of millions)
[22:38:02] <joannac> krawek: the other docs in your collection have an _id?
[22:38:14] <krawek> yeah
[22:38:18] <krawek> this is really weird
[22:38:57] <joannac> not sure
[22:39:26] <WynterCove> joannac: So, I guess there's no way to recover the data, is there? Since all the collections were dropped?
[22:39:30] <krawek> what can I do to fix it? If I try to assign it an id a new one is created
[22:40:00] <joannac> WynterCove: do you have a support contract?
[22:40:23] <joannac> krawek: you can't update it to have an _id?
[22:40:58] <krawek> it didn’t work using mongoid, let me try using the mongo console directly
[22:41:22] <WynterCove> No. I installed this all on my own, for my own personal use. Thankfully, it was just a bunch of data for several small personal projects. It only affects me, and nobody else.
[22:41:50] <krawek> "errmsg" : "The _id field cannot be changed from {_id: null} to {_id: ObjectId('543ef805277ea5f8c8807eb8')}."
[22:42:04] <krawek> joannac: ^^
[22:42:21] <kali> krawek: you can't update id, you need to delete the document and save it again
[22:42:51] <krawek> is it safe to delete?
[22:42:59] <kali> krawek: nope :)
[22:46:54] <joannac> krawek: how did you get the document inserted in the first place?
[22:47:08] <krawek> I’m not sure
[22:47:29] <joannac> kali: why do you think it's not safe to drop and reinsert with proper _id?
[22:48:08] <krawek> I don’t know I’m just asking
[22:49:09] <krawek> db.things.remove({_id: null}, {justOne: true}) ?
[22:49:10] <kali> joannac: deleting is never safe, you actually erase things :)
[22:53:11] <joannac> :p
[22:53:46] <joannac> presumably he would do a find, save the document, delete the single document, and then reinsert the document he saved with an _id field
[23:29:41] <znn> how do you copy or clone a database from your localhost to a remote host using copydb?
[23:29:51] <znn> where exactly is copydb, i can't find it on my system
[23:30:13] <cheeser> http://docs.mongodb.org/manual/reference/method/db.copyDatabase/
[23:30:42] <znn> cheeser: i saw that
[23:30:51] <znn> that is for copying one database to another on the same host
[23:31:18] <znn> i need to send it to a remote host
[23:31:32] <znn> copydb or clone are what i need, and i choose copydb
[23:31:54] <znn> except that i don't know if copydb is a command line thing or a command to use within the mongo shell
[23:32:04] <znn> i don't think it's a mongo shell command
[23:32:05] <joannac> znn: there's clearly a "fromhost" parameter
[23:32:13] <joannac> and it's something you run in the mongo shell
[23:32:26] <znn> db.copyDatabase is something you run in mongoshell
[23:32:29] <znn> what about copydb?
[23:32:54] <joannac> copyDB is an admin command
[23:33:28] <znn> joannac: so that's a cli thing?
[23:33:32] <joannac> there's a wrapper (which is db.copyDatabase)
[23:33:33] <znn> not a mongo shell command
[23:33:38] <joannac> but if you want to run it yourself
[23:33:40] <joannac> http://docs.mongodb.org/manual/reference/command/copydb/#copy-on-the-same-host
[23:33:52] <cheeser> copyDatabase wraps the copydb command. says so right on the page.
[23:33:54] <joannac> actually, since you want to copy across hosts, http://docs.mongodb.org/manual/reference/command/copydb/#copy-from-a-remote-host-to-the-current-host
[23:33:58] <znn> cheeser: i see that
[23:35:13] <znn> joannac: do you know if it will use my ssh keys for logging in?
[23:35:31] <znn> the section below it
[23:35:32] <znn> nevermind
[23:36:19] <znn> hmm... looking at the docs, it still seems to be the same stuff i was reading earlier
[23:36:33] <znn> "Copy from a Remote Host to the Current Host"
[23:36:44] <znn> i want to copy from current host to remote host
[23:36:51] <joannac> then log into the other host?
[23:36:59] <joannac> and do the copyDB from there?
[23:37:59] <harph> is there a way I can see the incoming queries in real time? something like tailing a log or something like that?
[23:38:09] <znn> i would need to transfer keys around
[23:38:20] <cheeser> easy enough
[23:38:20] <znn> maybe i just want to copy a database to a file and then import that file from the remote
[23:38:25] <joannac> harph: you could turn on profiling I guess
[23:38:48] <joannac> znn: huh?
[23:38:58] <joannac> you log into the remote host with your own keys
[23:39:15] <joannac> you connect from that host to your mongod for the copyDB
[23:39:20] <joannac> what key moving is there?
[23:39:33] <harph> joannac: it's set to 2 but I only see stuffs about connections in the logs but I don't see any query coming in
[23:39:37] <znn> joannac: serialize the mongodb db into some format, say json, and import that json into another mongodb db on another server
[23:40:44] <znn> cheeser: it's a remote host i don't own, i wouldn't know how to export my keys so that i wouldn't be compromising my security, i.e. i need a private key to login to my machine, and copying my private key to the server is risky
[23:40:59] <znn> i could make a temporary one and then i would only be temporarily insecure
[23:41:12] <cheeser> mongodump. scp files. mongorestore on remote host.
[23:41:20] <znn> there we go
[23:41:24] <cheeser> magic!
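cheeser's three-step recipe as a dry-run sketch; the database name, remote host, and paths are hypothetical placeholders. The `run` wrapper only echoes each command, so you can check the plan before swapping it for real execution:

```shell
# Dry-run: print the commands instead of executing them.
DB=mydb
REMOTE=user@remote-host
run() { echo "+ $*"; }

run mongodump --db "$DB" --out dump/
run scp -r "dump/$DB" "$REMOTE:/tmp/$DB"
run ssh "$REMOTE" "mongorestore --db $DB /tmp/$DB"
```

Replace `run() { echo "+ $*"; }` with `run() { "$@"; }` once the hosts and paths are right.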
[23:58:07] <joannac> I'm still confused by znn's problem, but you seem to be happy, so all good