PMXBOT Log file Viewer

#mongodb logs for Thursday the 29th of September, 2016

[00:14:37] <Rc43> How to give "--rest" to "sudo service mongod start"?
[00:15:01] <Rc43> As I understand it, that's what I need to get the HTTP interface on 28017
[00:17:24] <Rc43> Oh, or just "rest = true" in /etc/mongodb.conf ..
[00:18:49] <Rc43> Hm, doesn't work
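For context: `service mongod` typically reads the YAML config at /etc/mongod.conf rather than the older /etc/mongodb.conf, which may be why `rest = true` had no effect. The YAML equivalent, per the docs of that era (the HTTP interface was deprecated in 3.2 and removed in 3.6), was roughly:

```yaml
net:
  http:
    enabled: true                # HTTP status interface on port 28017
    RESTInterfaceEnabled: true   # REST interface on top of it
```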
[00:24:15] <AlmightyOatmeal> https://jira.mongodb.org/browse/SERVER-17688 <-- open for almost a year and a half, yet the default storage engine doesn't support this feature, and https://docs.mongodb.com/manual/reference/command/parallelCollectionScan/ makes no mention of wiredTiger not supporting it
[00:24:19] <AlmightyOatmeal> sad sad day.
[00:32:42] <GitGud> hi there. is it a safe idea to copy the data folder as a back up ?
[00:32:46] <GitGud> manually
[00:39:25] <AlmightyOatmeal> GitGud: you would be better off dumping the data to a backup. if you just copy the directory then you risk inconsistent data amongst a number of other potential problems
[00:39:45] <AlmightyOatmeal> GitGud: referring to a backup vs. filesystem snapshotting
[00:39:54] <GitGud> "dumping the data to a backup" meaning?
[00:40:47] <AlmightyOatmeal> GitGud: actually i am wrong. copying the data directory isn't as fragile as it is on a larger RDBMS
[00:41:02] <GitGud> i see
[00:41:03] <AlmightyOatmeal> GitGud: https://docs.mongodb.com/manual/core/backups/ and more of what i was referring to: https://docs.mongodb.com/manual/tutorial/backup-and-restore-tools/
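The linked backup-tools page is mainly about `mongodump`/`mongorestore`; the "dumping the data to a backup" mentioned above looks roughly like this (paths hypothetical):

```shell
# Dump all databases from a running mongod into BSON files:
mongodump --host localhost --port 27017 --out /backups/2016-09-29

# Restore later with:
mongorestore /backups/2016-09-29
```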
[00:41:21] <GitGud> well AlmightyOatmeal what im doing is just more a rough copy not for a production solution
[00:41:34] <GitGud> because in production env i'm going to add replica sets anyway
[00:43:34] <AlmightyOatmeal> GitGud: makes sense. looks like mongo gives you a plethora of options then :)
[00:44:37] <GitGud> AlmightyOatmeal, basically have 1 db. 1 repl set. and 1 backup file system copy i do every week
[00:45:07] <AlmightyOatmeal> GitGud: if you use ZFS then you can send filesystem snapshots to a remote host ;)
[00:46:08] <GitGud> AlmightyOatmeal, on the regular ?
[00:49:05] <AlmightyOatmeal> GitGud: https://docs.oracle.com/cd/E18752_01/html/819-5461/gbchx.html
[00:50:20] <GitGud> thanks AlmightyOatmeal i will look into it
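The Oracle page covers `zfs send`/`zfs receive`; the remote-snapshot idea, sketched with hypothetical pool and dataset names:

```shell
# Take a snapshot, then stream it to a remote host over ssh:
zfs snapshot tank/mongodb@weekly
zfs send tank/mongodb@weekly | ssh backuphost zfs receive backup/mongodb

# Subsequent runs can send only the delta since the last snapshot:
zfs send -i tank/mongodb@lastweek tank/mongodb@weekly | ssh backuphost zfs receive backup/mongodb
```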
[06:41:02] <crazyadm> dbrunCommand{addShard: "192.168.0.41:27018",maxSize: 0,name: "shard1"};
[06:41:07] <crazyadm> why is this one not right?
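Two problems with that line, independent of the replica-set issue that surfaces later: it isn't valid shell syntax (`db.runCommand` takes a document argument), and `addShard` must be sent to the admin database of a mongos. A corrected sketch (`maxSize: 0` is the default, meaning no limit, so it can be omitted):

```javascript
// Run in a shell connected to a mongos, not a mongod:
db.adminCommand({ addShard: "192.168.0.41:27018", name: "shard1" })
// or, equivalently, with the helper:
sh.addShard("192.168.0.41:27018")
```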
[06:54:41] <crazyadm> someone teach me how to add a shard, add replication for the shard, and enable them
[06:54:52] <crazyadm> so i can start making database and import mysql
[06:55:32] <joannac> https://docs.mongodb.com/manual/tutorial/add-shards-to-shard-cluster/
[06:56:07] <joannac> https://docs.mongodb.com/manual/tutorial/deploy-shard-cluster/
[06:57:30] <crazyadm> "errmsg" : "could not find host matching read preference { mode: \"primary\" } for set shard1",
[06:59:18] <crazyadm> bin/mongod --shardsvr --dbpath /data0/mongodb/shard2db --logpath /data1/logs/shard2db.log --logappend --fork
[06:59:29] <crazyadm> is this the right way to start a shard
[07:01:38] <joannac> it's not a replica set
[07:01:44] <crazyadm> bin/mongod --shardsvr --replSet set1 --dbpath /data0/mongodb/repl11db --logpath /data1/logs/repl11db.log --logappend --fork -port 28018
[07:01:59] <crazyadm> this one is on another server for replica set
[07:02:05] <crazyadm> on 192.168.0.45
[07:02:10] <joannac> https://docs.mongodb.com/manual/tutorial/deploy-replica-set/
[07:02:22] <crazyadm> replica first, shard after?
[07:02:32] <joannac> "...for set shard1"
[07:02:37] <joannac> --replSet set1
[07:02:48] <joannac> pick a name and stick with it
[07:03:57] <crazyadm> sh.addShard( "set1/192.168.0.41:27018" )
[07:04:01] <crazyadm> still same error
[07:04:14] <crazyadm> "errmsg" : "could not find host matching read preference { mode: \"primary\" } for set set1",
[07:07:26] <joannac> connect to 192.168.0.41:27018 and run rs.status()
[07:08:50] <crazyadm> "ok" : 0, "errmsg" : "not running with --replSet", "code" : 76
[07:09:59] <joannac> haven't we already had this conversation then?
[07:10:14] <joannac> if you want to add it as a replica set, it has to actually be a replica set
[07:10:45] <crazyadm> i only want to add shard first
[07:10:58] <crazyadm> bin/mongod --shardsvr --dbpath /data0/mongodb/shard1db --logpath /data1/logs/shard1db.log --logappend --fork
[07:11:06] <crazyadm> this is master shard1
[07:11:26] <joannac> then don't add it as a replica set?
[07:11:29] <joannac> although
[07:11:45] <joannac> you're going to find it much more difficult to convert it to a replica set after adding as a standalone
[07:12:12] <joannac> why not set it up as a replica set first?
[07:12:47] <crazyadm> both master and slave should have --replSet?
[07:13:27] <joannac> master-slave has been deprecated for years
[07:13:37] <joannac> all nodes in a replica set must have --replSet
[07:14:16] <crazyadm> ok then, replica first, then shard?
[07:15:07] <crazyadm> ok?
[07:18:53] <crazyadm> ok joannac ?
[07:20:13] <joannac> crazyadm: just follow the tutorials I linked you to
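The order being suggested, condensed into a hedged sketch (hosts, ports, and paths follow the examples above; option details per the 3.2 tutorials): build the replica set first, then add the whole set as a shard.

```shell
# 1. Start EVERY member with the same --replSet name:
bin/mongod --shardsvr --replSet set1 --dbpath /data0/mongodb/repl11db \
    --logpath /data1/logs/repl11db.log --logappend --fork --port 28018

# 2. Connect a mongo shell to one member and initiate the set:
#      rs.initiate({_id: "set1", members: [
#        {_id: 0, host: "192.168.0.41:28018"},
#        {_id: 1, host: "192.168.0.45:28018"}]})

# 3. From a mongos, add the whole set as one shard, by set name:
#      sh.addShard("set1/192.168.0.41:28018")
```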
[07:54:06] <Rc43> Hi
[07:57:00] <crazyadm> replSetReconfig should only be run on PRIMARY, but my state is SECONDARY; use the \"force\" argument to override
[07:57:08] <crazyadm> when i add replica
[09:05:08] <crazyadm> is adding arbiter same as normal replica?
[09:26:21] <Derick> crazyadm: no
[09:26:26] <Derick> crazyadm: it contains no data
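Concretely: the arbiter's mongod still runs with `--replSet set1`, but it is added with its own helper and never holds data (host hypothetical):

```javascript
// From the current primary:
rs.addArb("192.168.0.46:30000")
```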
[09:45:18] <crazyadm> ok i got them all added to replication
[09:45:35] <crazyadm> now, how to make set1 and set2 shards
[09:51:22] <crazyadm> is there tutorial on sharding
[09:52:57] <crazyadm> when i add a shard, it must be primary or secondary?
[12:37:39] <jrmg> Hello everyone. I've got a question on what's the best way to rebuild a collection that acts as a cache layer. The collection contains aggregations results from another collection (>1M documents). What would it be the best way to rebuild the entire cache collection preventing the process from taking ages? via shell? Multithread client?
[13:17:12] <cheeser> /1/1
[13:19:27] <mumla> hi everyone! does somebody know how to execute db.stats() in java with the mongodb-driver? I just downloaded the newest version 3.4.0beta and the command "getStats" doesn't exist as expected according to the docs.
[13:26:06] <cheeser> what are you calling getStats() on? pastebin your code somewhere.
[13:34:55] <mumla> http://pastebin.com/ftEh6dbm
[13:38:45] <mumla> @cheeser: see above ;) (if already noticed, forget this msg)
[14:08:35] <mumla> ok, no answer :(
[14:51:44] <AlmightyOatmeal> is there a secret to using a compound index? i'm trying to query a single field that is part of a compound index but the query planner is telling me that it's going to scan the entire collection instead of using the compound index :(
[15:09:37] <StephenLynx> how id the compound index built and what is the query?
[15:09:41] <StephenLynx> how is*
[15:14:44] <AlmightyOatmeal> StephenLynx: what do you mean by "how is the compound index built"? it's a simple find() with a field that is in the compound index and a corresponding value
[15:15:15] <StephenLynx> no
[15:15:21] <StephenLynx> when you created the index.
[15:15:26] <StephenLynx> that's important.
[15:15:37] <StephenLynx> the fields, the order
[15:15:52] <AlmightyOatmeal> oh. the field i'm querying is one of the last fields in the compound index
[15:16:18] <AlmightyOatmeal> there are maybe 22 fields in the compound index
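That is almost certainly the problem: a compound index can only satisfy queries on a leading *prefix* of its fields, so a filter on one of the last of 22 fields cannot use it. A sketch with hypothetical field names:

```javascript
db.coll.createIndex({ a: 1, b: 1, c: 1 })

db.coll.find({ a: 5 })        // can use the index (prefix {a})
db.coll.find({ a: 5, b: 7 })  // can use the index (prefix {a, b})
db.coll.find({ c: 9 })        // c alone is not a prefix -> collection scan
```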
[15:33:33] <AlmightyOatmeal> mongo decided to stop returning results around 14M documents which is under half of the documents in that collection. so now i need to re-run a find() and upsert all of the 14M documents that mongo has been spending the past 12 hours spitting out at me.
[15:33:37] <AlmightyOatmeal> this is sad.
[15:33:48] <AlmightyOatmeal> mongo is so flipping slow and it's barely using any CPU or disk I/O
[15:33:51] <AlmightyOatmeal> :(
[15:33:58] <GothAlice> One might suggest you have incorrectly structured your data.
[15:36:24] <AlmightyOatmeal> GothAlice: if i'm doing a find() on the entire collection, wouldn't the data structure be irrelevant?
[15:36:48] <GothAlice> AlmightyOatmeal: Not if you're involving a 22-field compound index, no.
[15:37:18] <AlmightyOatmeal> GothAlice: it's doing a collection scan so the index would be irrelevant and i'm not trying to filter on any criteria
[15:37:37] <AlmightyOatmeal> and if that were the case, i would expect the CPU usage to jump, no?
[15:38:14] <GothAlice> Does the data fit entirely in RAM? If not, you're limited by repeated page faulting.
[15:38:21] <GothAlice> (Thus low CPU usage.)
[15:39:03] <AlmightyOatmeal> GothAlice: the entire collection won't fit in RAM but that's a good suggestion, let me check my vm stats
[15:40:24] <GothAlice> However, more generally, a process which requires examining every single record in a collection of millions in a single pass is sub-optimal. At a minimum you're going to need to track progress, handle cursor timeouts, and retry. If bulk write and bulk read is purely the goal, you might also investigate capped collections, which are somewhat more efficient for FIFO use.
[15:44:44] <GothAlice> (Capped collections also provide a means to stream process data instead of buffering it all then processing it all in one go.)
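One common shape for the progress tracking described here is to page through the collection in `_id` order instead of holding one enormous cursor, so a timeout or crash only costs the current batch; a hedged mongo-shell sketch (collection name and batch size hypothetical):

```javascript
// Resumable scan: remember the last _id processed and restart from there.
var lastId = null;   // persist this somewhere durable between runs
while (true) {
  var query = lastId === null ? {} : { _id: { $gt: lastId } };
  var batch = db.coll.find(query).sort({ _id: 1 }).limit(1000).toArray();
  if (batch.length === 0) break;
  batch.forEach(function (doc) {
    // ... process / upsert doc ...
    lastId = doc._id;
  });
}
```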
[15:45:37] <AlmightyOatmeal> GothAlice: mongo is grabbing ~1k documents at a time and even if the entire collection doesn't fit into memory, a majority share can fit in memory but that still wouldn't explain the slowness.
[15:46:16] <AlmightyOatmeal> ~8k-~21k faults :|
[15:48:30] <GothAlice> http://s.webcore.io/1x0q2c002y0E/ludicrous-kill.gif — that's a lot of faults!
[15:48:43] <GothAlice> (The image being MongoDB savagely beating your disks.)
[15:49:07] <AlmightyOatmeal> lol
[15:49:39] <AlmightyOatmeal> GothAlice: but i have very little disk I/O happening.
[15:50:41] <GothAlice> AlmightyOatmeal: Faults represent access to memory that was not loaded into the process. It might already be in cache, so the fault would "link" the missing memory into the MongoDB process. This wouldn't show up as disk IO directly.
[15:51:59] <GothAlice> However faulting freezes the thread that faults until the linking is complete, so, it's terrible for performance.
[15:52:31] <AlmightyOatmeal> GothAlice: oh, now that does make sense
[15:52:38] <GothAlice> (It might show up as io-wait, but not as literal IO bus activity.)
[15:53:42] <AlmightyOatmeal> i really really really wish WT supported parallel collections...
[15:53:55] <AlmightyOatmeal> err, parallel collection scans
[17:20:16] <StephenLynx> i told him to work in batches, though
[17:20:47] <StephenLynx> and keeping track of what has been processed.
[17:20:57] <StephenLynx> instead of "GIMME ALL"
[17:32:06] <FrancescoV> Hi all, I try to use mongodb in a docker container but I got this error: 'KeyError: 'DB_PORT_27017_TCP_ADDR'' using this code: client = MongoClient(os.environ['DB_PORT_27017_TCP_ADDR'],27017)
[17:32:09] <FrancescoV> any advice what it should be?
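The KeyError means `DB_PORT_27017_TCP_ADDR` isn't set inside the container; Docker only injects those `*_PORT_*` variables for legacy `--link`ed containers. A hedged Python sketch that falls back to a default instead of crashing (the default hostname is an assumption):

```python
import os

# Docker's legacy --link feature injects DB_PORT_27017_TCP_ADDR; with
# user-defined networks it does not exist, so fall back to a default.
host = os.environ.get("DB_PORT_27017_TCP_ADDR", "localhost")
port = int(os.environ.get("DB_PORT_27017_TCP_PORT", "27017"))
uri = "mongodb://%s:%d" % (host, port)
print(uri)
# then: client = MongoClient(uri)
```

On a user-defined Docker network the service name (e.g. `db`) resolves via DNS, so `host` can simply be that name.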
[18:07:21] <StephenLynx> don't use docker.
[18:07:35] <cheeser> not helpful
[18:07:54] <StephenLynx> says you
[18:08:19] <cheeser> yes. says me. and i'd guess FrancescoV would say the same.
[18:10:02] <StephenLynx> if he were to stop using docker and it worked, I am quite sure he wouldn't say that's not helpful.
[18:11:23] <FrancescoV> I have no experience with mongodb, just starting a basic prototype project using tensorflow & docker
[18:49:26] <StephenLynx> even more reason to not use docker.
[18:49:56] <StephenLynx> http://www.boycottdocker.org/
[18:55:58] <cheeser> dear lord
[19:46:18] <echo083> hello
[20:27:26] <offlim> is there a way to upgrade mongodb from the existing community version to the exterprise version
[20:27:48] <offlim> enterprise version?*
[20:27:52] <cheeser> replace the binaries. restart.
[21:00:40] <edrocks> do you have to do anything special to upgrade a replica set's `protocolVersion` to 1? I just upgraded everything to mongodb 3.2.9
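For reference, per the 3.2 release notes this is a reconfig run on the primary once every member is on 3.2+; a sketch:

```javascript
// On the primary:
cfg = rs.conf()
cfg.protocolVersion = 1
rs.reconfig(cfg)
```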
[22:22:19] <DarkCthulhu> I have a question about mongodb replicasets. If I'm running a replicaset in production, I need to always send writes to the master right? How do I discover the master?
[22:22:28] <DarkCthulhu> How does a production setup work, rather?
[22:23:51] <joannac> connect using a replica set connection, the driver will determine which one is the primary
[22:29:01] <DarkCthulhu> joannac, do you have docs for this? How does the driver figure it out?
[22:30:08] <joannac> https://docs.mongodb.com/manual/reference/connection-string/#replica-set-option
[22:30:36] <joannac> it runs isMaster() on each and figures out the replica set topology
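In other words: the client is given seed hosts plus the set name, runs `isMaster` against them to map the topology, and routes writes to whichever member reports itself primary. A sketch of such a connection string (hosts and set name hypothetical):

```
mongodb://192.168.0.41:27017,192.168.0.45:27017/mydb?replicaSet=set1
```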
[22:31:02] <DarkCthulhu> joannac, Ah! and typically, is that what a connector library for mongo also would do?
[22:31:29] <joannac> no idea
[22:31:39] <joannac> depends if they implemented it correctly or not
[22:35:01] <DarkCthulhu> joannac, Makes sense. Thanks! That's a good starting point for me.