#mongodb logs for Friday the 28th of August, 2015

[00:30:30] <justinsf> Hello, is there any way to speed up creating indexes after running mongoimport? All of our indexes have { background: true }. Our server has 16 CPUs, but I noticed only 1 CPU is being used to index. Is there any way to make creating background indexes multi-threaded?
[00:33:37] <Doyle> justinsf, indexing during initial sync drives me nuts. Same behavior. Get a DB, index, get a db, index, seemingly single threaded
[00:34:08] <Doyle> I know there was supposed to be an upgrade to initial sync between 2.6.? and 2.6.7 to enable 32 threads of replication, but I don't see it.
[00:34:19] <justinsf> Like I think this literally could take hours to create the indexes.
[00:34:23] <justinsf> We are running latest 3.0.6
[00:35:03] <Doyle> Yea, indexing sucks
[00:35:09] <justinsf> :) LOL
[00:35:29] <justinsf> I'm guessing indexes is a big ole global lock or something
[00:35:41] <Doyle> 2015-08-28T00:32:08.593+0000 [clientcursormon] replication threads:32 << :P
[00:36:41] <Doyle> It shouldn't be
[00:36:42] <justinsf> Is there a configuration option I need to set in the config to increase the concurrency of creating indexes?
[00:37:32] <Doyle> good Q
[00:39:24] <justinsf> from the doc: " The background index operation uses an incremental approach that is slower than the normal “foreground” index builds. If the index is larger than the available RAM, then the incremental process can take much longer than the foreground build."
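For reference, an index declared the way justinsf describes would be created like this in the mongo shell (the collection and field names here are placeholders, not taken from the log):

    db.mycollection.createIndex({ userId: 1 }, { background: true })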
[00:40:43] <Doyle> That's just the index though, not the working set for the index, right?
[00:40:51] <Doyle> The index should be tiny most of the time
[00:41:09] <justinsf> Yeah I mean I have a ton of free memory on the server
[00:41:20] <justinsf> yet it's still only using 1 CPU, pegged at 100%
[00:42:58] <Doyle> My data's just silly I guess. Is 20M objects a lot?
[00:44:15] <justinsf> Our mongodump is around 6 GB
[00:44:34] <Doyle> that should wrap in seconds then
[00:45:07] <justinsf> how do I tell what is going on? Are there commands I can run at the mongo shell?
[00:45:22] <Doyle> currentOp
[00:45:25] <Doyle> mongostat
[00:46:06] <Doyle> I like to run tmux with the following: top, iotop, iostat -xmt 1, iftop, tail -f mongod.log, mongostat, plus a pane for mongoshell
[00:46:48] <Doyle> the log will tell you pretty much what's going on
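Alongside mongostat and the log, the currentOp command Doyle mentions can show how far an index build has progressed. A rough shell sketch; the exact "msg" text differs between versions, so treat the filter as an approximation:

    // list in-progress operations whose status message looks like an index build
    db.currentOp().inprog.filter(function (op) {
      return op.msg && op.msg.indexOf("Index Build") !== -1;
    })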
[00:55:50] <justinsf> +1 mongostat
[00:55:52] <justinsf> thanks
[00:56:30] <Doyle> np
[00:56:47] <Doyle> keep an eye on your IO wait
[00:57:01] <justinsf> Though when I try and run mongostat with --username
[00:57:03] <justinsf> I am getting:
[00:57:04] <justinsf> 2015-08-28T00:53:33.236+0000 --authenticationDatabase is required when authenticating against a non $external database
[00:57:05] <Doyle> in top, the 0x0 wa number
[00:57:37] <Doyle> ah, use --authenticationDatabase admin
[00:57:53] <Doyle> it'll default to test or some junk in a lot of cases
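Putting Doyle's suggestion together, the full mongostat invocation would look something like this (host, port and credentials are placeholders):

    mongostat --host localhost --port 27017 --username myUser --password myPassword --authenticationDatabase admin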
[00:58:11] <justinsf> *0 *0 *0 *0 1 2|0 0 48.1G 97.0G 13.2G 0 0|0 0|0 300b 11k 27 brandcast PRI 00:54:5
[00:58:22] <Doyle> there you go
[00:58:34] <justinsf> what is mapped and virtual size
[00:58:40] <justinsf> mapped is 48GB
[00:58:47] <justinsf> and virtual is 97GB
[00:58:50] <Doyle> https://docs.mongodb.org/v3.0/reference/program/mongostat/
[00:59:26] <Doyle> the qr|qw and ar|aw are important columns. If your server's getting slogged, those values will just climb
[01:22:34] <morenoh149> brandcast :)
[02:10:13] <Lnxmad> hey, anyone have advice on a "user" collection. storing username, hashed password, but also having an embedded "profile".
[02:10:21] <Lnxmad> vs a separate collection for the profile
[02:12:00] <Lnxmad> I am contemplating, in case I ever want to offload the user collection to a "more" secure db
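A sketch of the two layouts Lnxmad is weighing; all field names and values below are illustrative, not from the log:

    // embedded profile: one document, one read fetches everything
    db.users.insert({
      username: "alice",
      passwordHash: "$2a$10$...",
      profile: { displayName: "Alice", bio: "..." }
    })

    // separate collections: credentials could later be moved to a more locked-down db
    db.users.insert({ _id: "alice", passwordHash: "$2a$10$..." })
    db.profiles.insert({ userId: "alice", displayName: "Alice", bio: "..." })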
[02:46:30] <quasiben> hi
[02:47:29] <quasiben> I'm working with an older version of mongo: 2.5 or 2.6? in any case, i'm helping to develop a web application to store 3rd party auth information and was looking for guidance
[02:48:42] <quasiben> So UserA has login credentials to the site and they store credentials for digital ocean or some other service. Are there recommendations on how to store those 3rd party creds in a way which is secure and can only be read by UserA?
[03:14:44] <sellout> Well, this was a surprise, `isNumber(NumberInt(0)) == false`
[13:27:22] <doc_tuna> mbwe: changed how?
[14:15:15] <deathanchor> what's the operator to query that an array field is not empty?
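deathanchor's question goes unanswered in the log; one common way to say "this array field has at least one element" is to require element 0 to exist ("tags" is a placeholder field name):

    db.coll.find({ "tags.0": { $exists: true } })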
[14:16:53] <mbwe> doc_tuna: say i have a document like {name: "mbwe", age: 12} and if i update that document with exactly the same content for that document, i don't want that update to go through
[14:18:15] <doc_tuna> dont send it then
[14:18:30] <doc_tuna> you should keep track of whether it needs writing or not yourself
[14:19:31] <doc_tuna> it's a no-op if you send it with the same contents, but you are still wasting the database's cpu and network by sending the update; it would be better not to send it
[14:27:34] <mbwe> i can't because those documents get updated by different apps, doc_tuna
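Since mbwe cannot coordinate the different apps, one option (a sketch, not something suggested in the log) is to push the check into the update filter itself, using mbwe's example document; the update then matches nothing when the stored values are already identical:

    db.people.update(
      { _id: 1, $nor: [ { name: "mbwe", age: 12 } ] },  // matches only if the stored values differ
      { $set: { name: "mbwe", age: 12 } }
    )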
[15:52:22] <hernan82arg> hi @all
[15:52:50] <hernan82arg> I've been researching a bit and I couldn't find any document on how to do what I want to do
[15:53:12] <hernan82arg> I'm building a three replica set configuration of mongodb
[15:54:02] <hernan82arg> and I want to configure an endpoint (DNS entry) that points to the primary node of the mongo cluster
[15:54:13] <hernan82arg> can you point me in the right direction?
[15:55:20] <cheeser> the primary could switch at any moment...
[15:55:34] <hernan82arg> yes
[15:55:56] <hernan82arg> so the DNS entry should follow the primary
[15:56:39] <Derick> no, don't try to outsmart the replicasets and leave it to the drivers to find out where the master is
[15:58:36] <hernan82arg> okay, so the application connecting to the cluster should know which node is primary then?
[16:02:48] <Derick> hernan82arg: the driver will know and/or figure it out
[16:02:59] <Derick> you give your app a connection string with the majority of the hostnames
[16:03:03] <Derick> for example:
[16:03:23] <Derick> mongodb://one.yourdomain.com,two.yourdomain.com:27017/
[16:03:27] <Derick> (and the rest)
[16:03:32] <hernan82arg> awesome
[16:03:51] <hernan82arg> that's a lot easier than I thought.
[16:03:53] <hernan82arg> thanks!
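For illustration only (not shown in the log), the same idea with the Node.js driver: list several members in the connection string and let the driver track the primary across failovers. Hostnames, database and replica set name are placeholders:

    var MongoClient = require("mongodb").MongoClient;

    var url = "mongodb://one.yourdomain.com:27017,two.yourdomain.com:27017," +
              "three.yourdomain.com:27017/mydb?replicaSet=rs0";

    MongoClient.connect(url, function (err, db) {
      if (err) throw err;
      // writes and primary reads are routed to whichever member is currently primary
      db.collection("things").findOne({}, function (err, doc) {
        console.log(doc);
        db.close();
      });
    });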
[19:16:58] <terminal_echo> when you export mongo documents into json, is it actual json?
[19:17:14] <terminal_echo> i have done so but it seems to be malformed
[19:20:26] <StephenLynx> no, it's bson
[19:20:34] <StephenLynx> a superset of json.
[19:20:39] <StephenLynx> it isn't all valid json.
[19:20:41] <terminal_echo> what about csv?
[19:20:49] <StephenLynx> no idea, never used mongo export.
[19:20:50] <terminal_echo> that's all valid csv?
[19:21:08] <StephenLynx> however
[19:21:13] <StephenLynx> it might actually be valid json, though
[19:21:30] <terminal_echo> lol what
[19:21:31] <StephenLynx> it might use objects to abstract the bson types into valid json types
[19:21:41] <StephenLynx> dunno
[19:40:09] <cheeser> terminal_echo: yes, it's "actual json" though the file, itself, is not
[19:40:26] <cheeser> each line is a valid json document but collectively the file won't parse as a single json doc.
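A small illustration of cheeser's point, assuming Node.js and a hypothetical export.json produced by mongoexport: parse the file one line at a time rather than as a single JSON value:

    var fs = require("fs");

    // each non-empty line is its own JSON document
    var docs = fs.readFileSync("export.json", "utf8")
      .split("\n")
      .filter(function (line) { return line.trim() !== ""; })
      .map(function (line) { return JSON.parse(line); });

    console.log(docs.length + " documents parsed");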
[19:45:29] <terminal_echo> i'm going to translate a mongodb to a sql database
[19:45:58] <terminal_echo> through csv export, rsync, BULK insert of the csv with SQL, and updating the info with temporary tables
[19:46:11] <terminal_echo> someone tell me that's crazy because it seems extremely simple and elegant to me
[20:02:27] <StephenLynx> I wouldn't do it like that, though.
[20:02:47] <StephenLynx> I would write a script to migrate it streaming documents
[20:03:02] <StephenLynx> like
[20:03:08] <StephenLynx> "read doc, insert, repeat"
[20:11:01] <cheeser> i'd look at sqoop or flume which already do something like this.
[20:16:30] <terminal_echo> maybe, but the csv export is streaming, the rsync is streaming, and the bulk insert is quite fast..
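A rough mongo-shell sketch of the "read doc, insert, repeat" approach StephenLynx describes; insertIntoSql and the field mapping are hypothetical stand-ins for whatever SQL client the migration actually uses:

    db.sourceCollection.find().forEach(function (doc) {
      insertIntoSql("target_table", {
        id: doc._id.str,      // hex string of the ObjectId
        name: doc.name        // map the remaining fields to columns as needed
      });
    });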
[20:29:38] <Kamuela> can any basic mongodb instance be connected to with mongodb:// ?
[21:11:32] <saml> what's downside of using string as _id ? Most of my queries will be db.docs.find({_id: <url> })
[21:12:06] <saml> or i can leave _id alone and create a new field, url, which has a unique index. not sure what's better
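The two layouts saml describes, side by side in the shell (the fetchedAt field is made up for illustration):

    // option 1: the url itself as _id; it is automatically unique and
    // db.docs.find({_id: url}) uses the built-in _id index
    db.docs.insert({ _id: "http://example.com/page", fetchedAt: new Date() })

    // option 2: keep the default ObjectId _id and add a separate unique index on url
    db.docs.createIndex({ url: 1 }, { unique: true })
    db.docs.insert({ url: "http://example.com/page", fetchedAt: new Date() })
    db.docs.find({ url: "http://example.com/page" })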
[21:12:26] <terminal_echo> cheeser: thanks for the sqoop idea, this is definitely the way to go, i take it it's easy to transfer from mongod -> MSSQL?
[21:23:31] <MacWinne_> if I want to insert an activity document into a collection, but I don't want to insert the document if a specified set of fields already matches in the collection, what would be the best way? I want to do this atomically... I'm not looking to upsert since I don't want to update the existing record
[21:23:52] <MacWinne_> is there a specific collection command I need?
[21:26:11] <MacWinne_> also I can't use indexes for it because the sets of fields I'm looking for can vary
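One common pattern for this (not confirmed anywhere in the log) is an upsert whose update document contains only $setOnInsert, so an existing matching document is never modified; the field names below are invented. On insert, the equality fields from the filter become part of the new document. Note that without a unique index, two concurrent upserts can still race and both insert:

    db.activity.update(
      { userId: 42, action: "login", day: "2015-08-28" },  // the fields that must not already match together
      { $setOnInsert: { createdAt: new Date() } },          // extra fields only for a newly inserted doc
      { upsert: true }
    )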
[21:38:50] <diegoaguilar> hello, I have documents in a collection with an onDemandPoints attribute looking like this
[21:38:51] <diegoaguilar> Right now I paid you the normal way, and Monday or Tuesday I hope to be able to confirm when we switch to asimilados
[21:38:56] <diegoaguilar> lol, please ignore that
[21:39:04] <diegoaguilar> like this: http://kopy.io/XGncn
[21:40:24] <diegoaguilar> I must say that onDemandPoints is an attribute for each object inside a players array attribute
[21:40:56] <diegoaguilar> so document looks like {players: [{id: "", onDemandPoints: {totalPoints: 15}}]}
[21:42:17] <diegoaguilar> how can I find documents where an element of the players array has id "something" and onDemandPoints with value x?
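A sketch of one way to phrase that query: $elemMatch makes both conditions apply to the same element of the players array (the collection name and the values are placeholders):

    db.games.find({
      players: {
        $elemMatch: {
          id: "something",
          "onDemandPoints.totalPoints": 15
        }
      }
    })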
[21:57:19] <Nikesh> Has anyone been successful in getting MongoDB working with Ubuntu 15.04?
[21:57:29] <diegoaguilar> I've not tried it, Nikesh
[21:57:34] <diegoaguilar> what problem are you having?
[21:58:37] <Nikesh> Well, I'm just considering which Ubuntu to install on a new machine. I've seen that Mongo doesn't officially support 15.04 yet, so I wanted to see if others got it to work
[21:58:43] <Nikesh> Otherwise I'll just use Ubuntu 14.04
[21:58:53] <diegoaguilar> oh well, the best you can do is to install one of the LTS releases
[21:58:57] <diegoaguilar> 12.04 or 14.04
[21:59:15] <Nikesh> OK
[23:01:16] <roo00t> i don't know mongodb right now, but i want to use it with my RESTful API. Could someone tell me what the json file looks like so that i can test my code?
[23:01:43] <roo00t> any resource for quick guidance?
[23:09:54] <Boomtime> roo00t: json.org?
[23:11:32] <roo00t> Boomtime: are there mongo dump available for testing?
[23:12:00] <roo00t> i searched but it gets me to dump command for db
[23:12:31] <Boomtime> testing what? it's a database, so you get out what you put in
[23:13:44] <Boomtime> you have bought an empty filing cabinet and you are asking what is the content of the pages that are put in it - there is nothing in it, until you put it there
[23:14:49] <roo00t> okay
[23:14:57] <roo00t> got it
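For testing, a file that mongoimport accepts is simply one JSON document per line; a made-up example (saved as, say, sample.json):

    { "name": "alpha", "score": 10 }
    { "name": "beta", "score": 7 }

which could then be loaded with something like:

    mongoimport --db test --collection samples --file sample.json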
[23:35:10] <roo00t> i had a problem installing mongodb on ubuntu 15.04
[23:35:40] <roo00t> https://www.mongodb.org/downloads#development still doesn't have a stable version for 15.04
[23:36:50] <roo00t> i was getting failed to fetch URL....
[23:37:27] <roo00t> is there any other way to install it on 15.04 that won't throw errors?