PMXBOT Log file Viewer


#mongodb logs for Monday the 4th of July, 2016

[04:36:40] <Trinity> hi does anyone know how the node.js mongodb driver works for objectid generation? Specifically, I want to know whether or not ObjectID's last 4 bytes are incremental or randomly assigned
[05:42:27] <GothAlice> Trinity: https://docs.mongodb.com/manual/reference/bson-types/#objectid < ObjectId is a compact compound value in 4 parts, one of which is… pseudo-incremental.
[05:42:46] <GothAlice> It starts at a random point on each process init.
[05:42:52] <Trinity> GothAlice, right but it differs between drivers iirc
[05:43:03] <Trinity> either way, I looked into the source, and node.js implements it incrementally
[05:43:08] <GothAlice> No, all drivers should generate them identically.
[05:43:17] <GothAlice> It always starts at zero?
[05:43:37] <Trinity> GothAlice, http://stackoverflow.com/a/5694803
[05:43:44] <Trinity> second paragraph
[05:44:09] <Trinity> GothAlice, from what I saw in the BSON script, the counter was being randomized and then incremented from there on
[05:44:17] <Trinity> looped on 0xFFFFFF
[05:45:11] <GothAlice> Aye, however, that's not by itself unique, and was never claimed to be; the whole ObjectId is what's unique, and it will be for up to 16.78 million ObjectId generations within a single second.
[05:45:27] <GothAlice> (Within a single process on a single host.)
[05:45:52] <Trinity> GothAlice, huh?
[05:45:58] <Trinity> how does that relate to what i'm saying?
[05:48:32] <GothAlice> I already answered your initial question with my first statement: it's pseudo-incremental, starting at a random point, and followed it up with, effectively, "and it doesn't matter, because you'd still need to generate 16.78 million ObjectIds within a single process-node-second to generate a conflict." There's a timestamp value included, too.
[05:48:44] <GothAlice> See the linked SO answer's first point.
[05:48:59] <GothAlice> (First numbered point, that is.)
[05:49:34] <Trinity> hmm, you must've misunderstood then. I was asking whether all driver implementations follow a counter, or whether they use a random value instead for the last 3 bytes of the ObjectID
[05:50:26] <GothAlice> AFAIK all modern drivers operate correctly. They've gotten quite good at unifying API approaches.
[05:52:11] <GothAlice> (I'm actually in the process of deprecating a wrapping layer I was using because the base driver has improved so much.) The Python 'bson' package, provided by pymongo, operates "correctly" (i.e., according to the spec, starting at a random point and incrementing and wrapping for the life of the process).
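The generation scheme GothAlice describes can be sketched in pure Python (a simplified illustration of the spec'd layout, not pymongo's actual code): a 4-byte big-endian timestamp, 5 bytes of per-process random data, and a 3-byte counter seeded at a random point and wrapping at 0xFFFFFF.

```python
import os
import struct
import time

# Per-process state: 5 random bytes and a counter seeded at a random
# point, as described above (simplified sketch, not pymongo's code).
_random_bytes = os.urandom(5)
_counter = int.from_bytes(os.urandom(3), "big")

def generate_objectid() -> bytes:
    """Return a 12-byte ObjectId-like value."""
    global _counter
    oid = struct.pack(">I", int(time.time()))  # 4-byte timestamp (seconds)
    oid += _random_bytes                       # 5 random bytes, fixed per process
    oid += _counter.to_bytes(3, "big")         # 3-byte counter
    _counter = (_counter + 1) % 0x1000000      # wrap after 0xFFFFFF
    return oid
```

The 3-byte counter gives 2^24 = 16,777,216 distinct values, which is the "16.78 million per process-second" figure mentioned earlier.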
[05:53:27] <Trinity> GothAlice, ah, thanks :) I guess I was just concerned due to outdated data
[05:53:35] <GothAlice> And the most I've ever been able to insert in a single second was 1.9 million records, so not even close to actually overflowing. ;P
[05:53:55] <GothAlice> That answer is from 2011, yeah; it's pretty old.
[06:40:59] <sumi> hello
[10:07:52] <kurushiyama> sumi Hello!
[10:27:01] <krion> hello
[10:27:24] <krion> my replset is ok, but when i do a db.match.find on secondary i got error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
[10:27:27] <krion> is this normal ?
[10:28:34] <krion> http://stackoverflow.com/questions/8990158/mongodb-replicates-and-error-err-not-master-and-slaveok-false-code
[10:28:38] <krion> ty stackoverflow
[10:31:38] <kurushiyama> krion Stop
[10:32:40] <kurushiyama> krion It is important to understand a few things, first of which is that a replica set is not intended as a means of load balancing, and that it comes with some intricacies if you (mis)use it that way.
[10:33:41] <krion> hello kurushiyama ;)
[10:33:45] <kurushiyama> krion Hoi!
[10:33:58] <krion> Thanks for your help last time.
[10:34:37] <kurushiyama> krion Well, I did not follow it through to the end, but I read that you did a full resync, which in your situation was most likely the best thing to do.
[10:34:38] <krion> Once again it's my customer who wants to use secondary as a readPreference in his PHP code.
[10:35:03] <krion> kurushiyama: yes, I used to do that almost every time I needed to reclaim disk space
[10:35:26] <krion> kurushiyama: I didn't use to turn off the balancer, though; I will do it every time now
[10:35:33] <kurushiyama> krion Well, he _can_ do so – but depending on his write concern, he might get outdated data from those queries.
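The staleness trade-off kurushiyama mentions can be modeled in a few lines of pure Python (a hypothetical sketch, not driver behavior; the node dicts and the choose_node helper are invented for illustration):

```python
# Hypothetical model of replication lag: each node holds the state it has
# applied so far, plus an operation counter.
primary = {"doc": {"balance": 120}, "optime": 100}
secondary = {"doc": {"balance": 100}, "optime": 97}  # 3 ops behind

def choose_node(read_preference: str) -> dict:
    """Route a read to the secondary or the primary, like a driver would."""
    return secondary if read_preference == "secondary" else primary

# A secondary read can observe state the primary has already moved past.
assert choose_node("secondary")["doc"]["balance"] == 100
assert choose_node("primary")["doc"]["balance"] == 120
```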
[10:35:55] <krion> Yes, same as mysql so far ;)
[10:36:09] <kurushiyama> krion Turning off the balancer is more a precaution – minimize moving parts during maintenance.
[10:36:35] <kurushiyama> krion Iirc, he already has a sharded cluster, no?
[10:37:34] <krion> This is another customer, no sharding for him iirc
[10:37:51] <krion> Yes, I confirm, no sharding for him.
[10:38:21] <kurushiyama> krion Ah, ok. Well, as long as he is aware of the fact that he might get outdated data...
[10:39:12] <kurushiyama> krion And I'd strongly advise against using a write concern equal to the number of data-bearing nodes to make sure the data is identical on all nodes at any given point in time.
[10:39:41] <krion> ok, thanks for the advice
[10:40:09] <kurushiyama> krion Do you understand why?
[10:41:01] <krion> I guess so, yes.
[10:41:44] <kurushiyama> krion Without meaning to patronize, rather mentoring: Why?
[10:44:36] <krion> To be sure all secondaries ack the primary.
[10:44:52] <krion> (He's got only two nodes btw, primary and secondary (and an arbiter))
[10:46:03] <kurushiyama> Nope
[10:46:37] <kurushiyama> If he had a write concern of 2, and one of the data bearing nodes would fail, what would happen?
[10:49:30] <krion> Never acked ?
[10:50:33] <kurushiyama> krion Worse. Autofail of writes.
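The failure mode being described can be sketched with a hypothetical helper (not a driver API): a write concern of w needs acknowledgements from w data-bearing nodes, so with two data nodes and w=2, a single node failure means no write can ever be acknowledged.

```python
def write_can_be_acknowledged(w: int, live_data_nodes: int) -> bool:
    """A write needs acks from w data-bearing nodes; if fewer are alive
    than w, the write can never satisfy its write concern."""
    return live_data_nodes >= w

# Two data-bearing nodes, write concern equal to that number: fine...
assert write_can_be_acknowledged(w=2, live_data_nodes=2)
# ...but if one node fails, no write can ever gather 2 acks.
assert not write_can_be_acknowledged(w=2, live_data_nodes=1)
# With w=1 the surviving node can still acknowledge writes.
assert write_can_be_acknowledged(w=1, live_data_nodes=1)
```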
[12:27:36] <agrandiere> hi
[12:28:05] <agrandiere> I try to install mongodb on mac but I have an error, an idea ? https://gist.github.com/anonymous/ae3a3d4800cb99fe3357f70b076b7b3c
[12:36:24] <koren> agrandiere: create the directory first
[12:36:36] <agrandiere> it works
[12:36:53] <koren> mkdir -p /data/db
[12:36:54] <agrandiere> it was just a file not found
[12:36:58] <koren> ok
[12:36:59] <agrandiere> :)
[12:37:41] <agrandiere> Is it possible to have an interface like MAMP for MongoDB, or not?
[12:38:07] <agrandiere> It looks like you are trying to access MongoDB over HTTP on the native driver port.
[12:57:34] <kurushiyama> agrandiere Huh? What are you trying to do? And as of ease of install, you might want to use MacPorts or Homebrew.
[12:58:24] <agrandiere> I'm trying to access my URI
[12:58:28] <agrandiere> like mongodb:// etc...
[12:58:37] <kurushiyama> agrandiere From your browser?
[12:58:43] <agrandiere> yes
[12:58:52] <kurushiyama> agrandiere That does not work.
[12:58:59] <agrandiere> oh
[12:59:03] <kurushiyama> agrandiere You need to use the shell
[12:59:12] <kurushiyama> agrandiere Or a driver for your programming language.
[12:59:24] <agrandiere> I'm following a tutorial
[12:59:29] <agrandiere> I found this : 'database': 'mongodb://noder:noderauth&54;proximus.modulusmongo.net:27017/so9pojy
[12:59:35] <kurushiyama> agrandiere There _are_ GUI tools, but it is best if you learn the mongo shell first.
[12:59:44] <agrandiere> I'm looking for my URI
[12:59:45] <kurushiyama> agrandiere Follow the docs.
[13:01:19] <kurushiyama> agrandiere It is _essential_ to understand MongoDB first, before you do actual coding. This is true for _any_ technology, and for persistence technologies even more so.
[13:03:04] <agrandiere> yes I have found this https://docs.mongodb.com/manual/reference/connection-string/
[13:03:15] <agrandiere> But i have just http://localhost:27017/
[13:03:26] <agrandiere> I don't know what to do
[13:05:33] <kurushiyama> agrandiere use the shell
[13:05:42] <kurushiyama> You can not access mongodb via http
[13:07:14] <agrandiere> oh it works 'database': 'mongodb://localhost/20717'
[13:14:20] <therue> what kind of database should I use if my site allows users to upload mp3s, like those playlist sites? NoSQL (MongoDB)? or will a traditional SQL-based database work?
[13:17:11] <kurushiyama> therue It will work, but MongoDB has a feature called GridFS which most likely suits your needs better.
[13:17:51] <kurushiyama> agrandiere it should be 'mongodb://localhost:27017'
[13:18:10] <kurushiyama> agrandiere Since with what you have now, you create a database called "20717"
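The mix-up is easy to see by splitting the URI with Python's stdlib parser (illustrative only; MongoDB drivers use their own URI parsing): in mongodb://localhost/20717 the path segment names the database, while a port belongs in the host part, as in mongodb://localhost:27017.

```python
from urllib.parse import urlsplit

wrong = urlsplit("mongodb://localhost/20717")
right = urlsplit("mongodb://localhost:27017/mydb")

# In the first URI, "20717" is the path (the database name), not a port.
print(wrong.hostname, wrong.port, wrong.path)  # localhost None /20717
print(right.hostname, right.port, right.path)  # localhost 27017 /mydb
```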
[13:19:53] <therue> kurushiyama: thanks, i'll look into it
[13:20:25] <kurushiyama> therue What do you want to do, and what language are you using for it?
[13:20:47] <therue> python
[13:21:02] <yopp> are there any official docker images for the mms agent?
[13:21:04] <yopp> and backup agent?
[13:21:20] <therue> kind of like a playlist site where users can upload music
[13:22:09] <therue> like spotify, 8track, etc
[13:22:24] <therue> general idea is similar at least
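For context on the GridFS suggestion above: GridFS stores each file as a metadata document plus fixed-size chunks (255 KB by default). A pure-Python sketch of the chunking idea (illustrative only; the real API is the gridfs module that ships with pymongo):

```python
CHUNK_SIZE = 255 * 1024  # GridFS's default chunk size

def split_into_chunks(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Yield (n, chunk) pairs the way GridFS numbers its chunks."""
    for offset in range(0, len(data), chunk_size):
        yield offset // chunk_size, data[offset:offset + chunk_size]

# A 600 KB "mp3" becomes three chunks: 255 KB, 255 KB, and the remainder.
fake_mp3 = bytes(600 * 1024)
chunks = list(split_into_chunks(fake_mp3))
```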
[14:18:18] <agrandiere> my database object is undefined, an idea ? https://gist.github.com/anonymous/9dee4f78fcc5ce87afced5de819fc286
[15:28:59] <sSs> hi there
[15:29:27] <sSs> need help setting up a cluster of 3 mongo servers (1 primary and 2 secondary)
[15:30:06] <sSs> I've set up mongo1 and 2 individually, how can I "join" them?
[15:40:21] <GothAlice> sSs: https://docs.mongodb.com/manual/tutorial/deploy-replica-set/ < this tutorial should get you started
[15:58:48] <sSs> ty
[17:13:16] <masood09_> Hi team
[17:13:46] <masood09_> Is there any limit on the number of tag ranges a collection can have in tag-aware sharding?
[17:14:00] <masood09_> I am looking @ 1,76000 ranges
[17:28:14] <Mmike> Hi, lads. When I delete a database from mongo, are the user grants for that database also deleted?
[17:28:38] <Mmike> I have a database and I screwed something up, so I want to restore from the dump - I want to delete the database, and then use mongorestore to just restore it
[17:39:07] <sweb> what's the most efficient file system for the data dir of mongodb?
[17:39:46] <sweb> ext4, btrfs, or raw (if that could be handled)
[17:41:53] <GothAlice> sweb: Unless you know how to configure some of the more esoteric btrfs options, I'd avoid it in favour of reiser or ext4 for MongoDB use.
[17:43:56] <Mmike> I'd avoid btrfs over all
[17:44:19] <Mmike> but for mongo it's not good; btrfs is a COW filesystem and mongo just doesn't like that
[17:45:10] <GothAlice> Well, yes. It's… finicky, not fully stabilized, and has some behaviours that utterly thrash database workloads. (Not to mention other uses, such as containers. It lacks page caching, handles small writes poorly, halves the performance of sequential writes, COW fragmentation, …)
[17:46:13] <GothAlice> sweb: Can't use a raw device (I've tried ;) due to a number of IO calls MongoDB makes (such as pre-allocation of stripes) which are impossible on a block device itself.
[17:47:16] <GothAlice> (COW = Copy On Write)
[17:59:39] <sweb> ty guys for the guidance ... seems MongoDB uses journaling
[21:21:48] <_matix> hey guys, i have a bunch of dumped collections of the form {prefix}\{collectionname}.bson and {prefix}\{collectionname}.metadata.json
[21:22:46] <_matix> how do i import these? is the {prefix} the db name somehow? if so, how do i get mongorestore to interpret the prefix as the db name?
[21:22:55] <_matix> and if not, is it okay to just rename the collections?
[21:36:34] <GothAlice> _matix: The safest bet is to import the data with collection names as-is, then use a renameCollection command to re-name it to the final name. There is almost guaranteed to be metadata in the backup referencing the name it expects. (I'm not 100% on that, but it's the "safe bet".)
[21:37:06] <GothAlice> Now, that's for the collection bit, {collectionname} in your examples. The database name (containing folder) can be re-named as you wish; it's effectively unused when importing.
[21:37:16] <_matix> GothAlice: that makes sense
[21:37:17] <_matix> thanks
[21:37:57] <GothAlice> _matix: Just remember to specify -d <dbname> to restore to in the mongorestore command line. ;)
[21:38:08] <_matix> yup got that one!
[21:57:34] <_matix> GothAlice: how do i deal with the illegal '\' in the collection name? I can't reference it. Have tried db['prefix\collectionname'] and that didn't work
[21:58:01] <GothAlice> The backslash symbol is used for escaping other symbols. To use a literal backslash in a string, you need two: \\
[21:58:19] <GothAlice> Second: never let that happen again. ;P Collection names should be "variable literal safe".
[21:58:44] <GothAlice> I.e. matching [a-zA-Z][a-zA-Z0-9]*
[21:58:59] <_matix> lol yeah someone sent me this dump
[21:59:04] <GothAlice> Matching [a-zA-Z][a-zA-Z0-9_-]* rather.
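Both points above, the backslash escaping and the "variable literal safe" pattern, can be checked in pure Python (the same string-escaping rule applies in the mongo shell's JavaScript):

```python
import re

# One literal backslash requires two in source code: "\\".
name = "prefix\\collectionname"
assert len("\\") == 1  # the escaped pair is a single character

# The "variable literal safe" pattern quoted above for collection names:
SAFE = re.compile(r"[a-zA-Z][a-zA-Z0-9_-]*")
assert SAFE.fullmatch("users")
assert not SAFE.fullmatch(name)  # backslash is not in the allowed set
```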
[21:59:10] <_matix> i have a feeling that the prefix is actually the db name
[21:59:15] <_matix> not sure how it got added to the collection name
[21:59:40] <GothAlice> _matix: Ignoring credentials (user/pass, if used), what was the exact mongorestore line you used?
[22:00:10] <GothAlice> You may have specified the parent of the containing directory, instead of the directory containing the .bson and .json files.
[22:00:50] <_matix> mongorestore -d dbname path/to/folder/
[22:01:52] <_matix> thanks for your help