[04:36:40] <Trinity> hi does anyone know how the node.js mongodb driver works for objectid generation? Specifically, I want to know whether or not ObjectID's last 4 bytes are incremental or randomly assigned
[05:42:27] <GothAlice> Trinity: https://docs.mongodb.com/manual/reference/bson-types/#objectid < ObjectId is a compact compound value in 4 parts, one of which is… pseudo-incremental.
[05:42:46] <GothAlice> It starts at a random point on each process init.
[05:42:52] <Trinity> GothAlice, right but it differs between drivers iirc
[05:43:03] <Trinity> either way, I looked into the source, and the node.js driver implements it with an incrementing counter
[05:43:08] <GothAlice> No, all drivers should generate them identically.
[05:45:11] <GothAlice> Aye, however, that counter is not by itself unique, and was never claimed to be; the ObjectId as a whole is unique, up to 16.78 million ObjectId generations within a single second.
[05:45:27] <GothAlice> (Within a single process on a single host.)
[05:45:58] <Trinity> how does that relate to what i'm saying?
[05:48:32] <GothAlice> I already answered your initial question with my first statement: it's pseudo-incremental, starting at a random point, and followed it up with, effectively, "and it doesn't matter, because you'd still need to generate 16.78 million ObjectIds within a single process-node-second to generate a conflict." There's a timestamp value included, too.
[05:48:44] <GothAlice> See the linked SO answer's first point.
[05:48:59] <GothAlice> (First numbered point, that is.)
[05:49:34] <Trinity> hmm, you must've misunderstood then. I was asking whether all driver implementations follow a counter, or whether some use a random value instead, for the last 3 bytes of the ObjectID
[05:50:26] <GothAlice> AFAIK all modern drivers operate correctly. They've gotten quite good at unifying their API approaches.
[05:52:11] <GothAlice> (I'm actually in the process of deprecating a wrapping layer I was using because the base driver has improved so much.) The Python 'bson' package, provided by pymongo, operates "correctly" (i.e., according to the spec, starting at a random point and incrementing and wrapping for the life of the process).
[05:53:27] <Trinity> GothAlice, ah, thanks :) I guess I was just concerned due to outdated data
[05:53:35] <GothAlice> And the most I've ever been able to insert in a single second was 1.9 million records, so not even close to actually overflowing. ;P
[05:53:55] <GothAlice> That answer is from 2011, yeah; it's pretty old.
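The ObjectId layout GothAlice describes can be sketched in plain Python. This is a toy model of the spec for illustration (not the actual driver code): a 4-byte timestamp, 5 random bytes fixed per process, and a 3-byte counter seeded at a random point.

```python
import os
import time

# Toy model of the ObjectId layout (NOT real driver code):
# 4-byte big-endian timestamp, 5 random bytes fixed per process,
# and a 3-byte counter seeded at a random point, wrapping after
# 2**24 (~16.78 million) generations.
_process_random = os.urandom(5)                  # constant for process lifetime
_counter = int.from_bytes(os.urandom(3), "big")  # random starting point

def new_object_id() -> bytes:
    """Return 12 bytes laid out like an ObjectId."""
    global _counter
    oid = (int(time.time()).to_bytes(4, "big")
           + _process_random
           + _counter.to_bytes(3, "big"))
    _counter = (_counter + 1) % 2**24            # pseudo-incremental, wraps
    return oid

a, b = new_object_id(), new_object_id()
```

Two consecutive IDs from the same process share the middle 5 bytes, and their last 3 bytes differ by exactly 1 (mod 2**24), which is why a conflict needs 16.78 million generations in one process-second.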
[10:27:24] <krion> my replset is ok, but when i do a db.match.find on secondary i got error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
[10:32:40] <kurushiyama> krion It is important to understand a few things, the first of which is that a replica set is not intended as a means of load balancing, and that it comes with some intricacies if you (mis)use it that way.
[10:33:58] <krion> Thank you for your help last time.
[10:34:37] <kurushiyama> krion Well, I did not follow it through to the end, but I read that you did a full resync, which in your situation was most likely the best thing to do.
[10:34:38] <krion> Once again it's my customer who wants to use secondary as a readPreference in his PHP code.
[10:35:03] <krion> kurushiyama: yes, I used to do that almost every time I needed to reclaim disk space
[10:35:26] <krion> kurushiyama: I didn't use to turn off the balancer though, I will do it every time now
[10:35:33] <kurushiyama> krion Well, he _can_ do so – but depending on his write concern, he might get outdated data from those queries.
[10:36:09] <kurushiyama> krion Turning off the balancer is more of a precaution – minimize moving parts during maintenance.
[10:36:35] <kurushiyama> krion IIRC, he already has a sharded cluster, no?
[10:37:34] <krion> This is another customer, no sharding for him iirc
[10:37:51] <krion> Yes, I confirm, no sharding for him.
[10:38:21] <kurushiyama> krion Ah, ok. Well, as long as he is aware of the fact that he might get outdated data...
[10:39:12] <kurushiyama> krion And I'd strongly advise against setting the write concern equal to the number of data-bearing nodes just to make sure the data is identical on all nodes at any given point in time.
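For reference, the secondary-read behaviour krion's customer wants is usually opted into via the connection string; a sketch (host names and replica set name here are hypothetical):

```
mongodb://db1.example.net,db2.example.net,db3.example.net/?replicaSet=rs0&readPreference=secondary
```

With `readPreference=secondary` the "not master and slaveOk=false" (code 13435) error goes away, at the cost that reads may return stale data until replication catches up, exactly the caveat kurushiyama raises.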
[12:28:05] <agrandiere> I'm trying to install mongodb on mac but I have an error, any ideas? https://gist.github.com/anonymous/ae3a3d4800cb99fe3357f70b076b7b3c
[12:36:24] <koren> agrandiere: create the directory first
[12:59:45] <kurushiyama> agrandiere Follow the docs.
[13:01:19] <kurushiyama> agrandiere It is _essential_ to understand MongoDB first, before you do actual coding. This is true for _any_ technology, and for persistence technologies even more so.
[13:03:04] <agrandiere> yes I have found this https://docs.mongodb.com/manual/reference/connection-string/
[13:03:15] <agrandiere> But i have just http://localhost:27017/
[13:05:42] <kurushiyama> You cannot access MongoDB via HTTP
[13:07:14] <agrandiere> oh it works 'database': 'mongodb://localhost/20717'
[13:14:20] <therue> what kind of database should I use if my site allows users to upload mp3s, like those playlist sites? NoSQL (MongoDB), or will a traditional SQL-based database work?
[13:17:11] <kurushiyama> therue It will work, but MongoDB has a feature called GridFS which most likely suits your needs better.
[13:17:51] <kurushiyama> agrandiere it should be 'mongodb://localhost:27017'
[13:18:10] <kurushiyama> agrandiere Since with what you have now, you create a database called "20717"
[13:19:53] <therue> kurushiyama: thanks, i'll look into it
[13:20:25] <kurushiyama> therue What do you want to do, and what language are you using for it?
[17:13:46] <masood09_> Is there any limit on the number of tag ranges a collection can have in tag-aware sharding?
[17:14:00] <masood09_> I am looking at ~176,000 ranges
[17:28:14] <Mmike> Hi, lads. When I delete a database from mongo, are the user grants for that database also deleted?
[17:28:38] <Mmike> I have a database and I screwed something up, so I want to restore from the dump: delete the database, then use mongorestore to restore it
[17:39:07] <sweb> what's the most efficient filesystem for the data dir of mongodb?
[17:39:46] <sweb> ext4, btrfs or raw (if it could be handled)
[17:41:53] <GothAlice> sweb: Unless you know how to configure some of the more esoteric btrfs options, I'd avoid it in favour of reiser or ext4 for MongoDB use.
[17:44:19] <Mmike> but for mongo it's not good; btrfs is a CoW filesystem and mongo just doesn't like that
[17:45:10] <GothAlice> Well, yes. It's… finicky, not fully stabilized, and has some behaviours that utterly thrash database workloads. (Not to mention other uses, such as containers. It lacks page caching, handles small writes poorly, halves the performance of sequential writes, COW fragmentation, …)
[17:46:13] <GothAlice> sweb: Can't use a raw device (I've tried ;) due to a number of IO calls MongoDB makes (such as pre-allocation of stripes) which are impossible on a block device itself.
[17:59:39] <sweb> ty guys for the guidance ... seems mongodb uses journaling
[21:21:48] <_matix> hey guys, i have a bunch of dumped collections of the form {prefix}\{collectionname}.bson and {prefix}\{collectionname}.metadata.json
[21:22:46] <_matix> how do i import these? is the {prefix} the db name somehow? if so, how do i get mongorestore to interpret the prefix as the db name?
[21:22:55] <_matix> and if not, is it okay to just rename the collections?
[21:36:34] <GothAlice> _matix: The safest bet is to import the data with collection names as-is, then use a renameCollection command to re-name it to the final name. There is almost guaranteed to be metadata in the backup referencing the name it expects. (I'm not 100% on that, but it's the "safe bet".)
[21:37:06] <GothAlice> Now, that's for the collection bit, {collectionname} in your examples. The database name (containing folder) can be re-named as you wish; it's effectively unused when importing.
[21:57:34] <_matix> GothAlice: how do i deal with the illegal '\' in the collection name? I can't reference it. Have tried db['prefix\collectionname'] and that didn't work
[21:58:01] <GothAlice> The backslash symbol is used for escaping other symbols. To use a literal backslash in a string, you need two: \\
[21:58:19] <GothAlice> Second: never let that happen again. ;P Collection names should be "variable literal safe".
[21:58:44] <GothAlice> I.e. matching [a-zA-Z][a-zA-Z0-9]*
[21:58:59] <_matix> lol yeah someone sent me this dump