[00:18:32] <choke> seems like it, i'm reading the page you gave me -- and trying to figure this out... so hopefully i'll get it working
[00:49:46] <choke> Got the match working Boomtime -- attempting to get the sort working... json being: https://gist.github.com/ch0ke/32d0fcbe6ff31a8cc432 i know its wrong 'cause it tells me it is, just not certain what is wrong ( maybe i'm doing it completely wrong.... )
[00:54:43] <Boomtime> choke: that looks fine... what does it say is wrong?
[00:54:59] <Boomtime> also, are you testing this in the mongo shell? (i recommend this)
[00:58:26] <choke> nah im running it in php... it tells me, A pipeline stage specification object must contain exactly one field. so i'm trying something else i just googled right now
[01:23:24] <choke> closer... when i add in the group that you put up in my php, it errors saying: FieldPath field names may not start with '$'
[01:28:11] <Boomtime> choke: what versions of things are you using?
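For reference, a minimal mongo shell sketch of what the two errors above usually point to; the collection and field names here are invented, not choke's actual pipeline:

    db.orders.aggregate([
        { $match: { status: "active" } },                      // each stage must be its own one-field object;
                                                                // merging $match and $sort into a single object raises
                                                                // "A pipeline stage specification object must contain exactly one field"
        { $group: { _id: "$category", total: { $sum: 1 } } },   // "$" prefixes belong on right-hand values...
        { $sort: { total: -1 } }                                // ...never on key names, so { "$total": -1 } raises
                                                                // "FieldPath field names may not start with '$'"
    ])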
[05:18:55] <jenkinsprobe> Why does the createUser() function in mongodb allow you to set roles for a specific db, while it also matters which db you run the function in?
[05:23:42] <Boomtime> the database where you create the user is the database the user will authenticate against - that is not necessarily related to the authorizations granted as a result
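A minimal mongo shell sketch of the distinction Boomtime is drawing; the user, database, and role names are invented:

    use reporting                                            // the db you run createUser in becomes the user's authentication db
    db.createUser({
        user: "alice",
        pwd: "secret",
        roles: [ { role: "readWrite", db: "appdata" } ]      // ...while the granted roles can target any other db
    })
    db.getSiblingDB("reporting").auth("alice", "secret")     // the client authenticates against "reporting", then works in "appdata"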
[05:31:19] <jenkinsprobe> Why is there the option to authenticate via so many databases? why not just stick to a single authentication channel? having to track which users authenticate through which DBs sounds messy
[05:33:11] <Boomtime> if you wanted to silo your user base you might not think so
[05:33:24] <Boomtime> why don't you just auth against the same database every time and ignore this feature?
[05:51:30] <jenkinsprobe> I can, I'm just curious as to why the system was designed like that
[05:51:46] <jenkinsprobe> and what do you mean by "silo your user base"?
[06:01:26] <Boomtime> the system was designed like that because all respectable credential systems have namespaces - this is true of all enterprise databases
[06:08:18] <jenkinsprobe> @Boomtime, if you take MySQL for instance -- the authentication mechanism isn't tied to a namespace. You create a user that has privileges on certain DBs and can connect from certain hosts; there isn't a namespace per se.
[06:09:24] <jenkinsprobe> What I want to understand is in what scenario would it be useful to authenticate a user based on different DBs rather than a global auth channel?
[06:09:50] <Boomtime> when you have different groups of users who must not collide
[06:10:31] <jenkinsprobe> what collisions are you speaking of?
[06:11:46] <Boomtime> without namespaces, usernames would have to be universally unique
[06:12:05] <jenkinsprobe> I see, so what you're saying is that the only scenario is if I want to have two users with the same name on the system, segregated by the auth db
[06:12:13] <Boomtime> that's true on a single IRC server and it's annoying here, let alone in a distributed database
[06:12:40] <Boomtime> namespaces let you scale as widely as you like
[06:13:07] <Boomtime> without namespaces you have to assign everything in that one global namespace a unique ID
[06:13:23] <Boomtime> you are just seeing the ramifications of that one rule applied to authentication
[06:14:16] <Boomtime> so for a small price to the little guy like you who sees no need for the mechanism, the feature permits massive scale
[06:14:23] <jenkinsprobe> Okay. I'm just confused because I don't know of a scenario where I would need that many users on a DB system to the point where names will collide
[06:15:17] <Boomtime> it is only after you experience the lack of a feature that you can understand the need for that feature
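A sketch of the namespacing point, assuming invented database and user names: the same username can exist independently under two different authentication databases, so the two groups never collide.

    use teamA
    db.createUser({ user: "deploy", pwd: "pwA", roles: [ { role: "readWrite", db: "teamA" } ] })
    use teamB
    db.createUser({ user: "deploy", pwd: "pwB", roles: [ { role: "readWrite", db: "teamB" } ] })
    // "deploy@teamA" and "deploy@teamB" are distinct principals with independent
    // passwords and roles, even though both groups picked the same username.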
[06:48:53] <jonjon> so I guess this is not possible? db.users.update({"name_lower":"bbbb"}, {"$addToSet": {"set1":"value1"}, "$addToSet": {"set2":"value2"}}, {upsert:true})
[06:48:59] <jonjon> its only adding to set2 and set1 is ignored
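jonjon's update only applies the second $addToSet because a BSON document can't carry the same key twice, so the later key overwrites the earlier one. A sketch of the usual workaround, keeping the same fields and values: put both fields inside a single $addToSet.

    db.users.update(
        { "name_lower": "bbbb" },
        { "$addToSet": { "set1": "value1", "set2": "value2" } },
        { upsert: true }
    )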
[13:58:38] <PirosB3> is anything like this possible?
[14:31:12] <wayne> is there any easier way to deal with dates?
[14:31:26] <wayne> frankly, the javascript built-in date object is horrible
[16:25:35] <Chepra> Hey, what is repairDatabase supposed to do on a replica-set slave?
[17:04:16] <doxavore> is a DB repair the only way to reclaim disk space? mongodb doesn't seem to be reusing *any* free space in my gridfs cluster after files have been deleted.
[17:10:13] <GothAlice> doxavore: You can also compact space by transferring the records from one collection to another, then rename the two collections. I think there are some other strategies, too.
[17:10:38] <GothAlice> I.e. a copyDatabase followed by two renames.
[17:11:23] <GothAlice> doxavore: Scratch all that. http://docs.mongodb.org/manual/reference/command/compact/
[17:14:10] <doxavore> so i shouldn't expect mongo to reuse any space until an entire extent is free (db.stats() shows extentFreeList somewhere)? otherwise it keeps tacking data onto the end?
[17:14:51] <doxavore> i think I understand the compact vs repair - compact will give me those free extents at the end by defragging, not the free disk space... repair will give me both
[17:14:53] <GothAlice> I believe it'll make use of holes freed by deletions, but I may be wrong. I know that growing a record beyond the padding factor will force it to be re-appended to the end of the stripe, though.
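A sketch of the compact command GothAlice linked, run per collection (here against the default GridFS collections); on MMAPv1 it defragments extents and frees them for reuse but, unlike repairDatabase, does not return disk space to the OS, and it blocks operations on the database while it runs:

    db.runCommand({ compact: "fs.chunks" })
    db.runCommand({ compact: "fs.files" })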
[17:15:24] <doxavore> my issue is with gridfs specifically, on immutable files kept around for a few weeks and then deleted.
[17:16:24] <GothAlice> Are those files initially added in batches? If so, it may be worthwhile to partition them into separate collections. Cleanup turns into a dropCollection, then.
[17:17:26] <doxavore> it's a pretty constant trickle in
[17:17:46] <GothAlice> In my own use of GridFS I keep my "metadata" in a separate collection with references into one of several GridFS collections.
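A sketch of the bucket-per-batch idea GothAlice describes; the prefix names here are invented. A GridFS "collection" is just a <prefix>.files / <prefix>.chunks pair, so retiring an old batch becomes two drops instead of many per-file deletes:

    db.getCollection("archive_2014_11.files").drop()
    db.getCollection("archive_2014_11.chunks").drop()
    // a separate metadata collection keeps documents like
    // { bucket: "archive_2014_11", file_id: ObjectId(...), name: "report.pdf" }
    // so lookups still know which bucket holds which file.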
[17:18:10] <doxavore> i'm already partitioning across about 10 DBs for write throughput during peak times, i'm okay with using 500GB or so per DB, but it should level off somewhere around there and it doesn't seem to
[17:18:18] <GothAlice> Hmm; that's rather unfortunate then. You'll end up with some pretty large dead space extents.
[17:18:48] <doxavore> Guess I just put everyone on a compact or repair schedule :) thanks.
[17:19:53] <GothAlice> Heh; I've got 24 TiB of data in MongoDB and GridFS and have never had to compact or repair… then again, nothing has ever been deleted from the dataset. ^_^
[20:08:58] <hydrajump> hi I'm trying to get the 2.4.11 rpm and any dependencies like this:
[20:55:48] <GothAlice> Rats. I'll have to be satisfied when better bitfield support comes out, instead. ;)
[20:55:53] <_newb> GothAlice: whoa, i CAN store packed IPs~! ... you sure this is standard practice? or am i better with ip2long() ?
[20:56:45] <GothAlice> _newb: Storing as bindata should work quite well for you, actually. You could also (and this gets into the realm of "how do you want to *use* these IPs") store them as arrays of integers.
[20:57:41] <_newb> GothAlice: i don't really need to do anything too freaky right now, just checking access
[20:57:42] <Derick> GothAlice: difficult to search for a /27 though
[20:58:12] <Derick> _newb: if you want straight equality matches only, Bindata will work just fine
[20:58:19] <GothAlice> Derick: *coughbitfieldscough* Still waiting on that darned ticket. (SERVER-3518)
[20:58:26] <_newb> GothAlice: so i'm still sorta leaning toward the ip2long() recommendation
[20:59:04] <Derick> GothAlice: you're not getting a disagreement from me there :-) Plenty of other things to do too though
[20:59:30] <GothAlice> _newb: Won't work for IPv6, but storing as an integer is the usual approach, where possible.
[21:02:50] <GothAlice> Derick: And hey, you could do a /27, you'd just have to calculate the allowable choices and use $in. Less good, but still acceptable. ;)
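A sketch of the integer approach under discussion (IPv4 only; collection and field names are invented). PHP's ip2long("203.0.113.37") yields the same 32-bit value stored here, and because a /27 covers 32 contiguous addresses it can also be matched with a plain range query rather than an $in list:

    db.acl.insert({ ip: 3405803813 })                          // 203.0.113.37 stored as an integer
    db.acl.find({ ip: 3405803813 })                            // straight equality check for access control
    db.acl.find({ ip: { $gte: 3405803808, $lt: 3405803840 } }) // everything in 203.0.113.32/27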
[21:03:00] <_newb> GothAlice: do you know if $_SERVER['REMOTE_ADDR'] ever returns an IPv6?
[21:03:21] <GothAlice> _newb: In PHP? No idea. Probably, yes, if the initial request came over IPv6.