PMXBOT Log file Viewer

#mongodb logs for Friday the 10th of October, 2014

[00:18:32] <choke> seems like it, i'm reading the page you gave me -- and trying to figure this out... so hopefully i'll get it working
[00:49:46] <choke> Got the match working Boomtime -- attempting to get the sort working... json being: https://gist.github.com/ch0ke/32d0fcbe6ff31a8cc432 i know its wrong 'cause it tells me it is, just not certain what is wrong ( maybe i'm doing it completely wrong.... )
[00:54:43] <Boomtime> choke: that looks fine... what does it say is wrong?
[00:54:59] <Boomtime> also, are you testing this in the mongod shell? (i recommend this)
[00:55:06] <Boomtime> mongo shell
[00:56:00] <Boomtime> ok, i commented on your previous gist
[00:56:54] <Boomtime> https://gist.github.com/ch0ke/68cc2d0a38366c0ee38e
[00:58:26] <choke> nah im running it in php... it tells me, A pipeline stage specification object must contain exactly one field. so i'm trying something else i just googled right now
[01:23:24] <choke> closer... when i add in the group that you put up in my php, it errors saying: FieldPath field names may not start with '$'
[01:28:11] <Boomtime> choke: what versions of things are you using?
[01:28:15] <Boomtime> php driver and mongodb
[01:29:25] <choke> php is 5.5, mongodb not sure... if running mongo -v is the right check, then it's the 2.4 version set, and $$ROOT was only added in 2.6
[01:29:40] <Boomtime> yep, i wondered if that was it
[01:29:53] <Boomtime> nevermind, select the fields you want instead of using the $$ROOT
[01:33:44] <choke> or just upgrade to mongo 2.6 and voila
[01:33:52] <Boomtime> :D
[01:34:10] <choke> works a charm.. thank you very much
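
(For reference, a minimal mongo shell sketch of the two errors above, with made-up collection and field names: each pipeline stage has to be its own object in the array, and $$ROOT only exists from MongoDB 2.6 on, so on 2.4 you pick the fields explicitly.)

    // 2.6+: each stage is a separate object; $$ROOT keeps the whole document
    db.items.aggregate([
        { $match: { status: "active" } },
        { $sort:  { created: -1 } },
        { $group: { _id: "$category", first: { $first: "$$ROOT" } } }
    ])

    // 2.4: no $$ROOT, so select the individual fields you want instead
    db.items.aggregate([
        { $match: { status: "active" } },
        { $sort:  { created: -1 } },
        { $group: { _id: "$category", name: { $first: "$name" }, created: { $first: "$created" } } }
    ])
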
[05:18:07] <jenkinsprobe> hello
[05:18:55] <jenkinsprobe> Why does the createUser() function in mongodb let you set roles for a specific db, while it also matters which db you run the function against?
[05:23:42] <Boomtime> the database where you create the user is the database the user will authenticate against - that is not necessarily related to the authorizations granted as a result
[05:31:19] <jenkinsprobe> Why is there the option to authenticate via so many databases? why not just stick to a single authentication channel? having to track which users authenticate through which DBs sounds messy
[05:33:11] <Boomtime> if you wanted to silo your user base you might not think so
[05:33:24] <Boomtime> why don't you just auth against the same database everytime and ignore this feature?
[05:51:30] <jenkinsprobe> I can, I'm just curious as to why the system was designed like that
[05:51:46] <jenkinsprobe> and what do you mean by "silo your user base"?
[06:01:26] <Boomtime> the system was designed like that because all respectable credential systems have namespaces - this is true of all enterprise databases
[06:08:18] <jenkinsprobe> @Boomtime, take MySQL for instance: the authentication mechanism isn't tied to a namespace. You create a user, grant it privileges on specific DBs, and say which host it can connect from; there isn't a namespace per se.
[06:09:24] <jenkinsprobe> What I want to understand is in what scenario would it be useful to authenticate a user based on different DBs rather than a global auth channel?
[06:09:50] <Boomtime> when you have different groups of users who must not collide
[06:10:31] <jenkinsprobe> what collisions are you speaking of?
[06:10:44] <Boomtime> users
[06:10:57] <jenkinsprobe> I meant, why would there be a collision?
[06:11:08] <Boomtime> from a username
[06:11:46] <Boomtime> without namespaces, usernames would have to be universally unique
[06:12:05] <jenkinsprobe> I see, so what you're saying is the only scenario is if I want to have two users with the same name on the system, segregated by the auth db
[06:12:13] <Boomtime> that's true on a single IRC server and it's annoying here, let alone in a distributed database
[06:12:40] <Boomtime> namespaces let you scale as widely as you like
[06:13:05] <jenkinsprobe> okay i see.
[06:13:07] <Boomtime> without namespaces you have to assign everything in that one global namespace a unique ID
[06:13:23] <Boomtime> you are just seeing the ramifications of that one rule applied to authentication
[06:14:16] <Boomtime> so for a small price to the little guy like you who sees no need for the mechanism, the feature permits massive scale
[06:14:23] <jenkinsprobe> Okay. I'm just confused because I don't know of a scenario where I would need that many users on a DB system to the point where names would collide
[06:15:17] <Boomtime> it is only after you experience the lack of a feature that you can understand the need for that feature
[06:16:10] <jenkinsprobe> Yep
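
(A small sketch of the namespacing point, with made-up names: the database you run createUser() against becomes the user's authentication database, while the roles it is granted can point at any other database.)

    // Create the user in the "reporting" database...
    use reporting
    db.createUser({
        user: "alice",
        pwd: "secret",
        roles: [ { role: "read", db: "sales" } ]   // ...but grant access to "sales"
    })

    // The user then authenticates against "reporting", not "sales":
    //   mongo sales -u alice -p secret --authenticationDatabase reporting
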
[06:48:53] <jonjon> so I guess this is not possible? db.users.update({"name_lower":"bbbb"}, {"$addToSet": {"set1":"value1"}, "$addToSet": {"set2":"value2"}}, {upsert:true})
[06:48:59] <jonjon> its only adding to set2 and set1 is ignored
[06:53:00] <joannac> "$addToSet": {"set1":"value1", "set2": "value2"}
[06:53:03] <joannac> jonjon: ^^
[06:53:15] <jonjon> ohhhhh :)
[06:53:21] <jonjon> thank you
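
(For reference: a JavaScript object literal can only hold one "$addToSet" key, so the second one overwrites the first before MongoDB ever sees it. Both fields go under a single operator instead:)

    db.users.update(
        { "name_lower": "bbbb" },
        { "$addToSet": { "set1": "value1", "set2": "value2" } },
        { upsert: true }
    )
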
[07:50:24] <yacc> Question, how can I transfer (from a file based backup) a database from one mongodb server to another?
[07:51:03] <yacc> Is it enough to copy dbname.* from /var/lib/mongodb to the new server? (Assuming, btw, exactly the same version of mongodb)
[09:23:30] <mr-wildcard> hi
[13:14:59] <juuaosdasd> Hi all, is it possible to execute a sha256 function in mongodb? I need to do a bulk update and execute this function
[13:15:47] <joannac> no
[13:17:57] <joannac> juuaosdasd: hash before inserting into mongodb?
[13:18:00] <juuaosdasd> thanks @joannac. I suppose I'll have to do that outside mongodb :(
[13:18:43] <juuaosdasd> the problem is that I have data within a collection for which I have to calculate sha256
[13:19:14] <juuaosdasd> we migrated to mongodb2.6 and it has index limitations
[13:19:30] <juuaosdasd> so, we discovered a problem with a compound index
[13:20:02] <juuaosdasd> the solution I'm following is to apply sha256 to the long fields
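
(Since there is no server-side sha256, the hashing has to happen in the application; a rough sketch with the Node.js driver, using made-up collection and field names, just to illustrate the approach:)

    // Compute sha256 client-side and store the digest so it can be indexed
    // in place of the over-long field.
    const crypto = require('crypto');
    const { MongoClient } = require('mongodb');

    async function hashLongFields() {
        const client = await MongoClient.connect('mongodb://localhost:27017');
        const coll = client.db('mydb').collection('items');
        for await (const doc of coll.find({ longField: { $exists: true } })) {
            const digest = crypto.createHash('sha256').update(doc.longField).digest('hex');
            await coll.updateOne({ _id: doc._id }, { $set: { longFieldHash: digest } });
        }
        await client.close();
    }
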
[13:23:57] <lfamorim> Hello! Someone know why this update takes so long? http://pastebin.com/prTYGX96
[13:24:16] <lfamorim> I have to create a BG index instead of a non-bg?
[13:56:38] <PirosB3> Hi all
[13:56:49] <PirosB3> if I have a collection with documents
[13:57:04] <PirosB3> {city: x, store: y, timestamp: xxxx}
[13:57:46] <PirosB3> how can I find the delta of every timestamp, aggregating by city + store
[13:58:00] <PirosB3> so, something as:
[13:58:33] <PirosB3> {$group: {_id: {city: "$city", store: "$store"}, delta_ts: {$delta: "$timestamp"}}}
[13:58:38] <PirosB3> is anything like this possible?
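
(There is no $delta operator; if "delta" means the span between the earliest and latest timestamp per city/store, one way to get it, with a made-up collection name, is a $min/$max group followed by a $subtract. Pairwise deltas between consecutive documents would need $push plus client-side work instead.)

    db.readings.aggregate([
        { $group: {
            _id:   { city: "$city", store: "$store" },
            first: { $min: "$timestamp" },
            last:  { $max: "$timestamp" }
        } },
        { $project: {
            delta_ts: { $subtract: [ "$last", "$first" ] }
        } }
    ])
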
[14:31:12] <wayne> is there any easier way to deal with dates?
[14:31:26] <wayne> frankly, the javascript built-in date object is horrible
[16:25:35] <Chepra> Hey, what's a repairDatabase supposed to do on a replica-set slave?
[17:04:16] <doxavore> is a DB repair the only way to reclaim disk space? mongodb doesn't seem to be reusing *any* free space in my gridfs cluster after files have been deleted.
[17:10:13] <GothAlice> doxavore: You can also compact space by transferring the records from one collection to another, then rename the two collections. I think there are some other strategies, too.
[17:10:38] <GothAlice> I.e. a copyDatabase followed by two renames.
[17:11:23] <GothAlice> doxavore: Scratch all that. http://docs.mongodb.org/manual/reference/command/compact/
[17:14:10] <doxavore> so i shouldn't expect mongo to reuse any space until an entire extent is free (db.stats() shows extentFreeList somewhere)? otherwise it keeps tacking data onto the end?
[17:14:21] <GothAlice> Basically, yes.
[17:14:51] <doxavore> i think I understand the compact vs repair - compact will give me those free extents at the end by defragging, not the free disk space... repair will give me both
[17:14:53] <GothAlice> I believe it'll make use of holes freed by deletions, but I may be wrong. I know that growing a record beyond the padding factor will force it to be re-appended to the end of the stripe, though.
[17:15:21] <GothAlice> stripe/extent
[17:15:24] <doxavore> my issue is with gridfs specifically, on immutable files kept around for a few weeks and then deleted.
[17:16:24] <GothAlice> Are those files initially added in batches? If so, it may be worthwhile to partition them into separate collections. Cleanup turns into a dropCollection, then.
[17:17:26] <doxavore> it's a pretty constant trickle in
[17:17:46] <GothAlice> In my own use of GridFS I keep my "metadata" in a separate collection with references into one of several GridFS collections.
[17:18:10] <doxavore> i'm already partitioning across about 10 DBs for write throughput during peak times, i'm okay with using 500GB or so per DB, but it should level off somewhere around there and it doesn't seem to
[17:18:18] <GothAlice> Hmm; that's rather unfortunate then. You'll end up with some pretty large dead space extents.
[17:18:48] <doxavore> Guess I just put everyone on a compact or repair schedule :) thanks.
[17:19:53] <GothAlice> Heh; I've got 24 TiB of data in MongoDB and GridFS and have never had to compact or repair… then again, nothing has ever been deleted from the dataset. ^_^
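
(The compact command GothAlice links defragments a single collection in place; for GridFS that means running it on both the files and chunks collections. A sketch, assuming the default "fs" GridFS prefix; note that on MMAPv1 compact blocks the database it runs against, so it is usually run on secondaries in turn or in a maintenance window.)

    db.runCommand({ compact: "fs.files" })
    db.runCommand({ compact: "fs.chunks" })
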
[20:08:58] <hydrajump> hi I'm trying to get the 2.4.11 rpm and any dependencies like this:
[20:09:00] <hydrajump> sudo yum install --downloadonly --downloaddir=./ mongo-10gen-2.4.11 mongo-10gen-server-2.4.11 --exclude mongodb-org,mongodb-org-server
[20:09:07] <hydrajump> because I want to install offline on another server
[20:09:30] <hydrajump> but it's only downloading mongo-10gen-server-2.4.11-mongodb_1.x86_64.rpm and mongo-10gen-2.4.11-mongodb_1.x86_64.rpm
[20:09:37] <hydrajump> and not the dependencies
[20:31:02] <ehershey> hydrajump: I think you have to use that other yum tool
[20:31:05] <ehershey> yumdownloader
[20:31:26] <ehershey> it has a param for including dependencies I think
[20:31:35] <ehershey> I don't know about yum install --downloadonly
[20:31:39] <ehershey> if it can do what you want
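
(The option ehershey is thinking of is --resolve; something along these lines, reusing the package names from above, should pull the dependencies as well:)

    yumdownloader --resolve --destdir=./ mongo-10gen-2.4.11 mongo-10gen-server-2.4.11
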
[20:32:59] <_newb> what's the most efficient way to store an IP to mongodb for indexing/querying?
[20:44:25] <_newb> what's the most efficient way to store an IP to mongodb for indexing/querying?
[20:52:08] <GothAlice> _newb: Convert the IPv4 address into its numerical counterpart. Turns into a 32-bit integer.
[20:52:27] <_newb> GothAlice: same for IPv6 ?
[20:52:30] <Derick> careful though, as MongoDB stores signed integers
[20:52:34] <GothAlice> IPv6 turns into a 128-bit integer.
[20:53:34] <GothAlice> I don't think MongoDB supports bigints…
[20:53:37] <_newb> i'm guessing i can't store a packed IP
[20:53:59] <GothAlice> You could store the binary representation of the integers yourself; not sure how that would impact indexing.
[20:54:19] <Derick> nope, 64 bits is max
[20:54:32] <_newb> GothAlice: i was looking at the inet_pton() function, but i don't know how that would impact indexing either.
[20:54:46] <_newb> ...and i doubt it's queryable
[20:54:53] <Derick> bindata is
[20:55:01] <Derick> it just sorts "strange" (first on length)
[20:55:04] <GothAlice> Derick: Any plans for 128-bit or greater bigint support in the future?
[20:55:16] <Derick> not that I'm aware off
[20:55:48] <GothAlice> Rats. I'll have to be satisfied when better bitfield support comes out, instead. ;)
[20:55:53] <_newb> GothAlice: whoa, i CAN store packed IPs~! ... you sure this is standard practice? or am i better with ip2long() ?
[20:56:45] <GothAlice> _newb: Storing as bindata should work quite well for you, actually. You could also (and this gets into the realm of "how do you want to *use* these IPs") store them as arrays of integers.
[20:56:50] <GothAlice> I.e. [127, 0, 0, 1]
[20:56:57] <GothAlice> That'd let you do some fancy things with the data.
[20:57:10] <GothAlice> (I.e. subnet searches.)
[20:57:41] <_newb> GothAlice: i don't really need to do anything too freaky right now, just checking access
[20:57:42] <Derick> GothAlice: difficult to search for at /27 though
[20:58:12] <Derick> _newb: if you want straight equality matches only, Bindata will work just fine
[20:58:19] <GothAlice> Derick: *coughbitfieldscough* Still waiting on that darned ticket. (SERVER-3518)
[20:58:26] <_newb> GothAlice: so i'm still sorta leaning toward the ip2long() recommendation
[20:59:04] <Derick> GothAlice: you're not getting a disagreement from me there :-) Plenty of other things to do too though
[20:59:30] <GothAlice> _newb: Won't work for IPv6, but storing as an integer is the usual approach, where possible.
[21:02:50] <GothAlice> Derick: And hey, you could do a /27, you'd just have to calculate the allowable choices and use $in. Less good, but still acceptable. ;)
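
(A small mongo shell sketch of the integer approach, with a made-up collection name: converting with plain arithmetic avoids the signed 32-bit wrap Derick warns about, and NumberLong keeps the full unsigned range. Equality lookups and subnet checks then become ordinary indexed queries.)

    // dotted-quad IPv4 -> unsigned 32-bit value
    function ipToLong(ip) {
        return ip.split(".").reduce(function (acc, octet) {
            return acc * 256 + parseInt(octet, 10);
        }, 0);
    }

    db.access.insert({ ip: NumberLong(ipToLong("192.168.0.1")) })
    db.access.ensureIndex({ ip: 1 })

    // equality lookup
    db.access.find({ ip: NumberLong(ipToLong("192.168.0.1")) })

    // a /24 becomes a bounded range scan on the index
    db.access.find({ ip: { $gte: NumberLong(ipToLong("192.168.0.0")),
                           $lte: NumberLong(ipToLong("192.168.0.255")) } })
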
[21:03:00] <_newb> GothAlice: do you know if $_SERVER['REMOTE_ADDR'] ever returns an IPv6?
[21:03:21] <GothAlice> _newb: In PHP? No idea. Probably, yes, if the initial request came over IPv6.
[21:03:35] <_newb> GothAlice: carp.
[21:04:00] <GothAlice> My own framework does expose IPv6 addresses through the WSGI (CGI) environment if used.
[21:04:54] <_newb> GothAlice: i believe all mobile requests are IPv6
[21:05:20] <GothAlice> _newb: No.
[21:05:31] <_newb> GothAlice: at least on my version of apache
[21:05:43] <Derick> that... makes little sense
[21:05:56] <GothAlice> My phone has no IPv6 capability over LTE.
[21:06:20] <GothAlice> (whatismyv6.com fails when running mobile, for me)
[21:06:52] <GothAlice> _newb: Ah! I think I know what you're seeing.
[21:07:32] <GothAlice> _newb: ::ffff:192.168.0.1
[21:07:44] <GothAlice> Do these supposed IPv6 addresses start with ::ffff:?
[21:08:50] <GothAlice> (If so, those are just IPv4 addresses encapsulated in IPv6 encoding.)
[21:09:08] <_newb> umYES
[21:09:12] <GothAlice> :)
[21:09:16] <_newb> oh.
[21:09:32] <_newb> yea, i'ma trim that off, and work with 32bit tyvm. :P
[22:15:17] <hydrajump> ehershey: I tried but it's still trying to resolve dependencies online which I don't get