[00:11:56] <timeturner> how do you guys deal with temp users (which need to confirm their accounts) and switching those accounts over to a full user when they do confirm?
[00:12:07] <timeturner> currently I just have two separate collections
[00:14:51] <svenstaro> looking up a single flag is rather fast
[00:14:58] <svenstaro> since you already have the user
[00:15:06] <svenstaro> but finding a user every time is a lot slower
[00:19:17] <timeturner> So when a user logs in with email and pass I would: query by id (which has a unique index) and return the whole doc (which I do anyway), then check whether the 'code' field exists. If it does, I'd tell the user that they haven't confirmed yet; if it doesn't, I'd check their pass against the pass in the db (via hash check etc.) and then if that completes
[00:19:18] <timeturner> successfully I would shove all the relevant fields into their session on the server
[00:19:25] <timeturner> If I'm thinking about this correctly
[00:20:12] <timeturner> I would actually have to make the 'code' field indexed as well
[04:07:34] <dirn> timeturner: not sure if you're still around. I've been on and off IRC all day so I'm not sure, did you ever get an answer to your question?
[06:37:01] <NodeX> anyone alive who's adept with the aggregation framework
[07:44:24] <NodeX> I've got to say that the aggregation framework is not consistent
[07:45:29] <NodeX> I have 3 different queries: 2 that work with $match and give expected results, and one that doesn't but gives expected results if I run it without $match. it's very strange
[07:53:59] <Gargoyle> Derick: I was wondering if xdebug could provide any insight into my apache segfault? I enabled debug level logging and coredumping, but the last segfault left no core dump, and nothing useful in the logs.
[07:54:31] <Gargoyle> I've got a copy of the server running under a vm, but I can't get the bloody thing to crash!
[07:59:03] <NodeX> the inconsistency comes from Mongo giving different results for different queries; in one of them it shouldn't even give a result but does
[07:59:37] <NodeX> All I can narrow it down to is that when Mongo can't find a key, it sums everything else in the pipeline and returns that
[07:59:48] <NodeX> (I assume it's intended behaviour)
[08:00:02] <NodeX> Derick : is it a simple pecl update?
[08:19:22] <jwilliams> https://jira.mongodb.org/browse/SERVER-4328 shows that the db-level lock ticket is closed, but the collection-level locking ticket is still open.
[08:19:40] <jwilliams> does that mean mongodb 2.2 still uses a global lock?
[08:20:15] <jwilliams> or can the lock be applied to a collection without affecting other collections?
[08:21:20] <kali> jwilliams: in 2.2 the lock is database per database
[08:21:40] <kali> jwilliams: before 2.1/2.2, it was process wide
[08:22:06] <kali> jwilliams: so it's "one step small" and all that
[08:22:43] <kali> cmex: honestly I don't understand what you're asking
[08:23:17] <cmex> kali we have 3 servers, as in the replica set tutorial. the question is what happens if the master server goes down
[08:23:26] <cmex> the server I'm writing to and reading from
[08:23:46] <cmex> is it something I need to handle at the application level or the system level?
[08:24:00] <jwilliams> kali: what's the difference between lock on process wide and lock on database? or aren't prior to 2.1 the lock is always on database (global lock) already?
[08:24:00] <NodeX> an election takes place, doesn't it?
[08:24:14] <kali> cmex: the remaining host will elect a new primary, and your drivers will reconnect to the new master (after the first fail query)
[08:24:52] <cmex> what do you mean by remaining host... sorry for the noob questions
[08:25:00] <kali> jwilliams: if you have two databases in your mongo, (as in "show db", "use my_database") they have separate locks
[08:25:51] <kali> cmex: when the primary goes down, the secondaries elect a new primary among themselves
[08:26:59] <cmex> ok let's say I have 3 servers: 212.x.x.1, 212.x.x.2, 212.x.x.3, and 212.x.x.1 is master. how does 212.x.x.2 know when something happens and it must become the master, and how does it become a slave again when the master comes back up?
[08:27:43] <kali> cmex: first of all, the preferred terminology is primary and secondary, not master and slave (which refer to the archaic mechanism)
[08:58:04] <Gargoyle> I've found the request causing it, and now to track it down to a bit of code.
[09:09:14] <NodeX> Finally narrowed it down. it seems in certain queries Mongo wants sort in one place in the pipeline and in others it wants it outside the pipeline
[09:10:15] <NodeX> it's totally dependent on the query, and furthermore the query will still execute both ways, but one way returns a SUM of everything in one result and the other returns the (expected) array of, say, pages viewed plus the counts
[09:18:41] <Derick> Gargoyle: if you could run it in gdb, that'd be awesome. Are you using apache or php-fpm?
[09:50:24] <NodeX> it seems it does cast now, but I remember a problem a long time ago where I was inserting ints and they ended up as strings or whatever; then I would search for them in the same code and they would not be returned unless I cast them on both the insert and the query
[10:27:56] <Gargoyle> ppetermann: /opt is for optional software!
[10:28:04] <Zelest> then just symlink it to /home/<user>/www.domain.tld :-)
[10:29:03] <ppetermann> Gargoyle: /var contains variable data files. This includes spool directories and files, administrative and logging data, and transient and temporary files.
[10:29:45] <ppetermann> Applications must generally not add directories to the top level of /var. Such directories should only be added if they have some system-wide implication, and in consultation with the FHS mailing list.
[10:29:46] <Gargoyle> ppetermann: At the moment, this web app IS variable data! ;p
[10:30:26] <ppetermann> to be completely correct it should be below /srv ;)
[10:30:59] <Gargoyle> I think I actually used /srv on a gentoo setup many moons ago!
[10:31:47] <NodeX> I use a totally separate cryptolooped mountpoint inside /home
[10:32:29] <ppetermann> for what you serve through http?
[10:33:18] <Derick> Gargoyle: no errors with the export and valgrind either?
[10:37:17] <Gargoyle> Essentially, that is what the app is doing when it crashes apache.
[10:38:15] <Gargoyle> We've switched to using update() and there is the odd call to save() not yet cleaned up - didn't think it would do much harm if an item was updated and then saved right away.
[10:40:36] <Derick> Gargoyle: i meant, what did you run on the CMD?
[11:34:37] <Derick> Zelest: different cipher chaining algorithms
[11:35:12] <Zelest> mhm, i found http://www.chilkatsoft.com/p/php_aes.asp which was useful to read
[11:41:44] <NodeX> Zelest, it's a very good idea to salt what you're encrypting too
[11:46:03] <Zelest> ok? correct me if I'm wrong, but the idea of salting is to make each set of encrypted data unique, right? for example, if two users use the same password, the two passwords encrypted should still be different, right?
[12:03:00] <NodeX> anyone have an idea why db.collection.aggregate([{"$match":{"ymd":20120904}},{"$group":{"_id":"$comment","total":{"$sum":1}}},{"$sort":{"total":-1}}]); gives me the total count but if I drop the $match is give me comments + count ?
[12:20:56] <NodeX> thanks, it's driving me insane why $match won't work on certain queries, so I need to check I'm calling it correctly
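For reference, here is what that pipeline is expected to compute, expressed as plain JavaScript over some hypothetical sample documents ($match filters on ymd, $group with $sum: 1 counts docs per distinct comment value, $sort orders by the count descending):

```javascript
// Sample docs standing in for the collection
const docs = [
  { ymd: 20120904, comment: 'nice' },
  { ymd: 20120904, comment: 'nice' },
  { ymd: 20120904, comment: 'meh'  },
  { ymd: 20120903, comment: 'nice' },
];

// $match: {ymd: 20120904}
const matched = docs.filter(d => d.ymd === 20120904);

// $group: {_id: '$comment', total: {$sum: 1}}
const groups = {};
for (const d of matched) {
  groups[d.comment] = (groups[d.comment] || 0) + 1;
}

// $sort: {total: -1}
const result = Object.keys(groups)
  .map(k => ({ _id: k, total: groups[k] }))
  .sort((a, b) => b.total - a.total);

console.log(result); // [ { _id: 'nice', total: 2 }, { _id: 'meh', total: 1 } ]
```

If the pipeline instead returns a single grand total, that usually means the group key resolved to one value for every document (e.g. the field path didn't match, so `_id` is null for all docs) — consistent with NodeX's observation above about missing keys.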
[12:22:58] <Derick> NodeX: suggestions for improving this docs, please add a comment at https://jira.mongodb.org/browse/PHP-476
[12:24:47] <Derick> Gargoyle: the mongoid I don't have to redact, right?
[12:25:05] <Zelest> NodeX, mhm, I've just seen so many broken solutions for salting where they use a static salt for all "rows" .. meaning it's just a little trickier to crack (as it's probably longer) but multiple "rows" still have the same encrypted data. :P
[12:27:52] <NodeX> I'll make some examples Derick because the examples are kind of hard to follow as they don't have sort and so on in them; once I have my head round it I'll add it all ;)
[12:36:41] <Zelest> if I generate a random string for salting and store that in the db.. or if I use the username for salting.. what difference does that make really? :P
[12:37:11] <Zelest> the idea, as far as I get it, is to protect the remaining data after one "row" is cracked.
[13:06:00] <luqasn> hi there, i got a very strange problem with mongodb not persisting all properties of the DBObject I pass to the java driver, does anybody have any pointer on how I could diagnose the issue?
[13:06:29] <luqasn> it looks like it just ignores some of the fields, which is very strange
[13:09:15] <Gargoyle> Sorry Derick, Was grabbing some lunch. What was that about the ID?
[13:12:47] <NodeX> luqasn : can you perhaps turn your query into json and pastebin it ?
[13:13:01] <NodeX> it sounds like you're using or not using $set
[13:17:03] <Lujeni> hello - I want to upgrade my replica set. Should I upgrade the arbiter first and then the mongods?
[13:17:10] <NodeX> I must say this pipeline approach to aggregation is a genius idea
[13:21:44] <m4rtijn> I'm using mongoid 3 with ruby 1.9.3 - I have a Thread object with an Array of participant ids.. how would I search for a Thread which has all the participant_ids I give it?
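This is what MongoDB's `$all` operator does (in Mongoid 3 the query would be something along the lines of `Thread.all_in(participant_ids: ids)` — worth checking against the Mongoid docs). The matching semantics in plain JavaScript:

```javascript
// $all semantics: a thread matches if its array contains every requested id
function matchesAll(thread, ids) {
  return ids.every(id => thread.participant_ids.includes(id));
}

const thread = { participant_ids: [1, 2, 3] };
console.log(matchesAll(thread, [1, 3])); // true: both ids present
console.log(matchesAll(thread, [1, 4])); // false: 4 is not a participant
```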
[13:26:35] <luqasn> thx NodeX, you were right, I forgot that my worker was accessing the same database, but with outdated classes via a mapper and so deleted fields on save()
[14:42:18] <timeturner> how do I set a ttl on specific documents in a collection?
[14:42:21] <Gargoyle> { type: {$ne: 'Feature'}, legacy_IsBusinessType: 1 } gives 30 or so entries from the collection. I need to search the other few thousand by name.
[14:42:40] <timeturner> For example I want to remove a user that hasn't confirmed their account yet
[14:49:32] <timeturner> earlier I had two different collections, tempusers and users, but the lack of atomic transactions across documents made it difficult, so I decided to consolidate both into one users collection
[14:50:00] <NodeX> what are temp users in your situation?
[14:50:45] <timeturner> temp users are those who have just registered and have been sent a code to their email to confirm their account
[14:51:07] <timeturner> so the only difference between users and temp users (in terms of fields) is that tempusers have the code field
[14:51:19] <timeturner> and then I have to transfer them over to the users collection
[14:52:51] <timeturner> but now I don't know how to have the documents that are older than 24 hours automatically deleted without setting up another index on the code field and running through all of the docs with a cron job or something
[14:54:00] <NodeX> personally, if I did things in a temp way, I would store the user in redis/memcache with an expiring key
[14:54:17] <NodeX> when the user comes to activate, pull the data and then insert it
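MongoDB 2.2 also added TTL indexes (e.g. `db.users.ensureIndex({createdAt: 1}, {expireAfterSeconds: 86400})`), but a TTL index expires every document carrying the indexed date field, so in a single consolidated collection it only works if that field is removed (or never set) on confirmed users. The 24-hour cutoff logic itself is simple; a sketch over hypothetical in-memory docs:

```javascript
const DAY_MS = 24 * 60 * 60 * 1000;

// Hypothetical users; unconfirmed ones still carry a 'code' field
const now = Date.now();
const users = [
  { _id: 'old-temp',  code: 'abc', createdAt: new Date(now - 2 * DAY_MS) },
  { _id: 'new-temp',  code: 'def', createdAt: new Date(now - 1000) },
  { _id: 'confirmed',              createdAt: new Date(now - 3 * DAY_MS) },
];

// Equivalent of a periodic
//   db.users.remove({code: {$exists: true}, createdAt: {$lt: cutoff}})
const cutoff = new Date(now - DAY_MS);
const kept = users.filter(u => !('code' in u && u.createdAt < cutoff));

console.log(kept.map(u => u._id)); // [ 'new-temp', 'confirmed' ]
```

The confirmed user survives despite being old, because only docs that still have a `code` field are candidates for removal.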
[15:39:25] <cmex> Derick:Wed Sep 05 18:15:36 [rsHealthPoll] replSet member xxx:27017 is now in state SECONDARY
[15:40:16] <cmex> Derick: and in rs.status its still recovering
[15:41:16] <cmex> Derick: last line in log : Wed Sep 05 18:40:40 [rsBackgroundSync] replSet not trying to sync from Master:27017, it is vetoed for 301 more seconds
[16:14:42] <Azoth> can I do manual find operations on a secondary in a replica set ?
[16:16:06] <Azoth> for an unknown reason (to me) I can't do show collections on the secondary machine
[16:27:08] <enw> FWIW - Yesterday I had a bunch of questions related to auth issues querying from a ReplicaSet SECONDARY with no auth. The problem was.... the PRIMARY had auth enabled. No problem now that they're all noauth.
[16:42:01] <juanjosegzl> quick question: is it possible to do a batch upsert?
[17:17:26] <zanefactory> any tips for reducing replication lag? for some reason, i had a fail in my 3 node replica set, and the original primary is now a secondary and all hosts are reporting replication lag over 50k secs
[17:54:28] <Dr{Who}> a few times now I have had my secondary servers fall behind because the hardware is not as beefy as the primary's, but when this happens they have so far never recovered. They seem to fall behind and then are not able to catch up even if all or most traffic stops on the primary. Any ideas what could cause this?
[18:10:34] <R-66Y> is there a $size for associative arrays?
[18:10:44] <R-66Y> just to return how many keys an associative array has?
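As far as I know, `$size` only works on real arrays, not on subdocuments used as associative arrays; client-side the key count is a one-liner:

```javascript
// Counting the keys of an embedded document (associative array) client-side
const doc = { tags: { a: 1, b: 2, c: 3 } };
console.log(Object.keys(doc.tags).length); // 3
```

Server-side, one common workaround is to store a separate counter field alongside the subdocument and maintain it with `$inc`.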
[18:11:51] <Dr{Who}> I received a DR102 (https://jira.mongodb.org/browse/SERVER-4890) but I am running 2.2.1-pre, where it seems that was fixed. yet here it is, and at that point it seems like replication stopped
[18:15:19] <Dr{Who}> looks like https://jira.mongodb.org/browse/SERVER-6816 is what happens after the DR102
[19:35:02] <eka> hi all... working with the new aggregation framework... is it possible to have nested arithmetic operations? like { $add: {1 , $add: { '$count', 1}}
[19:48:02] <crudson> eka: 1) try it and see what happens! 2) nesting $add doesn't make much sense; it can take an array of items, and any of which can refer to an existing attribute
[19:55:14] <Dr{Who}> E5-2690 e5-2650 or ?? for the best IO performance
[21:48:48] <Gavilan2> Can you put "documents" inside of other "documents"? Or to simulate that, do I need to put a document id inside the first document?
[22:00:39] <wereHamster> Gavilan2: google 'mongodb embedded documents'
[23:36:15] <EvanCarroll> After using this JIRA, I honestly think RT was probably not that shitty.
[23:41:07] <sirpengi> EvanCarroll: never had to administer JIRA, but I've liked the frontend (though I've only used mongo's)
[23:43:17] <EvanCarroll> I've even filed an issue before, and I can't figure out how to file an issue.
[23:43:23] <EvanCarroll> That said, I don't half care.
[23:43:53] <EvanCarroll> It's another proprietary issue/bug tracker. I just don't understand how so many of them miss the mark on usability
[23:44:05] <EvanCarroll> RT was extremely usable, it was just ugly as a horse's ass.
[23:44:16] <EvanCarroll> I think CPAN just had an extremely old version.
[23:45:00] <EvanCarroll> I could never figure out why there was no Markdown in the comments, or some method of indicating you were replying to other people, or why they chose to make replies re-open tickets, or a few other things. But it was really simple to use.
[23:45:35] <EvanCarroll> I had an issue last week with Mongo where I wanted to file a suggestion on the Ubuntu packaging, it took me like 3 minutes to figure out how it was supposed to be sorted.