#mongodb logs for Tuesday the 10th of February, 2015

[01:45:50] <smoke_> hi guys, i'm migrating from mysql to mongodb and was wondering: for user auth on my website, should i have a separate user database in mongo or should i have the user credentials included in the main db?
[02:19:33] <morenoh149> how do I insert an array of json objects into a collection?
[02:19:52] <morenoh149> I have [{foo: foo},{bar: bar}]
[02:20:02] <morenoh149> I want them both to be in collection foo
[02:20:30] <morenoh149> not one document with an array but two documents
[02:51:57] <dimon222> morenoh149, you have to iterate through them and do insert element-by-element or collect a set of insert statements and execute bulk set
[02:52:21] <dimon222> / bulk set is faster
[02:53:06] <dimon222> example - http://docs.mongodb.org/manual/reference/method/Bulk.insert/#Bulk.insert
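
A minimal shell sketch of the bulk approach dimon222 links above (collection name `foo` taken from the conversation; this is the 2.6-era Bulk API from the linked docs):

    var bulk = db.foo.initializeUnorderedBulkOp();  // unordered: server may reorder for speed
    bulk.insert({foo: "foo"});
    bulk.insert({bar: "bar"});
    bulk.execute();                                 // returns a BulkWriteResult with nInserted: 2
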
[03:00:56] <morenoh149> dimon222: can't I do db.foos.insert(foosArray)
[03:01:07] <dimon222> unfortunately no
[03:01:33] <morenoh149> http://docs.mongodb.org/manual/reference/method/db.collection.insert/#insert-multiple-documents
[03:01:45] <dimon222> actually it says http://docs.mongodb.org/manual/reference/method/db.collection.insert/
[03:01:55] <dimon222> you can try, but i'm not really sure about that
[03:03:29] <morenoh149> -.- I'm right
[03:04:16] <dimon222> the problem is that it may not be able to accept a simple array
[03:04:31] <dimon222> because insert statement requires specific syntax
[03:04:42] <dimon222> so basically db.collection.insert([{docNumber: 1},{docNumber: 2}])
[03:05:00] <joannac> ?
[03:05:01] <dimon222> i'm not sure how to achieve such syntax with array in one command or whatever
[03:05:29] <morenoh149> the limitation is that the mongo shell can't store the array in a temp variable. but it's straightforward
[03:05:30] <joannac> morenoh149: as long as each document is your array is a valid JSON document, you're fine
[03:05:48] <joannac> each document in your array*
[03:05:51] <morenoh149> oh wait yes it can
[03:05:54] <morenoh149> :)
[03:06:18] <dimon222> array in temp variable yeah, it's a different case
[03:13:04] <morenoh149> dimon222: parse error ☝️
[03:14:49] <dimon222> hm, did u pass a string or an actual object? i think it should be an object
[03:21:38] <morenoh149> nvm it was a joke
[03:45:18] <dimon222> :O
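
morenoh149 turns out to be right: as the linked insert() docs say, passing an array creates one document per element. A minimal shell sketch:

    db.foo.insert([{foo: "foo"}, {bar: "bar"}]);  // two separate documents, not one
    db.foo.count();                               // 2
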
[04:32:21] <morenoh149> wait, in an aggregation, the order you provide limit and skip in matters?
[04:33:50] <joannac> no?
[04:34:16] <joannac> http://docs.mongodb.org/manual/core/aggregation-pipeline-optimization/#skip-limit-sequence-optimization
[04:39:48] <morenoh149> joannac: https://www.youtube.com/watch?v=joRw-fqCIWA confusing explanation then
[04:45:52] <joannac> morenoh149: how is that confusing?
[04:47:21] <joannac> oh wait, because I told you the wrong thing
[04:47:23] <joannac> sorry
[04:47:28] <joannac> yes, the order matters
[04:49:48] <morenoh149> okay got it. I feel like it shouldn't be order dependent though. are the other clauses order dependent? say call group then match
[04:51:35] <joannac> no, but they'll get rewritten if possible and if it's more efficient to do so
[04:53:06] <Boomtime> morenoh149: you think that "limit 20 then skip 10" should do the same thing as "skip 10 then limit 20" ?
[04:54:46] <cba321> hi every1
[04:55:02] <Boomtime> regarding group/match, if i group and create a count field, then match on that count field, how could the order of those operations not matter?
[04:55:11] <Boomtime> hi cba321
[04:55:27] <cba321> I got a question for u guys
[04:56:00] <cba321> can i use an operator in the $setOnInsert operator ?
[04:56:26] <cba321> for example addToSet
[04:56:39] <Boomtime> what do you think the result would be?
[04:57:00] <cba321> ok let me explain a bit more
[04:57:08] <cba321> i am trying to do an upsert
[04:57:09] <Boomtime> you are inserting already, by definition, creating the document, what is addToSet going to do?
[04:57:25] <cba321> my update is an addToSet to an existing field
[04:57:47] <cba321> I'll add a new elmt to an existing set
[04:58:13] <joannac> cba321: setOnInsert only applies when the upsert resolved to an insert
[04:58:31] <joannac> if it's an insert, the document does not exist yet
[04:59:37] <cba321> if it doesn't match then I upsert, and insert a new document with, in that "event" field, a set containing the new elmt and another old elmt
[04:59:56] <cba321> i dont know if thats clear enough :/
[05:00:02] <joannac> no
[05:00:42] <cba321> ok my input is an array of size 2
[05:01:10] <cba321> event = [(event1, timestamp), (event2,timestamp)]
[05:01:19] <cba321> and i do
[05:02:25] <cba321> update(<query>,{$addToSet : {events : event[-1]}, $setOnInsert : {events : event}},upsert=true)
[05:02:35] <cba321> so that if there is upsert
[05:02:57] <cba321> events is the full array
[05:03:23] <cba321> but it brings up an error because field events is called twice
[05:03:30] <cba321> is that clearer ?
[05:03:40] <cba321> I'd really appreciate help :)
[05:08:04] <morenoh149> Boomtime: I think it'd totally be okay if mongodb didn't care in what order the clauses were provided in. The aggregate pipeline could always do match>group>skip>limit. As long as it was documented. Though the existing approach works too.
[05:09:07] <morenoh149> cba321: is there any way to just get the new items to be added to the array? would make the mongodb queries simpler
[05:10:30] <Boomtime> morenoh149: do you think that the instructions "limit 20 then skip 10" is the same set of instructions as "skip 10 then limit 20" ? because i think they are radically different
[05:11:13] <morenoh149> Boomtime: yes they are different if order matters. I originally thought the order didn't matter and that skips always execute before limits.
[05:12:03] <Boomtime> my point here is that order matters in the intention
[05:12:11] <Boomtime> it has nothing to do with mongodb
[05:12:28] <morenoh149> meh
[05:12:33] <Boomtime> "limit 20 then skip 10" = maximum of 10 items can be returned
[05:12:44] <Boomtime> "skip 10 then limit 20" = maximum of 20 items can be returned
[05:13:13] <morenoh149> dude I get that. I just thought mongodb ran skips then limits regardless of the order the statements appear in
[05:13:29] <Boomtime> it does, but it doesn't change their meaning
[05:13:51] <morenoh149> I also get that now
[05:14:00] <Boomtime> goodo
[05:15:10] <Boomtime> "skip 10 then limit 20" is read by mongodb as "limit 30 then skip 10"
[05:15:31] <morenoh149> right. joannac link made that clear
[05:15:36] <Boomtime> ok
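
A sketch of the two orderings under discussion (collection name assumed):

    db.coll.aggregate([{$skip: 10}, {$limit: 20}]);  // up to 20 documents returned
    db.coll.aggregate([{$limit: 20}, {$skip: 10}]);  // up to 10 documents returned
    // per the linked optimization docs, the first form is rewritten
    // internally to {$limit: 30} followed by {$skip: 10}
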
[05:17:39] <Boomtime> cba321: https://jira.mongodb.org/browse/SERVER-10711
[05:19:02] <Boomtime> that is not a solution, that is the short-coming in $setOnInsert that defines your issue
[05:24:35] <cba321> thanks, but actually i got my solution :)
[05:24:54] <cba321> since addToSet only appends new elements
[05:25:10] <morenoh149> Boomtime: if you dot limit 10 then limit 20. you get 10 docs?
[05:25:13] <morenoh149> do*
[05:25:51] <cba321> I just have to addToSet <field> $each the whole array and I do not need any setOnInsert
[05:29:39] <morenoh149> likely yes
[05:32:55] <Boomtime> cba321: if that solution works for you, that is excellent news
[05:33:03] <Boomtime> morenoh149: i would expect so, yes
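
A sketch of cba321's final approach, using the `event` array from above (the query and the element shape are assumptions):

    var event = [["event1", 123], ["event2", 456]];  // shape from the conversation
    // $addToSet with $each appends every element not already present;
    // on an upsert the new document simply gets the whole array,
    // so no $setOnInsert is needed and the double-field error goes away
    db.coll.update(
        {_id: "someId"},                             // assumed query
        {$addToSet: {events: {$each: event}}},
        {upsert: true}
    );
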
[05:59:16] <chanced> i'm running into some weird behavior with $near; it doesn't matter what I set the $maxDistance to, I'm not seeing results which should be included
[06:01:48] <joannac> chanced: gist / pastebin
[06:02:01] <chanced> yea, trying to figure out how best to go about pasting it
[06:02:51] <chanced> joannac: 1 sec, thanks :)
[06:21:24] <chanced> joannac, ugh, this would be a convoluted paste so I tried to keep it simple http://pastebin.com/JBAkQYWL
[06:22:16] <chanced> joannac: the $maxDistance is 100 miles which the missing record is within
[06:22:40] <joannac> chanced: huh?
[06:23:35] <chanced> joannac: essentially I'm seeing records drop out when I use $near despite it being within the distance
[06:23:59] <joannac> chanced: sqrt( (80.92737499999999784 - 81.17336599999999)^2 + ( 34.109264000000003136 - 34.078567)^2) gives 0.24789892676
[06:26:49] <chanced> joannac: I was dividing by 3,959
[06:27:59] <joannac> what's your index?
[06:28:12] <chanced> 2d
[06:28:44] <chanced> yea, even if i bump it up to 20
[06:28:48] <chanced> it still isn't showing
[06:28:49] <chanced> hmmph
[06:30:22] <chanced> is it possible that records didn't get indexed?
[06:39:46] <joannac> chanced: what version of mongod
[06:40:04] <chanced> joannac: 2.6.5
[06:44:31] <joannac> chanced: works for me
[06:44:50] <chanced> wtf, i really don't understand why the record is dropping out
[06:45:01] <joannac> because your maxDistance is too small?
[06:45:31] <chanced> nope, i've kicked it up to 300
[06:45:35] <chanced> it doesnt matter
[06:45:58] <joannac> show me your indexes?
[06:48:01] <chanced> http://pastebin.com/tGwE6kir
[06:48:11] <chanced> they're generated by mongoose
[06:48:18] <chanced> joannac: ^
[06:49:52] <joannac> chanced: http://pastebin.com/1GjhZzDu
[06:49:56] <joannac> can you try it in the shell?
[06:52:45] <chanced> joannac: i have been, unfortunately :/
[06:53:16] <chanced> one difference between yours and mine is the background flag on my index
[06:53:56] <joannac> has the index finished building?
[06:54:41] <chanced> it should have (it has been a long time)
[06:55:31] <joannac> check db.currentOp
[06:57:02] <chanced> { "inprog" : [ ] }
[06:57:53] <joannac> okay, pastebin the second query in my pastebin
[06:58:11] <joannac> on your system
[06:58:16] <joannac> also db.version()
[06:58:51] <chanced> 2.6.5
[07:01:08] <morenoh149> how do you increment by one in an aggregation?
[07:01:24] <joannac> morenoh149: ? in what stage? $group?
[07:01:42] <morenoh149> yeah $group. then I want the number of docs that matched
[07:02:09] <joannac> $group: {_id: something, count: {$sum: 1}}
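
joannac's snippet, spelled out as a full pipeline (the collection, filter, and grouping field are assumptions):

    db.coll.aggregate([
        {$match: {status: "A"}},                         // assumed filter
        {$group: {_id: "$someField", count: {$sum: 1}}}  // $sum: 1 adds one per matching doc
    ]);
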
[07:02:40] <chanced> joannac: i think i may have found one of the causes; it is only returning back 100 results no matter the limit
[07:03:04] <joannac> chanced: I have no idea how that relates to your problem, but okay?
[07:03:28] <chanced> joannac: it should be 9k records; if the sort is only applying to the nearest 100 records
[07:03:38] <chanced> it wouldn't include those missing
[07:04:29] <joannac> oh, right
[07:04:39] <joannac> okay
[07:05:33] <joannac> next time, it would be useful if you mentioned you had 9k records
[07:05:39] <chanced> joannac: i really appreciate your help. its 2am here so im running on fumes.
[07:05:48] <joannac> np
[07:06:00] <chanced> joannac: sorry, i didn't realize it'd matter
[07:07:56] <chanced> yea, sob: "$near queries that use a 2d index return a limit of 100 documents."
[07:08:09] <chanced> ugh
[07:21:19] <chovy> how can i reliably parse this attribute out of this error message?
[07:21:20] <chovy> insertDocument :: caused by :: 11000 E11000 duplicate key error index: offsite-dev.tags.$name_1_type_1 dup key: { : "Perl", : "skillTag" }
[07:21:31] <chovy> basically $name_1_type_1
[07:21:36] <chovy> the attribute is `type`
[07:21:52] <chovy> actually name
[07:21:55] <chovy> $name_1
[07:22:11] <chovy> i just don't know all the possibilities here. it would be nice if it told me which field generated the error.
[07:22:20] <chovy> like in this case its $name
[07:22:40] <joannac> I think you're confused
[07:22:49] <joannac> you have a unique index on name and type
[07:23:11] <joannac> you're trying to insert a document with the same name,type pair as another document
[07:23:31] <joannac> the fields that generated the error are "name" and "type"
[07:27:03] <chovy> yeah
[07:27:20] <chovy> but the offending value is there because the name is violating the key
[07:27:27] <chovy> is that what the $name signifies?
[07:27:45] <joannac> no, the offending value is the name,type pair
[07:28:00] <chovy> it wouldn't be .name_1_$type_1
[07:28:24] <chovy> hmmm
[07:28:33] <chovy> i have two fields in the form, name and type.
[07:28:41] <chovy> i need to know which field to surface this error to.
[07:28:51] <Boomtime> chovy: it is not a single field
[07:29:08] <Boomtime> it is the combination of two fields, name & type are both involved
[07:29:13] <chovy> i see
[07:29:22] <chovy> so i should surface error to both fields then.
[07:29:24] <joannac> if you need just "name" to be unique, you need a different index
[07:29:38] <joannac> or "type" to be unique, you need another different index
[07:29:45] <chovy> i guess i can split on '_1' and generate an error for each attrib?
[07:30:03] <Boomtime> huh?
[07:30:27] <Boomtime> you cannot split this error, changing the value of either field would fix the error - even if you kept the value of the other field
[07:30:53] <chovy> so what should the ui do?
[07:31:09] <Boomtime> that is a UI choice for you to make
[07:31:09] <chovy> i usually display an error next to the erroneous input
[07:31:21] <chovy> in this case it should be both
[07:31:24] <chovy> name + type
[07:31:39] <Boomtime> correct, the combination of them though, changing one would potentially make the error go away on the other
[07:31:43] <chovy> so i need a way to build that list from this error msg
[07:32:47] <Boomtime> the index gives you the name of the index, you cannot strictly trust it to contain the fields involved
[07:32:56] <chovy> i see
[07:32:58] <Boomtime> you can name an index anything you like
[07:33:08] <Boomtime> you will need to use that name to look up in the index list what the keys are
[07:33:30] <Boomtime> sorry, the *error* gives you the name of the index
[07:33:32] <chovy> oh
[07:33:36] <chovy> how do i get an index list?
[07:33:58] <Boomtime> all indexes are contained in the special collection system.indexes for each database
[07:34:21] <chovy> k
[07:34:26] <chovy> thanks
[07:34:52] <Boomtime> each document in that collection defines one index in some other collection, the document structure is fairly self explanatory
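
A sketch of that lookup, using the namespace and index name from chovy's error message (2.x-era shell, where system.indexes is still directly queryable):

    // each document here describes one index; its "key" field lists the indexed fields
    db.system.indexes.find({ns: "offsite-dev.tags", name: "name_1_type_1"});
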
[07:36:58] <chovy> what happens if i delete an index?
[07:37:08] <chovy> do the old values still retain the index constraint?
[07:37:23] <chovy> s/old/existing/
[07:39:02] <Boomtime> the index is the only thing enforcing the constraints
[07:39:20] <Boomtime> if you delete the index, you no longer have the constraint and can insert whatever you like
[07:48:20] <chovy> Boomtime: thanks
[07:48:34] <chovy> turns out i don't really need this composite index. it's causing more problems than it solves.
[07:48:43] <chovy> i am just going to require unique names.
[07:50:27] <Boomtime> yep, complex features exhibit complex behavior, keep it simple
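
The simpler index chovy settles on might look like this (2.x shell; collection name from the error message, and dropping the old index first is an assumption about his setup):

    db.tags.dropIndex("name_1_type_1");              // remove the composite unique index
    db.tags.ensureIndex({name: 1}, {unique: true});  // require names alone to be unique
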
[07:54:32] <morenoh149> any help here? https://gist.github.com/morenoh149/06a93947799017822a22
[07:56:55] <Boomtime> morenoh149: group on the combination of city & state, sum the pop
[07:58:46] <morenoh149> Boomtime: you use key for that? http://docs.mongodb.org/manual/reference/command/group/#group-by-two-fields
[08:00:11] <Boomtime> morenoh149: you may be able to do it that way, i'm actually not sure, but i would use the aggregation pipeline since it's a single stage: http://docs.mongodb.org/manual/reference/operator/aggregation/group/#pipe._S_group
[08:06:59] <morenoh149> Boomtime: does that look right? https://gist.github.com/morenoh149/06a93947799017822a22
[08:07:35] <Boomtime> at a glance, looks about right.. does it work?
[08:07:59] <Boomtime> (sorry, i don't have much time now)
[08:08:01] <morenoh149> I need to first filter out cities with less than 25K people
[08:08:14] <morenoh149> it's okay I work through it on my own bit by bit
[08:08:33] <Boomtime> that's actually the best way with aggregation, slowly build up the pipeline you need bit by bit
[08:14:11] <morenoh149> so I have a row per state per city. How do I calculate the average population per city,state key?
[08:16:05] <morenoh149> err average population in the whole set rather
[08:59:23] <aaearon> http://pastebin.com/MmzarjtK how could i insert a field for each sub array inside of the documents 'codes' array?
[09:17:01] <morenoh149> solved it!
[09:20:22] <morenoh149> aaearon: so say adding foo into codes[0]? so codes[0].foo != undefined
[09:20:58] <aaearon> foo into all codes' sub arrays
[09:24:17] <aaearon> db.test.update({'orientationId': 79}, {$set: {"codes.$.invalid": false}}, {multi: true}) is what I would think, but i 'Cannot apply the positional operator without a corresponding query field containing an array.'
[09:27:36] <morenoh149> db.test.update(<query>, { $set: { "codes.$.foo": foo }}, {multi: true})
[09:30:22] <morenoh149> db.test.update({ codes: 1 }, { $set: { "codes.$.foo": foo }}, {multi: true})
[09:30:56] <morenoh149> the error is saying you can only do codes.$ if `codes` is mentioned in the query params as well aaearon
[09:31:46] <aaearon> hmm... so when i have multiple orientationIds, how can I add foo to each array in codes in one swoop?
[09:32:39] <morenoh149> codes: 1
[09:33:09] <morenoh149> ^ that would just let you use the $ operator I think
[09:33:14] <morenoh149> it's a passthrough
[09:34:02] <aaearon> alright your last update query isnt updating any documents it appears
[09:35:11] <aaearon> db.test.find({codes: 1}) returns nothing
[09:35:48] <morenoh149> :(
[09:37:13] <morenoh149> { codes: {$exists: true}}
[09:41:14] <aaearon> http://pastebin.com/pM5aFTAS however i feel that we're close
[09:42:40] <morenoh149> ☝️
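
For the record, the positional `$` only ever rewrites the first matching array element per document, so adding a field to every entry of `codes` on a server of this era needs client-side iteration. A hedged sketch (field name from aaearon's attempt above):

    db.test.find({codes: {$exists: true}}).forEach(function (doc) {
        doc.codes.forEach(function (c) { c.invalid = false; });  // touch every element
        db.test.save(doc);                                       // write the document back
    });
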
[10:11:47] <Constg> Hello there, do you know if, to use read preference Secondary Preferred (PHP client), the secondary needs to have slaveOk = true?
[10:48:18] <Bringi_> here you go with your shitty database http://www.heise.de/newsticker/meldung/Studenten-entdecken-Tausende-offene-Firmen-Datenbanken-im-Internet-2545183.html
[10:48:51] <Zelest> uhm?
[10:49:09] <Bringi_> 40.000 MongoDB servers world-wide open
[10:49:14] <Bringi_> we call it world wide web
[10:49:18] <Bringi_> MongoDB stinks
[10:49:28] <Zelest> obvious troll is obvious
[10:49:31] <Bringi_> and it shows how stupid the MongoDB users are
[10:49:39] <Bringi_> idiots should not do database work
[10:49:57] <Zelest> indeed, and ranting on IRC shows how smart you are or what?
[10:50:25] <Bringi_> MongoDB, the database made by idiots for idiots
[10:50:33] <Bringi_> database scum
[10:50:51] <Derick> hmm?
[10:50:54] <Zelest> ^
[10:50:59] <Bringi_> calling mum?
[10:51:18] <Bringi_> MongoDB, the most horrible shit on the web
[10:51:24] <Derick> Bringi_: lay it off
[10:51:32] <Bringi_> used by database scum
[10:51:43] <Zelest> ty
[10:52:45] <Derick> of course, they continue in PM
[10:53:12] <Zelest> Haha, big surprise :)
[10:53:26] <andrer> I wonder what motivates people like that.
[10:53:52] <Zelest> or what their goal/hopes are.
[10:55:21] <Derick> I told him that if he wants to discuss MongoDB he can email me
[10:55:30] <Derick> but I don't get called a nazi often
[10:56:29] <Derick> well, I call myself a feminist - is that the same?
[10:56:44] <Zelest> to me, it is :)
[10:56:59] <Zelest> and worth mentioning, feminism is not the same as equality. :P
[10:57:02] <Derick> ok.
[10:57:14] <Derick> I am going to disagree, but that topic is not for here.
[11:01:22] <Zelest> ugh, i'm bored at work now :(
[11:25:08] <morenoh149> -.-
[11:25:26] <Zelest> uhm?
[11:25:50] <morenoh149> bored also
[11:26:30] <morenoh149> managed to finish all the homework due this week for the mongo mooc. go me
[11:27:11] <morenoh149> this on the other hand http://cispa.saarland/wp-content/uploads/2015/02/MongoDB_documentation.pdf
[11:27:25] <morenoh149> makes me sad. "Several thousand MongoDBs without access control on the Internet"
[11:28:51] <Zelest> but then again, the same thing was common for both FTP and MySQL back in the days.. sad indeed though
[11:29:00] <morenoh149> put that in the topic ☝️
[11:54:00] <fhain> now with the MongoDB gate today with 40.000 exploited installations... will we see a mass-layoff of "certified" MongoDB engineers?
[11:54:42] <andrer> There will however be a mass-layoff of "certified" trolls :)
[11:55:20] <fhain> so companies will layoff their IT scum and then run to MongoDB Inc for security audits....clever business model!
[11:55:25] <andrer> http://i.imgur.com/oHOkxmp.gif
[11:55:55] <fhain> as always said: idiots should not be in charge of databases
[11:56:08] <fhain> and obviously there are > 40.000 tards doing MongoDB work
[11:56:13] <andrer> fhain: Yeah, baby, talk dirty to me
[11:56:21] <andrer> fhain: I love it when you dirtytalk in bed.
[11:57:54] <fhain> obviously the MongoDB scum does not know the difference between localhost and public IP ...but that's typical for this overhyped database garbage
[11:58:00] <fhain> made by idiot for idiots
[11:58:07] <fhain> made by idiots for idiots
[11:58:13] <fhain> made by MongoDB scum for MongoDB scum
[11:58:37] <fhain> now kick me
[11:59:19] <andrer> fhain: Please, no, don't stop the dirty talk. I love it when you tell me what a bad boy I've been
[11:59:25] <kexmex> ?
[12:00:00] <fhain> better go to bed and think about a better database for your job
[12:01:18] <andrer> fhain: Ooh, but I'm already in bed.
[12:38:45] <StephenLynx> i find it funny
[12:38:51] <StephenLynx> when people hate something
[12:39:01] <StephenLynx> and instead of just staying away and expressing their opinions when asked
[12:39:30] <fhain> freedom of speech?
[12:39:32] <StephenLynx> they go out of their way to shit talk on the subject
[12:39:37] <StephenLynx> not saying you are not free
[12:39:47] <StephenLynx> you can talk w/e you want about anything
[12:39:55] <StephenLynx> but you put so much energy on it.
[12:40:19] <StephenLynx> first of all, any tech has its flaws. people didn't ditch openssl because of heartbleed
[12:40:24] <StephenLynx> or shell because of shellshock
[12:41:44] <StephenLynx> and if mongodb is so shitty and useless, you are free to use other dbs, no one is trying to convince you of anything here, no one invited you.
[12:42:17] <fhain> things must be said
[12:42:40] <StephenLynx> mongo had an exploit. ok. got it.
[12:42:52] <StephenLynx> that indeed had to be said.
[12:43:17] <fhain> and it had to be said that MongoDB work is done by idiots
[12:43:20] <StephenLynx> but I will try and learn what the fuck is mongogate before engaging on specifics.
[12:43:33] <fhain> and I would fire an engineer producing such a leak
[12:43:49] <StephenLynx> afaik, mongo is FOSS, right?
[12:43:57] <StephenLynx> you can't really fire people who volunteer.
[12:44:25] <StephenLynx> and any software has flaws, any software has exploits.
[12:44:50] <StephenLynx> as I already mentioned, hearbleed and shellshock. among many others for sure, I don't follow security too closely.
[12:45:03] <kexmex> what's the Mongo exploit?
[12:45:08] <StephenLynx> dunno, trying to find out.
[12:45:30] <kexmex> link if you get pls
[12:45:30] <StephenLynx> google'd it, found out it's trending on twitter.
[12:46:08] <StephenLynx> http://blog.schmichael.com/2011/11/05/failing-with-mongodb/ and http://seancribbs.com/tech/2011/11/07/mongodb-and-riak-in-context-and-an-apology/ reading both right now.
[12:46:29] <_rgn> 2011
[12:46:33] <StephenLynx> yeah
[12:46:36] <StephenLynx> I fucked up on that
[12:46:38] <StephenLynx> :v
[12:47:06] <kexmex> heh
[12:47:13] <kexmex> why dont fhain link
[12:47:35] <fhain> http://www.faz.net/aktuell/technik-motor/computer-internet/sicherheitsluecke-entdeckt-millionen-kundendaten-im-internet-frei-zugaenglich-13418820.html
[12:47:39] <fhain> use google translate
[12:47:48] <fhain> 40.000 exploited high profile sites
[12:48:09] <StephenLynx> any material on english?
[12:48:52] <kexmex> 40,000
[12:48:54] <kexmex> or 40.00? :)
[12:48:56] <fhain> see, you are even too retarded to use Google Translate
[12:49:02] <StephenLynx> hurr durr
[12:49:03] <fhain> fourty thousand
[12:49:09] <StephenLynx> maybe I don't trust that piece of shit translator
[12:49:11] <kexmex> fhain: probably fake if not in english
[12:49:32] <fhain> typical MongoDB ignorance and arrogance
[12:49:37] <fhain> this is the MongoDB sink
[12:49:38] <StephenLynx> derpderpderp
[12:50:00] <StephenLynx> so this is it? a random german article?
[12:50:56] <fhain> http://twitter.com/wood5y/statuses/565124377042112512
[12:51:16] <fhain> http://t.co/ZdIP9O3Br6
[12:51:25] <StephenLynx> ok, now we're getting somewhere.
[12:51:29] <StephenLynx> reading it
[12:52:40] <StephenLynx> ok, so the problem is that the default install does not require authentication, is that it?
[12:52:50] <StephenLynx> but the default install doesn't allow external connections either.
[12:52:59] <kexmex> wait
[12:53:02] <kexmex> so this is for ppl
[12:53:09] <kexmex> who dont portscan their servers
[12:53:14] <kexmex> to make sure nothing extra is open
[12:54:10] <kexmex> fhain: this is user fail, not mongo fail
[12:54:20] <StephenLynx> the MongoDB service default configuration enables local access only. Its main configuration file is
[12:54:20] <StephenLynx> usually found at:
[12:54:27] <kexmex> However, a common setup and scalable solution for most Internet services is to have a database server running on one physical machine, while the services using this database service are (often virtualized) running on another machine. In this case, the easiest solution is to comment out the flag "bind_ip = 127.0.0.1" or to remove it completely, which defaults to accepting all network connections to the database.
[12:54:32] <kexmex> user has to fail tho
[12:54:44] <kexmex> also your firewall needs to be open
[12:54:56] <StephenLynx> MUH FORTY THOUSAND! D:
[12:55:12] <StephenLynx> yeah, nah, this is bullshit.
[12:55:28] <StephenLynx> one has to fuck up incredibly to be vulnerable.
[12:56:22] <StephenLynx> from what I understand all these people did was to scan for machines running mongo
[12:56:32] <StephenLynx> and say "yeah, they are all vulnerable lol"
[12:57:07] <fhain> StephenLynx: you're an idiot
[12:57:13] <StephenLynx> :^)
[12:57:17] <fhain> problem 1) MongoDB default settings
[12:57:32] <fhain> problem 2) idiots doing MongoDB that can hardly write their names
[12:57:40] <StephenLynx> these default settings allow for local connections only. what is the problem with them?
[12:57:58] <StephenLynx> 2- if you are going to judge a tech by its users no one would use PHP nor mysql.
[12:58:23] <StephenLynx> and jesus christ, javascript. who in their right mind would use javascript if it were to be judged by its developers?
[12:58:45] <StephenLynx> you are fucking retarded and ignored. brb lunch
[12:59:14] <fhain> as said: MongoDB is the database from idiots for idiots
[13:02:40] <fhain> and mongod is listening on *all* interfaces by default
[13:02:43] <fhain> here you go
[13:02:48] <fhain> insecure default settings
[13:02:55] <fhain> no default passwd
[13:03:00] <fhain> no default security policy
[13:03:07] <fhain> mongodb is garbage
[13:05:34] <andrer> fhain: Why are you using mongo if you think it is so bad?
[13:06:04] <andrer> Since you are so clever, one should think you were able to see this problem and avoid it.
[13:06:09] <fhain> i am healed
[13:06:25] <fhain> just interesting to see how many IT tards are still doing mongodb shit
[13:06:39] <andrer> But it seems like you have been just as stupid as these "idiots" as you call them.
[13:06:54] <kali> dont feed the troll, please
[13:07:01] <fhain> at least I know the difference between localhost and public ip
[13:07:27] <andrer> fhain: And still you made such an elementary mistake? Awkward!
[13:07:58] <fhain> nope, I learned in 1990 what an IP address is
[13:08:10] <fhain> mongodb work done by idiots
[13:08:13] <fhain> just look around
[13:08:27] <fhain> the IT scum of the IT workers can be found here
[13:09:08] <fhain> come on, kick me
[13:17:05] <aaearon> u mad?
[14:18:45] <fhain> MongoDB now valued $1.6 billion
[14:18:49] <fhain> and where is the value?
[14:18:54] <fhain> providing a shitty database?
[14:23:35] <fhain> MongoDB is the new Oracle
[14:26:12] <fhain> Only retarded idiots use MongoDB
[14:44:22] <jerev> When I run this find query through mongolab, I get results. But when I run the exact same, hardcoded, query through mongojs db.collection.find, I don't get any. -- Any idea what the reason could be? https://gist.github.com/anonymous/47cf8de9888dd77f70d6
[14:45:22] <cheeser> by mongojs you mean the shell?
[14:45:44] <jerev> https://github.com/mafintosh/mongojs
[14:45:57] <cheeser> oh. i've never used it.
[14:51:03] <StephenLynx> that is not a complete query
[14:51:07] <StephenLynx> it's a command
[14:51:11] <StephenLynx> or a *
[14:51:51] <StephenLynx> and I suggest using the official driver.
[14:52:02] <StephenLynx> npm install mongodb
[14:52:20] <jerev> How is it not complete?
[14:52:48] <StephenLynx> that is just a json object that could be used in a match block.
[14:52:57] <StephenLynx> It doesn't tell me everything you are doing.
[14:56:41] <jerev> I have an object like "publishedOn: {$date: ....}" I want to query on, fetching the results that match between 2 dates.
[14:59:55] <StephenLynx> first use your terminal mongo client
[15:00:04] <StephenLynx> and just run a db.collection.find()
[15:00:08] <StephenLynx> to know everything it contains
[15:00:20] <StephenLynx> then query what you want in the terminal
[15:00:26] <StephenLynx> then you know you have the data
[15:00:31] <StephenLynx> and the problem lies in your driver
[15:00:35] <StephenLynx> that is not official
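
One plausible cause here (an assumption, not confirmed in the log): `{$date: ...}` is extended JSON, which some web UIs accept but a driver query does not; in the shell or a driver, a date range is expressed with real Date objects. A sketch:

    // field name taken from the conversation; dates are placeholders
    db.collection.find({
        publishedOn: {$gte: ISODate("2015-01-01"), $lt: ISODate("2015-02-01")}
    });
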
[15:11:07] <fhain> mongodb: insecure by default - what a design feature
[15:31:53] <chanced> is there any way to get around the 100 record limit on 2d indexed searches? I'm attempting to sort by a different field but the $near filter is interfering with the sort
[15:34:21] <cheeser> 100 record limit?
[15:34:34] <chanced> yea, it surprised me too
[15:34:42] <cheeser> url?
[15:35:15] <chanced> http://docs.mongodb.org/manual/reference/operator/query/near/
[15:35:31] <chanced> "The result set contains at most 100 documents."
[15:36:48] <cheeser> maybe geonear instead? with a 2dsphere index?
[15:36:55] <Constg> Hello there, do you know if, to use read preference Secondary Preferred (PHP client), the secondary needs to have slaveOk = true?
[15:37:18] <cheeser> you set that on the connection
[15:37:56] <chanced> ugh
[15:38:17] <chanced> yea, thats what i was afraid of. thanks man
[15:41:10] <cheeser> that's just a guess. the geo stuff is a bit of a foggy area for me.
[15:43:03] <chanced> cheeser: understood; same goes for me
[15:43:14] <chanced> cant tell you how long i banged my head against the wall trying to figure out wtf records were missing
[16:30:25] <kazimir> Hi, can someone advise the best technique for master-master replication? I know that it's currently not supported in Mongo, however I believe there are some application level solutions to achieve this. Can someone maybe share experience/opinion? So far I was able to find https://pypi.python.org/pypi/MongoMultiMaster , but haven't tested yet. Also, are there any plans to implement multi-master replica soon? Thanks
[16:34:57] <cheeser> best technique advice is to upgrade to something modern and use replica sets
[16:35:12] <fhain> if you want multi-master then use a different DB
[16:35:27] <fhain> replica sets and modern=
[16:35:27] <fhain> ?
[16:36:04] <fhain> use Cassandra or Crate.io when you are looking for real distributed databases instead of the MongoDB toy
[16:37:11] <fhain> Replica sets are a very poor replacement in a distributed environment and an indicator that the mongodb devs can't do better just easy
[16:37:19] <fhain> the mongodb replication story is a flaw
[16:37:22] <GothAlice> That's one opinion.
[16:37:39] <fhain> yes
[16:37:54] <andrer> I don't think it is fair to say that fhain has opinions, that's an insult to all other opinions.
[16:37:59] <fhain> and the opinion of many people working with real distributed databases
[16:38:56] <kazimir> before we run into some sort of flame, I'm a little bit limited with the database I can use in the current solution (opensips). I can use Cassandra indeed, but I was gonna give Mongo a chance as I like the speed and everything. Plus I'm lazy to re-code everything ;).
[16:39:10] <StephenLynx> fhain is buttmad because some people scanned for mongo servers and they considered they were all vulnerable because you can fuck up and set them in a way they become vulnerable, despite the default settings making them secure :^) I just put him on ignore
[16:39:23] <andrer> fhain: I'm fairly certain that you were one of these people who exposed their MongoDB to the internets, got burnt by it, and you are now fired from whatever job you had, am I correct?
[16:39:36] <cheeser> please don't engage him
[16:39:42] <fhain> I set up a cluster with Crate.IO on 10 boxes with *real* replication across 10 nodes
[16:39:42] <GothAlice> Heh.
[16:39:51] <fhain> replicasets are bullshit
[16:40:00] <fhain> broken technology of the 90s
[16:40:11] <fhain> replica sets don't make a distributed database
[16:40:33] <GothAlice> One wonders why people come to a support channel merely to complain instead of writing Yet Another Misguided Blog Post™. Especially when it's actually PEBKAC in 99% of cases.
[16:40:38] <fhain> crap sells
[16:40:48] <StephenLynx> PEBKAC?
[16:40:56] <GothAlice> Problem Exists Between Keyboard And Chair
[16:40:58] <fhain> any idiots sell crap easily
[16:40:59] <StephenLynx> kek
[16:41:26] <StephenLynx> here in brazil we have an acronym for that too
[16:41:26] <StephenLynx> BIOS
[16:41:33] <StephenLynx> bicho burro operando o sistema
[16:41:41] <GothAlice> lol
[16:41:48] <StephenLynx> translates more or less to "dumb animal operating the system"
[16:42:16] <cheeser> finally!
[16:42:25] <kazimir> uh
[16:42:34] <kazimir> that was a bit weird
[16:42:38] <StephenLynx> what
[16:42:40] <kazimir> why so much hate from him?
[16:42:45] <GothAlice> Trolls be trollin'.
[16:42:51] <kazimir> :)
[16:42:51] <cheeser> he's a troll. i think he posted to the mailing list, too.
[16:43:17] <StephenLynx> I dont think hes just a troll.
[16:43:20] <GothAlice> https://blog.serverdensity.com/does-everyone-hate-mongodb/ < I love the Server Density folks for their MongoDB blog.
[16:43:22] <StephenLynx> hes way too passionate on his hate.
[16:43:32] <GothAlice> StephenLynx: Indeed. Hate is such a waste of energy.
[16:43:33] <StephenLynx> I think something happened on his job.
[16:43:40] <andrer> I'm fairly certain that he's lost his job because of it
[16:43:46] <StephenLynx> and he got #rekt :^)
[16:43:50] <andrer> Yeh
[16:44:02] <StephenLynx> maybe not, maybe someone chosen mongo over whatever he proposed.
[16:44:08] <GothAlice> Pro tip: needs analysis comes _before_ implementation.
[16:44:08] <StephenLynx> chose*
[16:44:11] <fhain> <StephenLynx> and he got #rekt :^) - YOU ARE A RUDE ASSHOLE
[16:44:32] <cheeser> trolls--
[16:44:35] <GothAlice> lol
[16:44:47] <kazimir> a little bit childish
[16:45:02] <andrer> Haha
[16:45:19] <andrer> Fairly certain that's the triggerpoint indeed :D
[16:45:24] <kazimir> anyway if someone has some good thoughts about the multi-master in mongo, let me know :)
[16:45:37] <GothAlice> kazimir: Alas, multi-master isn't a thing.
[16:46:02] <kazimir> GothAlice: can you elaborate, please?
[16:46:13] <GothAlice> Coordination in a setup like that is not easy, and would add even more latency than simply having one primary further away from your application.
[16:46:43] <kazimir> understand
[16:46:52] <fhain> don't listen to GothAlice, she is speaking bullshit
[16:47:02] <GothAlice> Annoyed enough to leave, not annoyed enough to stop watching the IRC logger.
[16:47:03] <GothAlice> lol
[16:47:05] <fhain> go for a reasonable distributed DB
[16:47:06] <GothAlice> So bitter.
[16:47:24] <cheeser> s/bitter/dumb/
[16:47:39] <StephenLynx> why hes leaving and joining?
[16:47:44] <fhain> the dumbass is you, GothAlice , incompetent MongoDB scum
[16:48:01] <cheeser> let's just pretend she doesn't exist and move on with our lives.
[16:49:48] <GothAlice> kazimir: What use-case do you have that is making you think you need multi-master?
[16:51:04] <kazimir> GothAlice: geo redundant SIP proxy and reading/writing to its own mongodb
[16:51:19] <kazimir> SIP proxies I should have said
[16:52:56] <GothAlice> kazimir: Considering a new primary will be elected (pretty quickly) in the event of a failure of the "current" primary, the question remains, why multiple masters? There should probably be a single canonical version (primary) which the individual regions update from, no?
[16:56:13] <fhain> "Why multiple masters" is the standard reply because these idiots can not deal with multi master, just easy
[16:58:23] <chanced> damnit, there has to be a way to get a 2d $near sorted.. i really dont want to change this freaking index
[16:58:24] <chanced> ugh
[16:58:33] <andrer> fhain: Are you looking for a new job? We have a opening here for people with experience with Cassandra
[17:00:04] <GothAlice> kazimir: I use MongoDB as distributed queue and storage for intra-host messaging on a 200+ VM cluster on one hand, for several dozen smaller applications on the other hand, and on the gripping hand also store 26+ TiB of structured (and BLOB) data using it. None of these uses has ever needed or even desired multi-master.
[17:02:00] <kazimir> GothAlice: thanks. I certainly need to rethink the design and see what I can achieve with the opensips mongo driver. I had some problems with connecting to a replicaset and always had to connect to the primary server directly. I might have made some mistakes in the opensips configuration, but will need to double check what is possible with that driver and what not.
[17:03:23] <chanced> if I pass in the sort to find rather than appending a call to sort on find, will it change the order of operations?
[17:04:04] <ehershey> no
[17:04:12] <GothAlice> chanced: It shouldn't. Until you actually iterate across the cursor, AFAIK, you're only "building up" the properties on it anyway.
[17:04:33] <chanced> yea, that's what I was thinking
[17:04:43] <chanced> FML, there has to be a way around this damn 100 record limit BS
[17:04:48] <chanced> (sorry, frustrated)
[17:04:55] <GothAlice> I know not of this 100 record limit…
[17:05:03] <cheeser> 2d indexes
[17:05:07] <GothAlice> Aaah.
[17:06:36] <chanced> cheeser: 2dsphere didnt help either :|
[17:06:47] <cheeser> chanced: great news! the upcoming 3.0 release apparently doesn't have that limitation any more
[17:06:55] <chanced> ugh
[17:07:04] <cheeser> you know you want it.
[17:07:14] <chanced> I know i want to punch it in the crotch
[17:08:11] <chanced> i wouldnt even care about the 100 record limit if it just let me sort beforehand
[17:09:08] <RoyK> hi all. is the open source mongodb intentionally crippled with SSL disabled?
[17:09:31] <cheeser> i think the newer downloads have it.
[17:09:46] <cheeser> it was a linking issue across linux distros, iirc
[17:09:46] <RoyK> cheeser: thanks - will check
[17:10:06] <chanced> i'm about to say screw it and just go to elasticsearch
[17:10:06] <GothAlice> RoyK: I use Gentoo, which compiles it from source, eliminating that particular issue.
[17:10:17] <cheeser> gentoo++
[17:10:19] <StephenLynx> install gentoo :v
[17:10:53] <StephenLynx> I know that joke came from 4chan, but I wonder exactly how, since the FSF doesn't condone it and RMS doesn't use it.
[17:13:12] <GothAlice> StephenLynx: On that cluster I mentioned earlier a kernel compile from depclean takes < 50 seconds. :3 I love integrated distcc support.
[17:20:28] <RoyK> anyone that knows centos7 repos with mongodb supporting ssl?
[17:20:42] <RoyK> (no, I didn't choose that this thing should be running centos)
[17:21:56] <thejav> a few days ago i experienced some outage related to mongodb, where it looked like an indexed query, which is run hundreds of times per second in a collection with millions of documents, was suddenly not using an index. I have the profile records which confirm that it was doing a full scan.
[17:23:13] <chanced> 2dSphere + $near worked somehow
[17:23:14] <chanced> ugh
[17:23:26] <thejav> This started happening about a day after I created a new index on this collection. When mongo got in this state, it was thrashing hard enough that I had to restart it. About a day later it happened again, so as a stab in the dark I dropped the new index, and the problem hasn't happened since.
[17:23:41] <GothAlice> RoyK: http://www.dagolden.com/index.php/1711/how-to-build-mongodb-with-ssl-for-linux/ may be old. AFAIK SSL is only compiled into the Enterprise official binaries.
[17:24:07] <thejav> I'm trying to figure out why this happened. Has anyone seen anything like this before?
[17:25:33] <GothAlice> thejav: MongoDB may ignore an index for a variety of reasons. Evaluating if the index would be helpful or not may have taken too long, it may have (mistakenly) assumed using the index on that query would have actually taken longer, etc. On critical code paths I always hint indexes to try to reduce the likelihood of index misuse.
[17:26:39] <thejav> GothAlice: thanks. can you point to any documentation around why an index might suddenly get ignored?
[17:27:52] <GothAlice> thejav: http://docs.mongodb.org/manual/core/query-plans/#read-operations-query-optimization
[17:28:14] <GothAlice> RoyK: Thus recompiling. So, $0.
[17:28:15] <GothAlice> ;)
[17:28:44] <RoyK> GothAlice: yeah, perhaps setting up a repo too ...
[17:31:52] <thejav> GothAlice: thanks for the link. Currently I feel this collection is in a state where adding an index will break things again. Is it the case that for sufficiently large collections, index hints are required to avoid this weird behaviour?
[17:32:46] <GothAlice> thejav: I found it was more an issue of upgrading MongoDB versions changing the planner enough to switch or stop using indexes. The link I gave does mention situations where cached plans are thrown out, however, and things can change when that happens.
[17:33:32] <GothAlice> Adding an index in the foreground would certainly interrupt operation, however can you not create indexes in the background? (This requires a lot more RAM and a huge amount of time on large collections, but it won't lock the collection.)
[17:35:51] <thejav> GothAlice: I'm definitely creating indexes in the background. When I did this last, the issue happened about a day later, and then again about a day after restarting mongo. What I meant was, if I add any indexes in the future, I feel like this will start happening again.
[17:38:56] <GothAlice> Index creation certainly invalidates existing query plans.
[17:39:11] <GothAlice> Hinting is the single best way to ensure reliable use.
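
A sketch of hinting (query shape and index are assumptions):

    // force the planner onto a known-good index instead of letting
    // plan re-evaluation pick a bad one
    db.coll.find({status: "active"}).hint({status: 1, created: -1});
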
[17:39:59] <chanced> lol, i just saw this: "$near always returns the documents sorted by distance. Any other sort order requires to sort the documents in memory, which can be inefficient. To return results in a different sort order, use the $geoWithin operator and the sort() method. "
[17:40:04] <chanced> god i wish I had seen that earlier
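
A sketch of that workaround (field names, coordinates, and radius are assumptions; $centerSphere takes radians, hence the division by the earth's radius in miles that chanced mentioned earlier):

    db.places.find({
        loc: {$geoWithin: {$centerSphere: [[-81.0, 34.1], 100 / 3959]}}  // ~100 miles
    }).sort({name: 1});  // sorts normally, with no 100-document cap
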
[17:46:07] <thejav> GothAlice: this collection is particularly active (thousands of new records per day), so as the page you linked to says, our query plans are getting re-evaluated all the time, previously without issue. It seems a bit silly to need to hint all queries going forward.
[18:17:44] <Hypfer> hi, I'd like to write something that constantly monitors my flat's power usage, temperature etc. Basically at least 5 sensors posting data every 10 (subject to change) seconds
[18:17:52] <Hypfer> is it a good idea to use mongo as the data storage?
[18:18:56] <Hypfer> how quickly will the disk usage grow? will it use lots of cpu time/disk space? any way to compress old values to save space?
[18:21:05] <cheeser> growth rates can vary based on your document sizes but you can lower that volatility by enabling power of 2 sizing. http://docs.mongodb.org/manual/reference/command/collMod/#usePowerOf2Sizes
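
The command cheeser is pointing at, as a sketch (collection name assumed; this setting applies to the pre-WiredTiger mmapv1 storage of the era):

    db.runCommand({collMod: "sensordata", usePowerOf2Sizes: true});
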
[18:22:24] <thejav> GothAlice: for the record, I'm pretty sure this was happening: https://jira.mongodb.org/browse/SERVER-14961 The new index I created was on a field that the query sorted by, and occasionally the query plan chose that (very wrong) query plan, which resulted in a (nearly) full scan.
[18:22:28] <Hypfer> so, cheeser would you say that mongo is suitable for this kind of application or would I be better off looking at different types of databases?
[18:23:23] <cheeser> for sensor data? sure.
[18:24:45] <GothAlice> Hypfer: There are a number of techniques available to reduce overhead and/or efficiently query. Because MongoDB stores the keys with the values in every single document, I use single-character keys to effectively eliminate that overhead. (Still takes 7 bytes per field, but eh, can't make it more efficient than that.) At work we pre-aggregate our statistics.
[18:25:27] <GothAlice> http://www.devsmash.com/blog/mongodb-ad-hoc-analytics-aggregation-framework is an excellent article on the subject of efficiently storing and querying sensor buoy data.
[18:25:36] <GothAlice> (It includes benchmarks, too.)
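
A minimal sketch of the pre-aggregation idea from that article (the document layout and all names here are assumptions): keep one document per sensor per hour with a field per minute, updated in place:

    var sensorId = "power", reading = 21.5;                      // placeholders
    var ts = new Date();
    var update = {$set: {}};
    update.$set["m." + ts.getUTCMinutes()] = reading;            // e.g. {"m.42": 21.5}
    db.stats.update(
        {_id: sensorId + ":" + ts.toISOString().slice(0, 13)},  // e.g. "power:2015-02-10T18"
        update,
        {upsert: true}                                           // first reading creates the doc
    );
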
[18:26:39] <Hypfer> GothAlice: is there a list with tweaks like those (or would you mind quickly writing one)?
[18:27:14] <GothAlice> Hypfer: For sensor data, the link I gave should cover most aspects other than minimizing key length.
[18:27:27] <Hypfer> oh, nice
[18:27:32] <Hypfer> didn't see that one
[18:27:45] <GothAlice> The only other major item to be aware of is to not nest lists. $elemMatch can only "filter" one list per document.
[18:28:00] <GothAlice> (So more nesting than that becomes nearly unqueryable.)
[18:28:22] <Hypfer> yes, the nesting is a serious problem at work :-)
[18:28:48] <GothAlice> If it is, I have another link for your coworkers. ;)
[18:28:55] <GothAlice> http://www.javaworld.com/article/2088406/enterprise-java/how-to-screw-up-your-mongodb-schema-design.html
[18:28:57] <GothAlice> :)
[18:30:24] <GothAlice> thejav: The bug you linked was fixed in 2.6.5… running an older version?
[18:32:24] <StephenLynx> yeah, it says fixed on the very link thejav provided
[18:34:29] <cheeser> hrm. DeepSeaDiving from the mailing list looks familiar. :)
[18:42:45] <thejav> cheeser: who me? ;)
[18:43:22] <jthomas_> Anyone able to help me with a MongoDB 2.4 setup on Debian 7? I'm trying to set up multiple instances for testing purposes only, and I have the server started (mongod -f /etc/mongodb/clientname --fork) but I cannot for the life of me find info on how to connect with the 'localhost exception' that seems to be thrown around like it's obvious
[18:43:43] <cheeser> thejav: :D
[18:44:07] <cheeser> jthomas_: you can't connect at all?
[18:44:20] <jthomas_> I can't find INFO on HOW to connect!!!
[18:44:21] <thejav> the odd part is: this did happen on 2.6.4, and by chance, when I had to restart mongo, it was upgraded to 2.6.6, and I saw the issue *again* on that version. might be able to reproduce now with this info though.
[18:44:29] <cheeser> jthomas_: mongo
[18:44:31] <cheeser> that's it
[18:44:37] <jthomas_> define the port or the instance?
[18:44:47] <jthomas_> that's great to know, wish it was in the docs
[18:45:04] <cheeser> unless you pass those to mongod, mongo doesnt' need them
[18:45:39] <GothAlice> jthomas_: Like it is over here? http://docs.mongodb.org/manual/tutorial/getting-started-with-the-mongo-shell/
[18:45:42] <jthomas_> i'm expecting a multi-instance setup, so they're defined in the conf file and are being passed to mongod
[18:46:10] <GothAlice> And elaborated on here: http://docs.mongodb.org/manual/reference/mongo-shell/
[18:46:25] <jthomas_> GothAlice thanks, that's for client stuff which makes sense, but there is nothing like this in the Admin section which is needed before any client stuff should even be mentioned. Thanks for the link!
[18:46:41] <GothAlice> jthomas_: Are your "instances" virtual machines, or merely a cloud of mongod processes running under one kernel?
[18:47:00] <jthomas_> multiple processes on one system, one kernel
[18:47:08] <GothAlice> Cool, then yes, you'll need to specify a port.
[18:47:39] <jthomas_> so maybe i'm misunderstanding a lot more; the link GothAlice sent shows how to select a DB but my understanding was that each DB had its own conf file
[18:47:45] <GothAlice> If you're faking a replica set with sharding, then you actually won't need to mess with port numbers. Your query router should Just Work™.
[18:47:53] <cheeser> each db does not have it's own config file
[18:47:57] <GothAlice> jthomas_: No, MongoDB itself has one config.
[18:48:21] <GothAlice> You can also run multiple "databases" under one mongod process.
[18:48:22] <cheeser> each mongod has its own config file. one mongod can serve many different DBs
[18:48:30] <GothAlice> cheeser: :P
[18:48:43] <jthomas_> the dev who requested this said he expected each DB to listen on a dedicated port, is that not right?
[18:48:48] <GothAlice> No.
[18:48:50] <GothAlice> Not even close.
[18:49:10] <jthomas_> (he's coming from a Sharepoint machine to this Linux machine fwiw, so maybe in his instance it is needed that way?)
[18:49:13] <cheeser> you *could* do a DB per mongod instance but that'd be terribly wasteful
[18:49:29] <GothAlice> Not to mention terrible for performance as each process will fight the others for RAM.
[18:49:36] <cheeser> yep
[18:49:41] <jthomas_> that's why he wants it not on his Windows setup, because it's wasteful there
[18:49:48] <GothAlice> ¬_¬ Well. Windows.
[18:50:02] <jthomas_> yeah i get that
[18:50:04] <cheeser> (╯°□°)╯︵ ┻━┻
[18:50:31] <jthomas_> but everything I'm reading on the MongoDB admin stuff isn't exactly what i'd call clear, so I'm here looking for better info
[18:50:52] <StephenLynx> you really should update
[18:51:01] <jthomas_> past Deb 7?
[18:51:05] <jthomas_> unlikely
[18:51:13] <jthomas_> until Deb 8 is released
[18:51:24] <StephenLynx> no
[18:51:26] <StephenLynx> mongo
[18:51:30] <GothAlice> Step 1: http://docs.mongodb.org/manual/administration/install-on-linux/ Step 2: http://docs.mongodb.org/manual/tutorial/getting-started/ Step 3: http://docs.mongodb.org/manual/tutorial/ (everything else)
[18:51:44] <StephenLynx> the current version is 2.6.7
[18:51:50] <jthomas_> yes, we're sticking to the official repos i believe
[18:52:10] <GothAlice> Who's official repo?
[18:52:13] <GothAlice> Debian, or MongoDB?
[18:52:17] <jthomas_> Debian 7
[18:52:23] <StephenLynx> so its not official
[18:52:27] <jthomas_> ok.
[18:52:29] <StephenLynx> it is 3rd party from mongo.
[18:52:32] <jthomas_> it's official for our distro.
[18:52:36] <GothAlice> http://docs.mongodb.org/manual/tutorial/install-mongodb-on-debian/#packages < MongoDB runs their own "official" Debian repo for MongoDB packages.
[18:52:38] <StephenLynx> but not for mongo.
[18:52:42] <jthomas_> ok.
[18:52:46] <StephenLynx> they are not maintained by mongo.
[18:53:13] <StephenLynx> it is not different than if I maintained them or crazy billy joe bob down the street.
[18:53:13] <jthomas_> regardless, we need to provide the packages that would be expected from a server that comes with default versions and no ability to upgrade.
[18:53:35] <jthomas_> er, ability to pull from non-distro repos.
[18:53:40] <StephenLynx> why?
[18:53:41] <cheeser> sounds terrible
[18:53:58] <StephenLynx> can't you just add the official repositories?
[18:54:05] <StephenLynx> they are as good as any other.
[18:54:10] <GothAlice> That thinking forced my Python library development to stall at 2.5 because of RedHat refusing to upgrade for nearly 8 years.
[18:54:15] <cheeser> in this case, better
[18:54:19] <jthomas_> Ok well we can discuss that all day, in the mean time I would really like to get this working to log in and set up DBs etc since my understanding is way off apparently
[18:54:48] <GothAlice> jthomas_: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-debian/ <- official instructions say add MongoDB's official repo to your set to install.
[18:54:55] <jthomas_> Thank you.
[18:57:54] <jthomas_> ok so, i no longer need to set up configs for each client, I'll just run the default and start it with the init scripts, yes? So how do I stop those that I had forked already if the docs say to not use 'kill' on it?
[18:58:07] <GothAlice> Use kill on it.
[18:58:29] <GothAlice> Since this is first time setup, you probably don't have any data stored that you care about. Nuking it is a-ok in this case.
[18:59:09] <GothAlice> If you care about database segregation, turn on the directory-per-db option, by the way. That'll allow you to mount different partitions and other nifty things on a per-database basis.
[19:00:05] <jthomas_> ok so in the future, how Would I do that without Kill ?
[19:00:11] <GothAlice> With kill.
[19:00:17] <GothAlice> Kill isn't a bad thing like it sounds.
[19:00:18] <jthomas_> :-/
[19:00:24] <GothAlice> Kill is a method by which you send signals to processes.
[19:00:31] <jthomas_> yeah i get that
[19:00:54] <GothAlice> kill -HUP says "hangup" to a process, which typically signals that process to reload its configuration files without actually stopping. (I.e. that's how you "reload" your Apache, SSH, or Nginx config.)
[19:01:01] <jthomas_> yewp i get it
[19:01:37] <GothAlice> kill -9 = do not pass go, do not collect $200. The process gets immediately killed by the kernel. A normal "kill" (sending a terminate signal) gives the process time to gracefully shut down, making it perfectly safe to kill your DB processes.
[19:01:38] <jthomas_> is there a tutorial with the basic walkthrough of setting this up and adding DBs and users? the docs seem to link all over to themselves without the answers that i need
[19:01:59] <GothAlice> http://docs.mongodb.org/manual/tutorial/enable-authentication/
[19:02:08] <GothAlice> jthomas_: Really, browse through the tutorials.
[19:02:09] <jthomas_> yeah i'm there.
[19:02:23] <jthomas_> I have been all morning, that's why I'm now here.
[19:02:33] <GothAlice> There's no procedure to add a db, btw. Same as there's no procedure to create a collection: use it, and it will exist.
[19:03:14] <jthomas_> gak
[19:03:36] <cheeser> well, there is one to create collections but usually that's only done when creating capped collections
[19:03:47] <jthomas_> so how to add a user and perms to a DB if it doesn't exist?
[19:04:02] <GothAlice> jthomas_: By switching to that database and creating a user, the database is created.
[19:04:13] <jthomas_> why not just tell me that??
[19:04:26] <GothAlice> jthomas_: "Use it, and it will exist." Thought that was pretty clear.
[19:04:51] <jthomas_> "use it" = user, in my head, not creating roles as an administrator
[19:05:00] <jthomas_> i need to walk away for a bit
[19:05:12] <jthomas_> thanks, i'll be back once my brain digests some of this
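
The two steps GothAlice describes, as a 2.6 shell sketch (the database name appears later in the log; the user and role are assumptions):

    // nothing exists yet; switching to a database does not create it
    use raycorp
    // creating a user (or any data) in it does
    db.createUser({
        user: "appuser",    // assumed name
        pwd: "secret",
        roles: [{role: "readWrite", db: "raycorp"}]
    });
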
[19:12:24] <StephenLynx> or just use mongo repositories.
[19:12:26] <StephenLynx> problem solved.
[19:18:16] <jthomas_> how would using those repos solve my inability to navigate the docs?
[19:18:56] <StephenLynx> why do you need to navigate the docs that badly?
[19:19:08] <StephenLynx> your problem was trying to use an ancient version
[19:19:17] <jthomas_> so i can understand how to set up users, dbs, perms
[19:19:35] <cheeser> you want 2.6 for that. it got much better with 2.6
[19:19:36] <jthomas_> no, that seemed to be everyone else's issue, that I was using 2.4. I had no issues there.
[19:19:44] <StephenLynx> oh, you do.
[19:19:48] <jthomas_> lol
[19:20:04] <GothAlice> New deployments should be on 2.6, not 2.4.
[19:20:14] <jthomas_> ok so when i use 2.6, the users and perms auto-exist?
[19:20:56] <obeardly> as I understand it, 2.4 won't support MMS, so 2.6 is the only way to go
[19:21:00] <GothAlice> No, but they're actually useful. ;)
[19:21:15] <jthomas_> the docs? not that i could tell
[19:23:13] <jthomas_> ok so if I go with 2.6, am I supposed to be following the Ubuntu steps?
[19:23:31] <jthomas_> http://docs.mongodb.org/manual/tutorial/install-mongodb-on-debian/#packages seems to focus on Ubuntu
[19:23:44] <cheeser> mms supports 2.4 just fine...
[19:25:47] <cheeser> https://docs.mms.mongodb.com/tutorial/enable-backup-for-sharded-cluster/
[19:26:21] <cheeser> replica set min version is 2.2.0 for backup
[19:27:19] <obeardly> hmmm....interesting, I was told directly by MongoDB when they were working with us to bring up our new environment, that MMS needed 2.6 or newer as a backend, maybe I misunderstood
[19:27:36] <GothAlice> MMS is several things. Provisioning, monitoring, backup…
[19:27:43] <cheeser> obeardly: you probably did
[19:56:02] <jthomas_> I've set up a database "use raycorp" and then set up perms for a user; then I exited Mongo and enabled auth=true in the config; I restarted Mongo; I log in remotely without any auth, and I can use that DB. What do I need to lock down the DB if the Auth doesn't cut it?
[19:56:45] <jthomas_> oh i just can't do anything, but i can still select it.
[19:56:46] <jthomas_> ok
[20:36:53] <elux> hello.. just wondering if 3.0.0 has reached its stable release? i saw an announcement but it says it's generally available in March
[21:45:07] <rk6> HI .. I wanted to build mongodb for 32-bit ppc.. does anyone know if there would be any show stoppers?
[21:49:12] <Derick> there is code in the source that will absolutely not work
[21:49:21] <Derick> it's specific x86 assembly
[21:56:46] <ianp> really, why does it have machine code?
[21:57:09] <ianp> is it in some of the hotspots?
[21:57:13] <ianp> just curious
[21:57:56] <tylerdmace> http://www.df.lth.se/~pi/mongo_big_endian.html
[21:58:37] <tylerdmace> Looks like some old efforts to get it supported
[21:58:48] <tylerdmace> but stagnant projects nowadays
[22:02:42] <jthomas_> I seem to be locked out of my Mongo 2.6 setup; I've set "db.createUser" against ""userAdminAnyDatabase", db: "admin"" but now I can't seem to log in with that user
[22:03:10] <jthomas_> can anyone advise me on how to log in?
[22:03:54] <jthomas_> other than turning off Auth
[22:07:10] <jthomas_> "mongo admin -uusername --password" got it.
[22:14:22] <rk6> Derick, ianp: thanks.. I was actually trying to install graylog2 which depends on java and mongodb. So I substituted openjdk for oracle java and was thinking of building mongodb, but from a past attempt at porting Java I don't think I want to venture out on that exercise
[23:06:04] <tarwich> I'd like customers to be able to upload spreadsheets (11,000 rows) into my database and use them with the website. The spreadsheets don't share any data. What's the best strategy? New collection for each upload?
[23:06:28] <tarwich> If mongodb has support for something like this I'm happy to RTFM, but I just need a little direction.
[23:07:07] <tylerdmace> how many spreadsheets can each user upload
[23:07:13] <tylerdmace> a single sheet per user?
[23:07:18] <tylerdmace> or many
[23:09:45] <tarwich> Many
[23:10:23] <tarwich> I have ~25 users, with ~100 sheets ea.
[23:14:03] <tylerdmace> i'd have a collection for sheets then
[23:14:15] <tylerdmace> with ids that refer to the user they belong to
[23:14:54] <tarwich> Index on user and perhaps list name? So then each upload adds thousands of rows to the meta-sheets collection?
[23:26:55] <hicker> Is it possible to do something like this? $not: { $and: [ { field: 'String' }, {'field': { $not: /String/ } } ] }
[23:28:39] <hicker> I want to exclude results where (A) a field equals something, and (B) another field does not equal something
[23:43:09] <joannac> hicker: presumably for different fields ?
[23:43:17] <hicker> Yes :-)
[23:43:30] <joannac> why do you need the $and then?
[23:44:26] <hicker> Because I want it to exclude when field1 == 'foo' and field2 !== 'bar'
[23:44:48] <hicker> Both conditions should be met for exclusion
[23:45:03] <joannac> that doesn't explain why the $and is necessary
[23:45:13] <joannac> multiple predicates come with an implicit $and
[23:46:38] <hicker> Ohk, that makes sense. So it'd be something like this? $not: [ { field: 'String' }, {'field': { $not: /String/ } } ]
[23:51:46] <joannac> hicker: no. go look up the actual operators
[23:51:50] <joannac> $not can't be used that way
[23:52:08] <hicker> I know :-( That's why I'm confused
[23:52:26] <joannac> also your 2 fields are different. stop using the same identifier for them
[23:52:33] <joannac> hicker: okay, so explain what you're confused about
[23:53:31] <hicker> If $not can't take multiple predicates, I'm not sure what to use instead
[23:53:44] <joannac> rewrite it
[23:54:01] <joannac> so you don't need the toplevel $not
[23:55:15] <n3b-> hello
[23:56:17] <hicker> $and: [ { field1: 'String' }, { field2: { $not: /String/ } } ] doesn't meet the logical requirement. I'm not sure what else to use.
[23:58:16] <joannac> hicker: okay, let's walk through this slowly
[23:58:40] <hicker> Ok :-)
[23:58:51] <joannac> you want $not ( field1 = "String1" AND field2 != "String2")
[23:59:05] <hicker> Yes
[23:59:25] <n3b-> I've a quick and easy question that I can't find the answer to. I'm using mongodb to store logs. Not that much (2/second). I'm doing some queries to show logs (dates and types). I was wondering when indexing is useful with data like this (adding entries every second)? I'm a newb with mongo.
[23:59:27] <joannac> if field1 = "String1" should that be returned?
[23:59:41] <joannac> n3b-: when your query takes too long
[23:59:53] <joannac> where "too long" is whatever you decide it is