PMXBOT Log file Viewer


#mongodb logs for Tuesday the 13th of May, 2014

[00:07:56] <NaN> da fuk!! I only restarted the machine and now mongod doesn't wanna run
[00:13:22] <joannac> did you shut down mongod cleanly?
[00:45:46] <thewiz> hello
[00:46:06] <thewiz> i am wondering how I can find a user in my users collection of my database
[00:47:22] <thewiz> a user entry looks something like this http://pastebin.com/Af0FYdte
[00:48:11] <thewiz> what commands in mongo would find a user based on his username, and what commands would update his role?
[00:59:24] <joannac> db.yourcollection.find({ "username" : "Auralix" })
[00:59:56] <joannac> db.yourcollection.update({ "username" : "Auralix" }, {$addToSet: {roles: "mynewrole"}})
[01:00:16] <joannac> http://docs.mongodb.org/manual/reference/method/db.collection.find/
[01:00:35] <joannac> http://docs.mongodb.org/manual/reference/method/db.collection.update/
[01:04:40] <Skogmo> Hello
[01:05:19] <Reflow> i have a db
[01:05:25] <Reflow> that contains tons of information about people
[01:05:29] <Reflow> like 7 million people
[01:05:40] <Reflow> but its taking too long to do a search query
[01:05:59] <Reflow> any way to optimize this
[01:07:29] <Skogmo> I'm looking to get started with mongodb but don't really wanna spend a lot of time on the environment itself. At the moment I'm planning on just trying it out with PHP 5.4+. Does anyone have a tip for some online hosting where you can just rent an environment like that for low load/development?
[01:07:52] <thewiz> joannac: thanks
[01:08:47] <thewiz> joannac: how could i replace a role instead of add one? in case someone has a role of 'mod' and I want to demod them with a slight rewrite of the same command?
[01:09:04] <thewiz> like replace the mod array with 'user' or something
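A sketch of the role swap thewiz is asking about, reusing joannac's hypothetical collection and username from above ($set replaces the whole array, $pull removes one element; your real collection name will differ):

```javascript
// Replace the whole roles array outright (demotes everything to "user"):
const demote = { $set: { roles: ["user"] } };
// ...or remove just the "mod" role and leave the rest intact:
const dropMod = { $pull: { roles: "mod" } };

// In the mongo shell, with joannac's placeholders:
//   db.yourcollection.update({ "username": "Auralix" }, demote)
//   db.yourcollection.update({ "username": "Auralix" }, dropMod)
console.log(JSON.stringify(demote));
console.log(JSON.stringify(dropMod));
```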
[02:07:11] <thewiz> if anyone has the time I would really appreciate http://stackoverflow.com/questions/23621712/mongodb-javascript-check-if-a-mongodb-sub-document-is-x
[03:24:16] <in_deep_thought> is there an operation for findbyIdarray? like I can give an array of id's as input and it will return an array of items from the database? or do I have to use a for loop and do findbyid on each one?
[03:24:37] <in_deep_thought> I am using mongoose
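A single query with $in avoids the per-id loop in_deep_thought is worried about; a minimal sketch (the "Item" model name and the id strings are made up):

```javascript
// Build one query matching any _id in the array: one round trip instead of N.
const ids = ["id-one", "id-two", "id-three"]; // placeholder ids
const query = { _id: { $in: ids } };

// With Mongoose (illustrative):
//   Item.find(query, function (err, items) { /* items is an array */ });
console.log(JSON.stringify(query));
```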
[04:39:26] <thewiz> if anyone has the time I would really appreciate http://stackoverflow.com/questions/23622897/mongodb-javascript-mongoose-find-subdocument-and-modify-its-array
[08:01:13] <sweb> after addUser, authentication doesn't work ... i can connect without auth
[08:01:15] <sweb> http://docs.mongodb.org/manual/tutorial/enable-authentication/
[08:01:27] <sweb> createUser doesn't work for 2.6, so i use addUser
[08:18:04] <rspijker> sweb: sure you have 2.6?
[08:18:25] <rspijker> and sure you are doing createUser correctly?
[08:18:30] <sweb> rspijker: problem solved using the GUI robomongo, but it seems the doc isn't valid
[08:22:51] <rspijker> mmmk, well, as long as it's resolved
[08:31:35] <pietia> i have a collection "devices_list" (~500 objects) - i would like to remove them and replace with a new list. what would be the mongodb way of doing it? would it be better to store that list at the object level (nested collection) and use an update operation, which is atomic?
[09:37:19] <Nomikos> Hello. I'm setting a collection to capped at 10M bytes, but when I get the stats() it says max: NumberLong("9223372036854775807")
[09:37:46] <Nomikos> is this a known issue?
[09:38:20] <rspijker> how are you setting it to capped exactly?
[09:38:45] <Nomikos> rs0:PRIMARY> db.runCommand({convertToCapped: "debug_log", size: 10000000})
[09:38:53] <Nomikos> ^-- from the shell
[09:39:10] <Nomikos> there were 2 or 3 docs in it at the time
[09:40:08] <rspijker> I don’t see the issue…
[09:40:20] <rspijker> you’re not specifying an additional max number of documents
[09:40:54] <Nomikos> I was going by http://docs.mongodb.org/manual/reference/command/convertToCapped/
[09:41:22] <rspijker> you can specify max along side of size
[09:41:29] <rspijker> to specify a maximum number of docs
[09:41:34] <rspijker> you’re not required to though
[09:41:42] <rspijker> it will still honor the size you specified
[09:42:00] <rspijker> http://docs.mongodb.org/manual/core/capped-collections/
[09:42:04] <Nomikos> right. why does it show the max size as 80M+ TB in stats()
[09:43:28] <rspijker> no clue, might just be the largest possible long?
[09:43:42] <rspijker> seeing as how it’s not specified it might just choose the largest possible number
[09:44:21] <Nomikos> aaah.. that max refers to the nr of documents? sorry, I misunderstood you earlier
[09:44:56] <rspijker> it does, and that number is in fact the largest value a NumberLong will hold
[09:45:43] <rspijker> 2^63-1
[09:48:29] <Nomikos> Thanks :-)
[09:49:54] <rspijker> np
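Summing up the exchange above: convertToCapped only takes a byte size, and the stats() max of 9223372036854775807 is simply the default (unlimited) document-count cap, i.e. the largest NumberLong. Creating the capped collection directly lets you set both limits; the max value below is illustrative:

```javascript
// The command Nomikos ran (caps the collection at 10M bytes):
const convert = { convertToCapped: "debug_log", size: 10000000 };
// Creating capped from scratch also allows a document-count cap ("max"):
const createOpts = { capped: true, size: 10000000, max: 5000 };

// In the shell:
//   db.runCommand(convert)
//   db.createCollection("debug_log", createOpts)
console.log(JSON.stringify(createOpts));
```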
[10:53:29] <louisrr> how do I translate this geospatial data into php?
[10:53:40] <Derick> which data?
[10:53:44] <louisrr> "loc": { "type": "Point", "coordinates": [ 20, 20 ] }
[10:54:05] <louisrr> that is the "loc" field in my collection
[10:54:27] <Derick> $collection->insert( [ 'loc' => [ 'type' => 'Point', 'coordinates' => [ 20, 20 ] ] ] );
[10:54:57] <Derick> make sure to put longitude first, and then latitude
[10:55:18] <louisrr> this is what I came up with
[10:55:21] <louisrr> 'loc' => array('type' => array('point' => array('coordinates' => array(20,20))))
[10:55:36] <Derick> that works too, I prefer [ ] over array( )
[10:55:41] <louisrr> ooh the square braces will work?
[10:55:46] <Derick> yes
[10:55:50] <louisrr> yours is much cleaner
[10:55:51] <Derick> but your arrays are wrong
[10:55:57] <Derick> type and coordinates are on the same level
[10:55:58] <Derick> not nested
[10:56:11] <Derick> [ ] was introduced in PHP 5.3
[10:57:24] <Derick> sorry, 5.4
[10:57:57] <louisrr> thanks derk
[10:58:01] <louisrr> derick
[10:58:31] <louisrr> so if I use the [] to store the spatial field I need to store the whole collection with [] correct?
[10:58:48] <Derick> uh?
[10:58:56] <Derick> sorry, I don't understand
[11:01:55] <louisrr> I setup the query in php with array() because that's what I got working before
[11:02:20] <louisrr> then I put the "loc" field in there with the [] like you demonstrated like so
[11:02:21] <louisrr> http://pastebin.com/qfxFAQ2d
[11:02:37] <louisrr> I was wondering if that would even work
[11:03:05] <louisrr> basically, should I convert the whole query over to [] and drop the array?
[11:03:06] <Derick> yes, you can mix them
[11:03:18] <louisrr> is that querry correct?
[11:03:20] <Derick> why do you do this:
[11:03:21] <Derick> 'name' => '{$name}',
[11:03:25] <Derick> you should just do this:
[11:03:29] <Derick> 'name' => $name,
[11:03:36] <Derick> no need to convert it to a string first
[11:03:41] <Derick> if it *has* to be a string, cast it:
[11:03:45] <Derick> 'name' => (string) $name,
[11:05:46] <louisrr> ok yea that's better
[11:05:52] <louisrr> thanks
[11:08:33] <louisrr> fixed it
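The shape Derick corrected, shown here in JavaScript for clarity (in PHP the same nesting applies whether you use [] or array()): type and coordinates sit side by side inside loc, not nested inside each other.

```javascript
// Correct GeoJSON point: "type" and "coordinates" on the same level.
const doc = {
  loc: {
    type: "Point",
    coordinates: [20, 20] // [longitude, latitude] (longitude first!)
  }
};
// Wrong (the first attempt): loc.type.point.coordinates, each nested one deeper.
console.log(JSON.stringify(doc));
```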
[11:11:55] <richthegeek> is it possible to have a user with read/write on all databases? I can't seem to get the "readWriteAnyDatabase" role to work
[11:12:24] <rspijker> richthegeek: what version?
[11:12:34] <rspijker> Generally, that role should work fine on 2.4.x
[11:12:43] <richthegeek> 2.6.1
[11:14:04] <rspijker> haven’t actually worked with that, should still work afaict though
[11:15:51] <richthegeek> this is like the fourth time i've tried setting up auth over the last year, something about it just doesn't click for me
[11:15:56] <richthegeek> or the docs are godawful...
[11:16:00] <richthegeek> anyway, lunch
[11:16:10] <rspijker> it’s fairly complicated….
[11:16:26] <rspijker> might help if you tell us what you are trying to do that isn't working
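For what richthegeek is attempting, a 2.6-style sketch: the user has to be created on the admin database for readWriteAnyDatabase to apply. The user name and password below are made up:

```javascript
// Hypothetical user document for: db.getSiblingDB("admin").createUser(userDoc)
const userDoc = {
  user: "appUser",   // assumption: any name you like
  pwd: "changeme",   // assumption: pick a real password
  roles: [{ role: "readWriteAnyDatabase", db: "admin" }]
};
console.log(JSON.stringify(userDoc.roles));
```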
[11:22:05] <CJ_> Is it still correct that mongo does not support find and replace? The only mapreduce possible is aggregation?
[11:22:39] <rspijker> eh.. what?
[11:23:25] <CJ_> I want to do a find and replace in mongo. I originally thought I could write a mapreduce job to do this, but it does not seem possible.
[11:23:44] <CJ_> I would be reading entries from one collection in order to modify another collection.
[11:23:47] <rspijker> when you say “find and replace” what do you want to do exactly?
[11:24:23] <rspijker> yeah, that sounds like you would need javascript to do that
[11:24:32] <CJ_> Understood.
[11:31:52] <Nicekiwi> hi.. having issues since upgrading to 2.6, mainly I cant access my database anymore :/
[11:32:43] <Nicekiwi> not authorized on admin to execute command <-- get errors like that when I try to run anything on the db
[11:35:59] <rspijker> Nicekiwi: is auth turned on?
[11:36:13] <rspijker> was it before you upgraded? Are you using the default config? etc.
[11:36:17] <Nicekiwi> rspijker: i think so
[11:36:29] <rspijker> so… have you authed?
[11:36:49] <Nicekiwi> what do you mean by authed?
[11:38:00] <rspijker> authenticated. If auth is on, you need to authenticate before you can do things…
[11:39:46] <Nicekiwi> just tried it, gave an error when i tried to use -u and -p
[11:41:45] <Nicekiwi> auth failed. how do i auth? i tried with the details for the specific db
[11:42:30] <kozer> Hi to everyone! Can I please ask something? I have some spatial data, and i use a geoWithin based on map bounds. But i want to include filters in these results ($gt, $lt etc) based on the client side. Is this possible with mongo? I searched the net but couldn't find anything relative to that! Thank you all!
[11:43:55] <Derick> sure
[11:44:10] <Derick> let me see if I have an example
[11:44:48] <Nicekiwi> rspijker: is there a common auth user?
[11:46:43] <kozer> Derick: Thank you, you are a savior!!! :D
[11:47:24] <Derick> kozer: gimme a few more
[11:50:58] <Derick> kozer: db.poiConcat.find( { 'm.ts' : { $gt: 1395232617 }, l: { '$geoWithin' : { '$geometry': { 'type' : 'Polygon', coordinates: [ [ [ -0.13, 51.508 ], [ -0.126, 51.508 ], [ -0.126, 51.507 ], [ -0.13, 51.507 ], [ -0.13, 51.508 ] ] ] } } } } );
[11:52:09] <Derick> i'll format that for you :-)
[11:52:48] <Derick> kozer: http://pastebin.com/YxYXG6aq
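In case the pastebin link rots, the same query reformatted (field and collection names exactly as in Derick's one-liner above):

```javascript
// Ordinary filters and $geoWithin combine in one query document:
const query = {
  "m.ts": { $gt: 1395232617 }, // the client-side filter ($gt/$lt etc.)
  l: {                         // "l" holds GeoJSON, indexed with 2dsphere
    $geoWithin: {
      $geometry: {
        type: "Polygon",
        coordinates: [[
          [-0.13, 51.508], [-0.126, 51.508],
          [-0.126, 51.507], [-0.13, 51.507],
          [-0.13, 51.508]      // ring closed: first point repeated last
        ]]
      }
    }
  }
};
// db.poiConcat.find(query)
console.log(JSON.stringify(query["m.ts"]));
```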
[11:53:04] <kozer> Derick; I thought that was something like that, but is the same working with 2d and $box?
[11:53:25] <Derick> don't use 2d anymore
[11:54:01] <Derick> you want the "2dsphere" index on "l" (the location) which has GeoJson formatted data
[11:54:05] <Derick> in my case, points:
[11:54:23] <Derick> Points, Lines or Polygons actually
[11:54:33] <kozer> Derick: if i give the same lat long to geometry polygon, is the same as $box?
[11:54:47] <Derick> I don't actually know, I have not used box
[11:54:51] <Derick> afaik, box is not geospatial
[11:54:57] <Derick> (just 2d)
[11:55:57] <Derick> http://maps.derickrethans.nl/?l=gc&zoom=6
[11:56:08] <Derick> browser viewport is box, the lines is geowithin
[11:57:52] <kozer> The problem is that i want to show only markers based on the viewport, and this is why i use leaflet map.getBounds to achieve that. Is there any other way to do this?
[11:59:32] <kozer> (I am not a geo specialist , im new to all these stuff... :) )
[12:04:19] <Derick> kozer: how big is that view area? continent size, or city?
[12:06:34] <kozer> city
[12:07:13] <kozer> oops sorry
[12:07:14] <kozer> continent size
[12:07:24] <kozer> greece to be more clear
[12:24:38] <Derick> kozer: that's not a continent :p
[12:24:59] <Derick> in that case, it would probably not matter if you would just use 2dsphere and geoWithin
[12:27:54] <kozer> Yes, but when the user zooms in, i want to limit what i show to him, this is why i use this. I don't want to get markers that are not in view
[12:28:31] <Derick> sure, that's fine
[12:29:10] <Derick> exactly what I do at http://maps.derickrethans.nl/?l=flickr&zoom=18
[12:29:15] <Derick> (and that code is all public)
[12:30:14] <Derick> https://github.com/derickr/3angle
[12:31:11] <kozer> so i will take a look and i'll be in touch if i can't find a way to do it. Thanks a lot!!!
[12:51:38] <Tug> how long does it take after you add a member to a replicaset to elect a new primary ? is there documentation on the subject ?
[12:52:16] <Zelest> i might be completely wrong here...
[12:52:28] <Zelest> but why would it re-elect a new primary if there already is one?
[12:52:47] <Zelest> doesn't that only happen when the primary goes away or rs.stepDown() is used?
[12:52:51] <Zelest> again, i can be wrong here.
[12:52:55] <Tug> When you have only one member for instance
[12:53:03] <Tug> and you add a new one
[12:53:08] <Zelest> you need 3
[12:53:47] <Tug> yes but I'm writing a script to attach a new instance automatically
[12:53:56] <Tug> and each instance attach itself
[12:54:12] <Tug> so the second one and the third one may join with delay
[12:55:05] <Tug> so when the 2nd instance is added the replicaset goes into a FATAL state
[12:55:20] <Tug> when the 3rd is added it cannot recover from here
[12:56:17] <Derick> Zelest: you're right. an election is only triggered if the primary disappears
[12:56:27] <Zelest> Ah
[12:56:31] <rspijker> adding a second instance should not put it in a fatal state
[12:56:48] <rspijker> are you sure you’re adding it correctly?
[12:56:52] <Tug> mm so my issue is elsewhere
[12:57:49] <Tug> I'm running mongo primary_host --eval 'rs.add("secondary_host")'
[12:57:53] <Tug> so it looks right
[12:58:51] <Tug> what if rs.initiate is run on both instances before?
[12:59:03] <Tug> is this going to cause a FATAL state ?
[13:02:13] <rspijker> Tug: so… the “secondary” that you are trying to add is in fact already a replica set in it’s own right?
[13:02:24] <rspijker> its*
[13:02:53] <Tug> rspijker, well I'm not sure, I am checking this right now ;)
[13:05:11] <Tug> I'm writing a juju script btw (if you ever heard about juju it looks neat, but the official mongodb charm has many bugs and uses the official ubuntu package, not 10gen's)
[13:06:30] <rspijker> never heard of it (10gen is no more btw)
[13:07:31] <Tug> (ah yes, I forgot that)
[13:31:28] <richthegeek> mongodb-org, that was an annoying thing to realise i was still on 2.4*
[13:50:00] <tuxtoti> NumberLong("10").toString()
[13:50:09] <tuxtoti> i thought would just print 10
[13:50:24] <tuxtoti> but it prints NumberLong("10")
[13:50:52] <Derick> heh, that is silly :)
[13:50:56] <Derick> let me see
[13:53:42] <Derick> yeah, there is no toInt either
[13:53:59] <tuxtoti> .hmm.
[13:54:32] <rspijker> that’s weird...
[13:54:33] <richthegeek> .valueOf() ?
[13:54:39] <Derick> i tried that too
[13:54:40] <rspijker> NumberLong(10)+”” ?
[13:54:41] <Derick> no luck
[13:54:55] <richthegeek> parseInt ?
[13:55:00] <Derick> duh
[13:55:03] <Derick> no, valueOf works
[13:55:06] <Derick> I used ValueOf
[13:55:09] <Derick> > NumberLong("10").valueOf()
[13:55:09] <Derick> 10
[13:55:13] <richthegeek> sweet
[13:55:14] <rspijker> +”” works as well
[13:55:25] <Derick> or toNumber():
[13:55:28] <Derick> > NumberLong("10").toNumber()
[13:56:10] <rspijker> .toString()
[13:56:29] <Derick> that doesn't work though
[13:56:37] <Derick> it returns "NumberLong(10)" which is kinda useless
[13:56:46] <rspijker> it doesn’t?
[13:56:58] <Derick> it returns the json string representation
[13:57:08] <rspijker> which one exactly? :)
[13:57:15] <Derick> "NumberLong(10)"
[13:57:31] <rspijker> who suggested that?
[13:57:44] <Derick> i've no idea :)
[13:58:53] <rspijker> ok, you’ve lost me :)
[14:02:33] <tuxtoti> Derick: valueOf() works. thanks!
[14:06:41] <tuxtoti> Derick: toNumber() - "Returns: number the closest floating-point representation to this value" . Might not render the right one I guess.
[14:07:54] <Derick> true
[14:07:59] <Derick> :-/
[14:11:06] <tuxtoti> actually all of them do the same :(
[14:11:08] <tuxtoti> > NumberLong("92233720368547758")+ ""
[14:11:08] <tuxtoti> 92233720368547760
[14:11:08] <tuxtoti> > NumberLong("92233720368547758").toNumber()
[14:11:08] <tuxtoti> 92233720368547760
[14:11:08] <tuxtoti> > NumberLong("92233720368547758").valueOf()
[14:11:08] <tuxtoti> 92233720368547760
[14:11:08] <tuxtoti> >
[14:11:24] <tuxtoti> toString() should have given the right one.
[14:22:45] <rspijker> I might be mistaken here, but can’t integers always be represented exactly as floating point numbers?
[14:27:18] <kali> rspijker: in maths, integer are included in "real" numbers, but not in computers
[14:27:40] <Derick> rspijker: no they can't
[14:27:44] <Derick> not 64bit ints
[14:27:56] <Derick> 32bit ints can be represented in doubles only
[14:28:54] <rspijker> because a 64bit float only has 52 bits of precision for the mantissa?
[14:29:28] <Derick> yes
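The limit is easy to demonstrate in plain JavaScript, where every number is exactly such a 64-bit double:

```javascript
// Integers are exact in a double only up to 2^53 (52 mantissa bits + implicit 1).
const limit = Math.pow(2, 53); // 9007199254740992
console.log(Number.isSafeInteger(limit - 1)); // true
console.log(Number.isSafeInteger(limit));     // false

// tuxtoti's value is above the limit, so the literal itself rounds:
console.log(String(92233720368547758)); // "92233720368547760"
```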
[14:30:07] <Industrial> Hi.
[14:30:10] <rspijker> that makes sense. Just wasn’t sure what type it’s using. There’s also extended floats, which can easily do 64bit ints
[14:30:34] <Derick> most systems use "double"s, which is the 64bit IEEE 754 type
[14:30:45] <Derick> I've not seen this extended float in use
[14:32:23] <Industrial> So I'm doing this aggregation pipeline: [{"$match":{"_id":{"$gte":1399932000000,"$lt":1400018400000}}},{"$project":{"_id":1,"v":"$v","step":{"$multiply":[{"$add":[{"$subtract":[{"$divide":["$_id",1000]},{"$mod":[{"$divide":["$_id",1000]},1]}]},1]},1000]}}},{"$sort":{"_id":1}},{"$group":{"_id":"$step","v":{"$last":"$v"}}},{"$sort":{"_id":1}}] { cursor: {}, allowDiskUse: true }
[14:33:13] <Derick> Industrial: pastebin please, and some ... formatting would be helpful
[14:33:21] <Industrial> yes, sec
[14:37:29] <Industrial> https://gist.github.com/Industrial/92a6ad9207b2a5ba59c5
[14:37:58] <Industrial> So the collections just have a _id and a v property with a value at a certain time. The _id is a millisecond timestamp.
[14:38:39] <Derick> "v":"$v",
[14:38:40] <Derick> could be
[14:38:44] <Derick> "v": 1,
[14:38:46] <Industrial> this runs fine, just not when I try to start 4-5 of these in parallel
[14:38:48] <Industrial> ok
[14:39:22] <Derick> what happens if you run them in parallel?
[14:39:31] <Industrial> I'm trying to build an export feature that gives me a CSV of the merger of several collections, each v getting its own column in the CSV
[14:39:50] <Industrial> I get a few data events (nodejs) from the aggregate stream but then it stops sending data events
[14:39:53] <Industrial> there is no end event either
[14:41:03] <Industrial> Are there better ways to merge 2 collections?
[14:42:13] <Derick> operations on more than 1 collection generally have to be done in the application level
[14:45:35] <Industrial> right
[14:47:17] <Industrial> Derick: any idea what happens if you have 2 or several streams emitting a few data events but no end and no error?
[14:47:35] <Industrial> ow, nvm
[14:47:43] <thewiz> how can I use a previously defined variable targetname, to see if targetname user has a specific role? i tried: if (targetname.roles.indexOf('mod') >= 0) { but it says targetname is undefined
[14:48:40] <Industrial> (stream.pause() is handy but not if you forget :P)
[14:49:15] <cheeser> what could go wrong? :)
[14:54:23] <Industrial> yeah the way I'm merging these streams is
[14:55:18] <Industrial> take one message from each and immediately pause the stream, if you have a value from ALL streams then take the lowest value (sort by time in my case) and push it and wait another value to be present from ALL streams.
[15:00:42] <sweb> does replication decrease write speed?
[15:10:52] <thewiz> if anyone has a second I would appreciate it http://stackoverflow.com/questions/23635044/mongodb-javascript-check-subdocument-array-of-a-variable
[15:28:08] <martinrd> Hi guys, anyone ok to answer a quick question on listing collection names on a db with auth?
[15:29:19] <Derick> martinrd: just ask your question, people that know the answer will then reply
[15:29:43] <martinrd> Great, basically, I have a user which is dbAdmin and readWrite but seems not allowed to list collection_names
[15:29:51] <martinrd> pymongo.errors.OperationFailure: database error: not authorized for query on raconteurdb.system.namespaces
[15:30:01] <martinrd> is that correct behaviour?
[15:30:27] <thewiz> how can I findOne and check if his 'roles' array has 'mod' in it?
[15:32:15] <cheeser> i would think you'd just try to mutate the data and react to any errors.
[15:34:32] <martinrd> The intention is to drop all collections in the db, but that seems impossible to do without being global clusterAdmin, or am I wrong?
[15:38:13] <spuz> There appears to be something wrong with the docs on this page: http://docs.mongodb.org/ecosystem/drivers/java-concurrency/ It says using WriteConcern.ACKNOWLEDGED is the same as calling
[15:38:49] <spuz> ... getLastError() but how can you get the last error without calling that method?
[15:41:12] <spuz> (sorry I had a bit of a keyboard accident earlier)
[15:46:04] <thewiz> how can I find a specific user in the collection and also check if his 'roles' array has 'mod' in it?
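Matching a plain value against an array field matches any document whose array contains that value, so the find and the role check can be one query (the username comes from thewiz's earlier pastebin; the "users" collection name is an assumption):

```javascript
// One query: user by name AND roles array containing "mod":
const query = { username: "Auralix", roles: "mod" };
// db.users.find(query)

// Or check a fetched document client-side, as thewiz tried:
const targetname = { username: "Auralix", roles: ["user", "mod"] };
console.log(targetname.roles.indexOf("mod") >= 0); // true
```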
[15:54:02] <NaN> why mongod crashed on all my machines? (same problem on all)
[15:54:16] <NaN> all was running OK
[15:54:20] <NaN> now it doesn't want to run
[15:54:52] <abique> Hi, I'm using the cxx driver and have the following issues: http://pastebin.com/Rnbv5zES what usually generates them? Memory corruption? A race in the library? ...? Thanks :)
[15:55:18] <NaN> https://stackoverflow.com/questions/23086655/mongodb-service-will-not-start-after-initial-setup <<< something like this but with selinux disabled
[16:01:21] <ranman> NaN: what are the permissions on your data directory? or your /var/run/mongodb directory?
[16:01:34] <ranman> NaN: how did you install mongodb
[16:01:46] <ranman> NaN: what is the message that mongodb fails with
[16:02:21] <NaN> ranman: installed with yum from the official (mongodb) repos
[16:02:32] <NaN> ranman: I just updated and it seems it's working (on this machine)
[16:03:12] <NaN> ranman: I will need to check if it works on my other machines
[19:32:50] <fifosine> Is mongoengine required to write a python app with flask?
[19:33:22] <fifosine> The only tutorial I can find that uses python flask and mongodb also uses mongoengine
[19:35:06] <fifosine> Slash, why would I want to use mongoengine?
[20:00:38] <fifosine> What function does a mapfield serve?
[20:16:02] <Guest29077> Hi, upgrade problem. going to 2.6.1 from 2.4, failure. now complaining that the user mongodb doesn't exist.
[20:16:20] <Guest29077> it does; tried removing the user, that didn't help.
[20:16:26] <Guest29077> this is on ubuntu
[20:17:38] <z1mme> { _id: 'randomId', orgIds: ['orgId', 'anotherOrgId'] }, best way to find documents using a specific orgId? $in? $elemMatch?
[20:17:45] <Guest29077> $in
[20:17:59] <Guest29077> field:{$in:['one','two']}
[20:19:07] <Guest29077> anyone got a thought on the upgrade issue?
[20:19:12] <z1mme> orgIds: { $in: ['specificOrgId']] }?
[20:19:27] <Guest29077> yeah. wrong bracket at the end, but yes
[20:19:41] <z1mme> yeah saw that =P
[20:19:43] <z1mme> thnxz
[20:19:48] <Guest29077> No problem
[20:19:54] <Guest29077> index the array
[20:20:25] <Guest29077> db.type.ensureIndex({orgIDs:1})
[20:20:26] <Guest29077> or whatever
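z1mme's query and the index suggestion written out (the collection name "type" is Guest29077's placeholder; note the shell helper is camelCase ensureIndex):

```javascript
// Both forms match documents whose orgIds array contains the value:
const withIn = { orgIds: { $in: ["specificOrgId"] } };
const direct = { orgIds: "specificOrgId" }; // simpler when there is one value

// Index the array field; mongo builds a multikey index automatically:
const indexSpec = { orgIds: 1 };
// db.type.ensureIndex(indexSpec)
console.log(JSON.stringify(withIn));
```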
[20:20:55] <Tug> Guest29077, what is your error ?
[20:21:15] <Tug> Guest29077, and where does it occur ?
[20:21:19] <Guest29077> Hi, upgrade problem. going to 2.6.1 from 2.4, failure. now complaining that the user mongodb doesn't exist.
[20:21:24] <Guest29077> it does; tried removing the user, that didn't help.
[20:21:28] <Guest29077> ubuntu
[20:21:42] <Guest29077> running with su of course
[20:22:21] <Tug> so the error occur during apt-get install ?
[20:22:45] <Guest29077> yeah
[20:22:45] <Guest29077> Setting up mongodb-10gen (2.4.10) ...
[20:22:46] <Guest29077> /etc/init.d/mongodb: 78: /etc/init.d/mongodb: grep: not found * The user mongodb, required to run MongoDB does not exist.
[20:23:10] <Guest29077> apt-get should handle that though , i'd think
[20:23:28] <Guest29077> tail -1 /etc/passwd
[20:23:28] <Guest29077> mongodb:x:104:65534::/home/mongodb:/bin/false
[20:23:34] <Tug> Setting up mongodb-10gen (2.4.10) << looks like it's installing MongoDB 2.4
[20:23:42] <Guest29077> ah , jerk
[20:23:45] <Guest29077> i was on 2.6 befvore
[20:24:06] <Tug> so you're going to 2.4 from 2.6 ?
[20:24:32] <Guest29077> correct
[20:24:48] <Guest29077> /etc/init.d/mongodb: 78: /etc/init.d/mongodb: grep: not found * The user mongodb, required to run MongoDB does not exist.
[20:24:57] <Guest29077> that's using the mongodb-org package
[20:25:47] <Tug> mm what I would do is first remove all package mongodb-org and mongodb-10gen
[20:25:57] <Guest29077> yeah, just using apt-get remove?
[20:26:13] <Tug> maybe even with --purge
[20:26:37] <Guest29077> remove/purge fails with the same error
[20:26:43] <Tug> but that's not the issue
[20:26:59] <Tug> then
[20:27:04] <Tug> apt-get autoremove
[20:27:04] <Guest29077> sure.
[20:27:17] <Tug> or remove each mongodb-org packages
[20:27:23] <Guest29077> auto-remove [package] ?
[20:27:26] <Guest29077> or just auto-remove
[20:27:34] <Tug> just autoremove
[20:27:54] <Guest29077> same error. fun.
[20:27:55] <Tug> I think apt-get remove mongodb-org-* works too
[20:27:56] <Guest29077> dun.
[20:28:04] <Guest29077> * The user mongodb, required to run MongoDB does not exist.
[20:28:04] <Guest29077> invoke-rc.d: initscript mongodb, action "start" failed.
[20:28:19] <Tug> oh, apt-get remove mongodb-org causes an error ?
[20:28:42] <Guest29077> yeah
[20:28:46] <Guest29077> same user error.
[20:29:54] <Tug> getent passwd mongodb
[20:29:56] <Guest29077> checking init.d script
[20:29:58] <Guest29077> just doing that
[20:30:02] <Guest29077> it shows the user
[20:30:08] <Guest29077> but the script uses -q, which make it not show
[20:30:48] <Guest29077> oh, that's just quiet
[20:31:02] <Tug> you can always try with dpkg -r
[20:31:11] <Guest29077> getent passwd | grep "^mongodb:"
[20:31:11] <Guest29077> mongodb:x:104:65534::/home/mongodb:/bin/false
[20:32:27] <Guest29077> actually it seems like the executing script isn't finding grep?
[20:33:06] <Guest29077> failing on this line
[20:33:07] <Guest29077> if getent passwd | grep -q "^$DAEMONUSER:"; then
[20:33:13] <Tug> Guest29077, oh you might be right!
[20:33:17] <Guest29077> /etc/init.d/mongodb: 78: /etc/init.d/mongodb: grep: not found
[20:33:25] <Tug> type grep
[20:33:26] <Guest29077> paths then!
[20:33:29] <Guest29077> i do have grep
[20:34:14] <Guest29077> it has a path var, grep wasn't in it
[20:34:30] <Guest29077> ok
[20:34:31] <Guest29077> better
[20:35:01] <Guest29077> /etc/init.d/mongodb: 50: /etc/init.d/mongodb: /bin/: Permission denied
[20:35:04] <Guest29077> hm
[20:36:22] <Tug> well actually, why do you need to run the init script to uninstall mongodb
[20:36:36] <Guest29077> its doing it on its own
[20:36:37] <Tug> just stop the process manually
[20:36:45] <Guest29077> but, was the same issue with installing
[20:36:51] <Tug> if it's still complaining remove the script
[20:37:45] <Guest29077> think were close here.
[20:37:48] <Tug> and if it still does not work try something like dpkg --force-all --purge mongodb-org
[20:37:48] <Guest29077> the grep thing was an issue
[20:38:09] <Guest29077> echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | sudo tee /etc/apt/sources.list.d/mongodb.list
[20:38:13] <Guest29077> woops
[20:41:20] <Guest29077> ugh, just a mess
[20:45:52] <Guest29077> any way to mute the connect/disconect messages here?
[20:48:01] <LouisT> Guest29077: that depends on the client you use -- which i assume is the web client
[20:48:09] <LouisT> you'd have to look though the settings for it
[20:49:36] <Guest29077> no, im on irssi
[20:49:43] <Guest29077> ah well.
[20:50:00] <Guest29077> about to fire up a new droplet on dig. ocean, would be faster than trying to fix this.
[20:51:01] <Tug> it's possible
[20:51:19] <Tug> (sorry wrong window)
[21:00:26] <Guest29077> /etc/init.d/mongodb: start-stop-daemon: not found
[21:00:28] <Guest29077> hmm...
[21:23:53] <Guest29077> ok. jeesus
[21:23:55] <Guest29077> fixed.
[21:25:24] <moleWork> hello, I'm trying to diagnose a performance problem on mongodb... i think it might be caused by slow io causing background flushes.... but what else takes global write locks?... what is a way to diagnose this?... i'm only writing to 1 db at a time so i don't think it's the db write lock.... i'm staring at mms but i don't think it updates at high enough frequency to show me all the background flushes that are occurring
[21:26:04] <Guest29077> can you replicate with a single query, or is only when doing bulk operation
[21:26:44] <moleWork> if you are talking to me i'm doing a single insert query, but over and over.... rotating databases
[21:26:59] <Guest29077> i am
[21:27:11] <moleWork> basically wondering what causes global write locks, and how to see background flushes happening in real time
[21:27:22] <moleWork> i'm using a super slow disk that's saturated
[21:27:42] <moleWork> if i look at iostat await ... i'm like waiting 500ms some times
[21:27:44] <Guest29077> @moleWork, thats going to be a problem
[21:28:08] <moleWork> i agree it's the problem i'm just trying to understand why is it causing global write locks
[21:28:09] <Guest29077> using > 2.2 version of mongo?
[21:28:17] <Guest29077> http://docs.mongodb.org/manual/faq/concurrency/
[21:28:19] <moleWork> 2.4.9
[21:28:25] <Guest29077> im sure you've looked at that doc.
[21:28:39] <Guest29077> well, i just had a hell of a time upgrading to 2.6
[21:28:45] <Guest29077> might consider that (not a great option)
[21:28:53] <moleWork> yeah i don't think it really talks about what actually causes it in 2.4.9
[21:29:10] <moleWork> well i'm actually doing performance testing right now so it's not a huge deal
[21:29:19] <moleWork> just trying to visualize this problem happening
[21:29:33] <moleWork> Tue May 13 21:28:32.281 [conn3] insert group_76.user_1076_collection_3 ninserted:1 keyUpdates:0 locks(micros) w:6477 7182ms
[21:29:33] <moleWork> <--- ieee
[21:29:49] <moleWork> it's not db locks for sure
[21:29:55] <moleWork> it's something to do with the global lock
[21:30:24] <moleWork> in mms my global write lock is at 30%
[21:31:37] <moleWork> background flush seems to be in mms but only happening every 10 or so minutes it looks like from mms, but it's saying it's taking like.... 450-550 seconds!
[21:32:01] <moleWork> which means it's basically continually happening
[21:32:06] <moleWork> so i think that's the issue
[21:32:23] <moleWork> i just "don't know how to see background flushes"
[21:32:27] <moleWork> if that makes sense
[21:33:13] <Guest29077> sure, beyond me
[21:33:55] <moleWork> same lol
[21:39:48] <moleWork> one thing is that mongostat doesn't really say i have any qr's or qw's because i'm only a single user waiting for each operation to finish
[21:40:04] <moleWork> so it's not like i'm getting hit by concurrency problem
[21:43:18] <moleWork> http://dba.stackexchange.com/questions/28630/mongodb-what-does-datafilesync-flushing-mmaps-took-ms-for-files-mea
[21:44:45] <moleWork> i think i know what is happening... i'm inserting into lots of different indexes..... the background flush is taking forever... meanwhile all the other inserts are making more data to sync.... combined with the slow disk..... = problem
[21:45:07] <moleWork> it's just that the DataFileSync is so slow.... i never see it in my logs since it only ever happens once in a blue moon
[21:46:07] <moleWork> it's a problem that causes more problems... leading this mess...
[21:46:32] <moleWork> okay... glad we talked that through :)
[21:51:50] <moleWork> had to come back....
[21:51:52] <moleWork> Tue May 13 21:49:45.263 [DataFileSync] flushing mmaps took 694525ms for 403 files
[21:52:03] <moleWork> no wonder i can't see it happen in the logs... takes a tiny bit to occur