#mongodb logs for Monday the 28th of September, 2015

[00:11:15] <movedx> ": Failed to start SYSV: Mongo is a scalable, document-oriented database.." - useful.
[02:51:40] <pEYEd> where is the listening socket for mongodb located? I want to access my mongo via remote gui admin tool.
[02:58:48] <cheeser> you mean what port does it listen on?
[03:02:31] <pEYEd> cheeser: where do I configure it to listen to an IP and not localhost.
[03:02:55] <pEYEd> my mongodb.conf is blank
[03:06:17] <cheeser> http://docs.mongodb.org/manual/reference/configuration-options/
[03:09:54] <pEYEd> yaml, the red-headed stepchild of html -> YAML does not support tab characters for indentation: use spaces instead. o.0
[03:12:10] <cheeser> except for it being unrelated to html
[03:13:43] <movedx> And having a completely unrelated goal.
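For reference, a minimal YAML-style mongod.conf that listens on a non-localhost address might look like the sketch below; the paths and the bind address are only placeholders, and indentation must use spaces, never tabs.

    net:
      port: 27017
      bindIp: 0.0.0.0          # or a specific interface address instead of localhost
    storage:
      dbPath: /var/lib/mongodb
    systemLog:
      destination: file
      path: /var/log/mongodb/mongod.log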
[11:18:53] <antiPoP> I have documents with several properties: a, b, c, d ... how can I get distinct items by two keys (something like distinct(a, b))?
[11:19:36] <antiPoP> I'm googling for it, and trying the aggregation framework, which seems a little bit complex...
[11:43:52] <deathanchor> antiPoP: wouldn't it be easier to just run two distincts and sort/uniq that?
[11:44:18] <antiPoP> no, as it's like a compound key
[11:44:25] <antiPoP> deathanchor,
[11:44:27] <deathanchor> ah
[11:44:45] <deathanchor> so compound distinct of two fields?
[11:45:07] <deathanchor> yeah you'll have to use aggregation for that
[11:45:45] <deathanchor> if you have a gist I could help you fix what you have
[11:46:05] <antiPoP> deathanchor, https://gist.github.com/antiPoP/75eadf30012b9528935f
[11:46:24] <antiPoP> it's in mongoose but I think the syntax is the same
[11:47:36] <antiPoP> deathanchor, I updated it with the complete schema https://gist.github.com/antiPoP/75eadf30012b9528935f
[11:48:14] <deathanchor> but you don't need the project
[11:48:23] <deathanchor> just the group should be enough
[11:48:40] <antiPoP> yes, but it gives me a weird output
[11:49:06] <deathanchor> so aggregate returns a json doc and one of the fields is result
[11:49:17] <deathanchor> result is an array which you can loop through
[11:49:29] <deathanchor> limitation is that doc can't be more than 16MB
[11:49:41] <deathanchor> if it is you need to use aggregation with a cursor
[11:49:46] <antiPoP> well, that's not an issue for me
[11:50:10] <antiPoP> but I was trying to simplify the output: here is a sample of what I get https://gist.github.com/antiPoP/75eadf30012b9528935f#gistcomment-1583562
[11:50:22] <deathanchor> yep
[11:50:30] <antiPoP> as I want to populate these fields later, and send the reply via json
[11:50:31] <deathanchor> looks right, what's the issue?
[11:50:45] <antiPoP> I want to remove one json level
[11:51:00] <antiPoP> so I get an array of keys directly
[11:51:35] <deathanchor> array of keys?
[11:51:37] <antiPoP> so I want [{... }] instead of [{_id: {... }}]
[11:51:57] <deathanchor> hmm.. let me try something.
[11:52:09] <antiPoP> [{... }, {... }] instead of [{_id: {... }}, {_id: {... }}]
[11:52:21] <deathanchor> what happens if you leave that project line in the pipeline?
[11:53:37] <antiPoP> here is the output https://gist.github.com/antiPoP/75eadf30012b9528935f#gistcomment-1583567
[11:55:56] <deathanchor> ah I got it
[11:57:01] <antiPoP> basically I need a new result object instead of appending fields like now
[11:57:01] <deathanchor> see latest comment
[11:57:30] <deathanchor> use that new project line and you'll be golden
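The pipeline being described, a $group on a compound _id followed by a $project that lifts the keys back to the top level, looks roughly like this; the collection name and the fields a and b are stand-ins for whatever is in the gist.

    db.messages.aggregate([
      // distinct on the (a, b) pair
      { $group: { _id: { a: "$a", b: "$b" } } },
      // flatten the compound _id so each result is { a: ..., b: ... } instead of { _id: { ... } }
      { $project: { _id: 0, a: "$_id.a", b: "$_id.b" } }
    ])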
[11:58:21] <antiPoP> works thanks :)
[11:58:24] <antiPoP> btw
[11:58:29] <deathanchor> yeah _id is special
[11:58:46] <antiPoP> if this is a long list of messages, is it better to embed fields, or populate?
[12:01:00] <antiPoP> this is a list of messages, which is basically keys. but I need to get fields from other documents associated with these tables via ref
[12:01:09] <antiPoP> so I was going to populate these ids
[12:01:23] <antiPoP> but I was wondering if it would be more efficient to store these in the messages table
[12:02:08] <antiPoP> I can paste the other schemas if required
[12:02:33] <deathanchor> antiPoP: those are all design questions for your schema/model and what you want the app to do. I'm not awake enough for those kinds of questions today :)
[12:02:53] <antiPoP> ok, thanks anyway :)
[12:14:20] <antiPoP> deathanchor, the issue is that my actual code seems a little bit excessive: https://gist.github.com/antiPoP/2cb59af0764b8c21b2dc
[12:14:26] <antiPoP> or is that fine?
[12:30:27] <antiPoP> In the latest code, how can I also get the id of the latest message?
[12:31:50] <antiPoP> $last?
[12:44:36] <antiPoP> here is my updated try https://gist.github.com/antiPoP/2cb59af0764b8c21b2dc
[12:45:12] <antiPoP> I'm trying to group by two fields, and get the max id of that group, but I get duplicated groups
[12:46:37] <antiPoP> mm it's working fine, sorry :)
[12:46:49] <antiPoP> the aggregation seems funny :D
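A sketch of that grouping, with placeholder field names: group on the two keys and keep the greatest _id seen in each group.

    db.messages.aggregate([
      { $group: {
          _id: { a: "$a", b: "$b" },       // compound group key
          lastMessageId: { $max: "$_id" }  // highest ObjectId (roughly: newest) in the group
      } }
    ])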
[15:02:10] <topwobble> The update() command does not let you specify a limit?
[15:02:56] <StephenLynx> no.
[15:03:00] <StephenLynx> and why would you want that?
[15:03:14] <StephenLynx> for that to work, it would have to take a sort argument too
[15:04:02] <topwobble> I want to do a rolling update on a large collection. Prefer to limit the # of updates per minute so I don't trash the working set too bad
[15:04:28] <topwobble> no need to sort, just use $exists:true on the param that we add in the update
[15:05:25] <StephenLynx> you'll need two operations per request.
[15:05:37] <StephenLynx> a find and an update.
[15:06:15] <StephenLynx> you could reduce the application work by using an aggregate to group the ids.
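In the shell, that two-operation approach might look like the sketch below; the collection name things, the flag field migrated, and the batch size are all invented for illustration. Each pass touches only a bounded batch, so the application can pause between passes to throttle the load.

    // collect a batch of ids that have not been updated yet
    var ids = db.things.find({ migrated: { $exists: false } }, { _id: 1 })
                       .limit(500)
                       .toArray()
                       .map(function (d) { return d._id; });

    // update just that batch; repeat (with a pause, e.g. sleep()) until find() returns nothing
    db.things.update(
      { _id: { $in: ids } },
      { $set: { migrated: true } },
      { multi: true }
    );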
[15:08:02] <topwobble> update() with a limit would be a lot nicer on the DB
[15:08:09] <StephenLynx> again
[15:08:13] <topwobble> find() will cause more to be pulled into memory on the Mongo server
[15:08:20] <StephenLynx> that would require a sort to work.
[15:08:30] <topwobble> agree to disagree ;)
[15:08:34] <StephenLynx> hurrrrrrrrrrrrrr
[15:08:43] <topwobble> doesn't need a sort if you query on the field you add in the update()
[15:08:45] <StephenLynx> ok, genius, riddle me that:
[15:09:02] <StephenLynx> you are incrementing a field.
[15:09:07] <topwobble> im adding a new field
[15:09:10] <StephenLynx> YOU
[15:09:11] <topwobble> as I already told u
[15:09:14] <StephenLynx> the world doesn't revolve around you.
[15:09:40] <StephenLynx> it makes absolutely ZERO sense to add a limit on the update operation.
[15:09:49] <StephenLynx> because it would require so much stuff to work.
[15:10:18] <StephenLynx> ever wondered why there's no limit on updates in databases?
[15:10:31] <StephenLynx> I am almost sure sql dbs don't have them either.
[15:11:24] <StephenLynx> just checked, they don't.
[15:11:56] <topwobble> do you work for mongodb?
[15:12:00] <StephenLynx> no.
[15:21:17] <deathanchor> findandmodify()?
[15:21:32] <StephenLynx> that's not what he needs.
[15:21:42] <StephenLynx> he wants to throttle the update.
[15:21:55] <deathanchor> ah, that requires application layer
[15:22:01] <StephenLynx> thats what I told him.
[15:22:27] <StephenLynx> it flipped my shit when he said "agree to disagree"
[15:22:50] <topwobble> "flipped my shit" === "you acted like an asshole" ;)
[15:22:58] <StephenLynx> yeah, I did.
[15:23:05] <StephenLynx> I do that with idiotic arguments.
[15:23:12] <StephenLynx> and I would do everything again.
[15:23:28] <topwobble> god you're terrible
[15:24:01] <StephenLynx> :3
[15:24:03] <topwobble> asking questions != idiotic. it's part of learning. You're a terrible programmer if you don't know that
[15:24:09] <StephenLynx> no, it's not asking a question;
[15:24:21] <StephenLynx> it's being stubborn and using bullshit to defend your bullshit.
[15:24:42] <StephenLynx> you asked if there was a limit, I told you there wasn't and explained why it wouldn't work.
[15:25:01] <StephenLynx> you insisted and kept using your use case as the only one that would have to be covered.
[15:25:25] <StephenLynx> and in the end just threw an "agree to disagree" like we are discussing god damned popsicle flavors.
[15:26:06] <topwobble> no, i used a concrete example.
[15:26:25] <topwobble> explaining why update().limit() would be useful
[15:26:45] <topwobble> Feel free to DM me if you want to berate me some more but quit trolling this channel as I'm sure nobody wants to hear your ranting
[15:26:48] <StephenLynx> and I used another one explaining why it would require a sort, but then you said "I don't need that" like it was a case that wouldn't have to be covered.
[15:27:01] <StephenLynx> and no, I won't PM you for an argument had in a channel.
[15:28:11] <StephenLynx> and disagreeing is not "trolling".
[15:28:35] <StephenLynx> my very first line aimed towards you was a direct and simple answer to your question.
[16:10:07] <Bookwormser> Hey all. I recently upgraded MongoDB from 2.6.9 to 3.0.6 on Ubuntu 12.04. Is there a way to start it as a daemon instead of using the command 'mongod --storageEngine wiredTiger --dbpath /my/path/'?
[16:10:17] <Bookwormser> service mongob start fails
[16:10:27] <Derick> mongob?
[16:10:41] <Bookwormser> Haha sorry, mongod
[16:10:45] <StephenLynx> i think its just mongo
[16:10:55] <Derick> no, it got changed to mongodb IIRC
[16:10:59] <StephenLynx> hm
[16:11:11] <StephenLynx> what is the error it gives you?
[16:11:13] <Derick> I have both "d" and "db"
[16:11:47] <Derick> Bookwormser: have you previously used it without wiredTiger and the same path?
[16:11:53] <Derick> Bookwormser: I'd suggest you check the mongod log file
[16:12:48] <Bookwormser> I changed the path from /var/lib/mongodb/ to /var/lib/mongodb3/
[16:13:23] <Bookwormser> ah permission denied...
[16:13:54] <StephenLynx> the user it is using to start the daemon might not have permission to run the executable
[16:15:38] <Bookwormser> That is exactly what it was. All set!
[16:15:40] <Bookwormser> Thank you
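For the original question of running under the init script instead of a hand-typed command, the usual route is to move those flags into the config file the service reads, commonly /etc/mongod.conf; the exact path and the 3.0-style YAML keys below are assumptions for this setup.

    storage:
      dbPath: /var/lib/mongodb3
      engine: wiredTiger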
[16:22:17] <daidoji> hello, is there a yum package for mongo client with ssl included for client tools and shell?
[16:22:29] <daidoji> or do i have to build those as well? for mongo 2.6
[16:23:06] <StephenLynx> did you check the 10gen packages?
[16:23:22] <StephenLynx> afaik, those allow you to install older versions.
[16:30:12] <daidoji1> whoops
[16:30:25] <daidoji1> StephenLynx: yeah the problem is I'm on an older version that doesn't have SSL support in the client
[16:30:32] <StephenLynx> welp
[16:30:41] <StephenLynx> I guess you'll have to build from source then.
[16:30:45] <daidoji1> :-(
[16:30:48] <StephenLynx> but if you are just interested in the client
[16:30:57] <StephenLynx> can't a newer client connect to an older db?
[16:31:10] <daidoji1> StephenLynx: I'm not sure
[16:31:22] <daidoji1> I think it can but I've run into trouble before doing that
[16:31:28] <StephenLynx> welp
[16:31:46] <daidoji1> oh well this is annoying
[16:48:19] <svm_invictvs> So...
[16:48:28] <svm_invictvs> Why does the MongoDB driver ignore the BSON length?
[16:49:41] <daidoji> svm_invictvs: what do you mean?
[16:49:54] <svm_invictvs> So a BSON document starts with a length field.
[16:50:42] <StephenLynx> which driver?
[16:51:01] <svm_invictvs> I'm reading this
[16:51:02] <svm_invictvs> http://www.michel-kraemer.com/binary-json-with-bson4jackson
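For context, the length in question is the little-endian int32 that prefixes every BSON document. Per the BSON spec, { a: 1 } serializes to 12 bytes:

    0c 00 00 00    total document length (12) as a little-endian int32
    10             element type 0x10 = 32-bit integer
    61 00          field name "a", NUL-terminated
    01 00 00 00    value 1, little-endian int32
    00             end-of-document terminator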
[17:13:59] <Bradipo> Hello, I've never seen this before, but it just started happening (Invalid BSONObj): http://pastebin.com/aPfnTtWb
[17:14:43] <Bradipo> Search engines turned up some matches about using the database profiler, but I have disabled that.
[17:21:37] <Bradipo> Ok, it looks like the partition that mongodb was on got full...
[17:21:49] <StephenLynx> :v
[17:21:53] <Bradipo> Do I need to rebuild the database, or will it recover itself?
[17:29:35] <daidoji> Bradipo: did you get a clean shutdown?
[17:30:05] <Bradipo> I haven't tried yet.
[17:30:11] <Bradipo> I just found it in this state this morning.
[17:30:29] <Bradipo> Some queries to our DB were hanging, and I saw the invalid BSONObj error.
[17:30:40] <Bradipo> Should I try to shutdown?
[17:31:37] <daidoji> Bradipo: see if you can get a clean shutdown
[17:31:42] <daidoji> then extend the partition
[17:31:50] <daidoji> then you shouldn't have to rebuild
[17:31:56] <Bradipo> I'm actually deleting a bunch of old files that no longer need to be on the partition.
[17:32:03] <Bradipo> So I've freed up 200GB of space so far.
[17:32:17] <daidoji> and you still get the error?
[17:32:30] <Bradipo> Well, actually, I haven't tried since I cleared space.
[17:33:50] <Bradipo> Ok, yes, I still get the invalid BSONObj error.
[17:33:55] <Bradipo> So I guess I'll try shutdown.
[17:34:09] <Bradipo> Is that just with the standard init script?
[17:34:17] <Bradipo> e.g. /etc/init.d/mongodb stop (or whatever).
[17:59:05] <Bradipo> So, after clean shutdown I still get the BSONObj invalid error...
[17:59:08] <Bradipo> I guess this means I need to rebuild?
[18:39:40] <daidoji> Bradipo: yeah I guess so
[18:40:47] <Bradipo> Guess we'll see...
[18:44:51] <daidoji> good luck
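If a rebuild does turn out to be needed, one thing worth trying before restoring from a backup is mongod's offline repair pass; the dbpath below is just a placeholder for the actual data directory.

    mongod --repair --dbpath /var/lib/mongodb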
[19:08:43] <pjz> is it legal to write a selector against the text search score?
[19:09:52] <pjz> like: { $and: [ { $text: {$search: "mysearch"} }, { score: { $gt: 0.5 }} ] }, { fields: { score: { $meta: "textScore" }}} ?
[19:53:27] <pjz> anyone?
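Unless the documents really contain a stored score field, the find() form above will not match anything, because the { $meta: "textScore" } value exists only as a projection. The usual way to filter on the text score is through the aggregation pipeline, roughly as below; the search string and the 0.5 threshold come from the question, while the collection name is a placeholder.

    db.articles.aggregate([
      { $match: { $text: { $search: "mysearch" } } },                 // must be the first stage
      { $project: { score: { $meta: "textScore" }, doc: "$$ROOT" } },
      { $match: { score: { $gt: 0.5 } } }
    ])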
[20:18:07] <tejasmanohar> db.getCollection('pres').find({phoneNumber: { $text: { search: "+1603" } }})
[20:18:28] <tejasmanohar> is there something wrong with this query?
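Two problems stand out with that query: $text is a top-level query operator, so it cannot be nested under phoneNumber, and the option is $search rather than search. A hedged guess at what was intended, assuming a text index exists on phoneNumber; for simple prefix matching of phone numbers an anchored regex may be the better fit anyway.

    // text search against the collection's text index
    db.getCollection('pres').find({ $text: { $search: "+1603" } })

    // alternative: prefix match on the field itself, no text index required
    db.getCollection('pres').find({ phoneNumber: /^\+1603/ })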
[20:25:38] <CSWookie> Hey all, I'm putting together an automated test runner monitor/scheduler and it's using mongodb. I've got a heterogenous, nested structure representing test files and groups of tests files and groups of groups. It's looking like I'm going to end up with lots of copies of the same data all over the place. In SQL land, I'd say that's repulsive and get everything split up, but the resident mongodb advocate here says that's all OK. C
[20:25:45] <CSWookie> perspective?
[20:32:38] <StephenLynx> depends on how much data you are going to duplicate.
[20:32:49] <StephenLynx> and what would be the alternatives.
[20:33:32] <StephenLynx> but as a concept, it is acceptable to circumvent mongo's design boundaries.
[20:37:40] <CSWookie> I don't understand what you mean by that last, about "circumvent mongo's design boundaries". As far as alternatives, I'm inclined to make a collection for Groups, and a collection for Tests, and use a lot of ids. But if I'm gonna do this thing with mongodb, I want to be as idiomatic about it as possible.
[20:46:22] <StephenLynx> so you are considering fake relations? that is acceptable too.
[20:46:39] <StephenLynx> which one is better depends on how you are going to write and read on the collection.
[20:46:52] <StephenLynx> each approach performs better under certain scenarios.
[20:47:06] <StephenLynx> usually I like to avoid nesting things too deeply.
[20:47:17] <StephenLynx> because the locks start to hit it real bad.
[20:47:29] <StephenLynx> and depending on the amount of data, it might exceed the 16MB document limit
[20:48:19] <CSWookie> I currently have an automated system running; my first benchmark for this system is to set things up so that it sends messages about its progress on the tests, so that users can easily see how far along they are, what has failed, what has passed, etc.
[20:48:50] <CSWookie> I'd also like to be able to query how long it took for tests to pass.
[20:49:05] <StephenLynx> you can use dates as values.
[20:49:43] <CSWookie> At some point in the future, I'd like to move away from the current scheduling and let this system handle it, so that we can be more distributed in running our tests, as they are easily parallelizable in many cases and can take a long time.
[20:49:46] <StephenLynx> when deciding between fake relations and nesting you have two things to consider:
[20:49:52] <StephenLynx> 1: write locks on collections
[20:50:13] <StephenLynx> 2: limits on how you will be able to query and project nested arrays and documents
[20:50:16] <StephenLynx> especially arrays
[20:50:43] <StephenLynx> 3: how many queries you would have to make in the case of the fake relations
[20:51:02] <StephenLynx> actually, those were 3 :v
[20:51:24] <StephenLynx> the point is, there isn't exactly an idiomatic approach to adding relations.
[20:51:59] <StephenLynx> I can think only of nesting, but even then, this is not the best approach for 100% of the cases.
[20:52:08] <StephenLynx> dbrefs operate on a driver level
[20:52:45] <StephenLynx> so there isn't exactly an answer that I can give to you with the information I have of your scenario.
[20:53:31] <CSWookie> What sort of information would help you give me an answer?
[20:54:24] <CSWookie> I'm not trying to be coy, but I don't fully understand what you're saying with your questions.
[20:55:07] <StephenLynx> what is your model/schema?
[20:55:33] <StephenLynx> how do you expect for your application to query your database?
[20:56:03] <StephenLynx> how many operations are expected per second?
[20:58:09] <CSWookie> I don't currently have a schema, that's what I'm designing now. I expect my application will be sending http posts with json messages to the meteor server. Not a whole lot of load, it's an internal tool.
[20:58:53] <CSWookie> There will also be a web page users can use to view the progress of things, which will be based on meteor.
[20:59:17] <StephenLynx> I think you could use multiple collections there.
[20:59:18] <StephenLynx> IMO
[20:59:20] <StephenLynx> gotta go
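As a rough illustration of the two shapes being weighed, with invented names throughout: nesting the tests inside each group document versus "fake relations" across separate collections tied together by ids.

    // Option 1: nest tests (and sub-groups) inside the group document
    db.groups.insert({
      name: "smoke",
      tests: [
        { file: "login_test.py", status: "passed", durationMs: 4200 },
        { file: "search_test.py", status: "running" }
      ]
    })

    // Option 2: separate collections, joined by ids in the application
    db.groups.insert({ _id: "smoke", parentGroup: null })
    db.tests.insert({ file: "login_test.py", groupId: "smoke", status: "passed" })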
[21:15:42] <duck_tape> I have an entry "sharding.configDB=11.22.101.1:9999" in my mongos conf and when I try and start mongos I get the following error, any ideas?
[21:15:52] <duck_tape> mongos -f ../mongo-config/S.conf
[21:15:52] <duck_tape> Error parsing INI config file: unknown option sharding.configDB
[21:15:52] <duck_tape> try 'mongos --help' for more information
[21:30:21] <duck_tape> I have an entry "sharding.configDB=11.22.101.1:9999" in my mongos conf and when I try and start mongos I get the following error, any ideas?
[21:30:21] <duck_tape> mongos -f ../mongo-config/S.conf
[21:30:21] <duck_tape> Error parsing INI config file: unknown option sharding.configDB
[21:30:21] <duck_tape> try 'mongos --help' for more information
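That parse error usually means mongos is treating the file as the old INI-style format, which does not recognize dotted YAML option names. Two likely fixes, hedged since it depends on the mongos version: keep the INI style but use the old option name, or rewrite the file in the YAML format (2.6+).

    # old INI-style config
    configdb = 11.22.101.1:9999

    # or the YAML format (indent with spaces, no tabs)
    sharding:
      configDB: 11.22.101.1:9999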
[23:34:02] <fedora_newb> I am trying to install/run mongodb but every time I start the service, it has stopped by the time I check the status.
[23:35:04] <fedora_newb> http://pastebin.com/mq79JipL from the log
[23:35:43] <joannac> fedora_newb: okay. did you actually read that?
[23:36:03] <joannac> specifically the part that says "need to upgrade ... run --upgrade to upgrade dbs, then start again"?
[23:36:05] <fedora_newb> joannac, I tried mongod --upgrade
[23:36:16] <fedora_newb> And still does the same thing
[23:36:27] <joannac> show me the output of when you ran with upgrade
[23:37:38] <fedora_newb> http://pastebin.com/TL5Fi0sb
[23:37:47] <fedora_newb> joannac ^
[23:38:14] <joannac> you need to give it a dbpath
[23:38:21] <joannac> it's upgrading the files at /data/db
[23:38:27] <joannac> which is not what you're using
[23:38:52] <fedora_newb> Well, it's a fresh install and I did create the dir at /data/db
[23:39:36] <joannac> look at your first set of logs
[23:39:38] <joannac> dbpath: "/var/lib/mongodb"
[23:41:20] <fedora_newb> Sorry, not sure what I need to do here
[23:42:41] <fedora_newb> joannac, dbpath is set to /var/lib/mongodb and the path exists?
[23:43:04] <joannac> mongod --upgrade --dbpath /var/lib/mongodb
[23:44:41] <fedora_newb> joannac, http://pastebin.com/MB8nG9hh
[23:45:33] <joannac> yes, that's a successful upgrade?
[23:46:52] <fedora_newb> joannac, http://pastebin.com/cnPa05TF not sure if it upgraded correctly
[23:46:58] <fedora_newb> Still stopping after starting
[23:47:17] <joannac> fedora_newb: do you actually read the logs before pastebinning them?
[23:47:37] <joannac> like, mongod logs are not always easy to read, but the problem here is actually quite clearly stated
[23:47:42] <joannac> Mon Sep 28 19:40:03.383 [initandlisten] couldn't open /var/lib/mongodb/local.ns errno:13 Permission denied
[23:47:53] <fedora_newb> Right, but I am not sure why that is, I ran with sudo?
[23:48:02] <joannac> ran what with sudo?
[23:48:07] <fedora_newb> mongod --upgrade
[23:48:13] <joannac> yeah
[23:48:21] <fedora_newb> So wouldn't that give the permission needed?
[23:48:21] <joannac> and you ran your start script without sudo?
[23:48:31] <joannac> what does your start script run as?
[23:48:48] <fedora_newb> sudo service mongod start
[23:49:01] <joannac> okay, and what does that start mongod as?
[23:49:13] <joannac> probably the user "mongodb" and not as "root"
[23:49:57] <fedora_newb> Ok, still not sure what I need to do here?
[23:50:56] <joannac> chown your files to belong to whatever the correct user is
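Concretely, something along these lines; the mongodb user/group here is an assumption taken from the init-script discussion above, so check what your service actually runs as.

    sudo chown -R mongodb:mongodb /var/lib/mongodb
    sudo service mongod start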
[23:51:39] <fedora_newb> btw, looking in the directory, I see that there is no local.ns file
[23:51:50] <fedora_newb> sorry, nvm
[23:55:21] <fedora_newb> joannac, thanks, finally got it running. sorry for newb questions there...was letting the new service overwhelm me a bit :) Do appreciate your help