PMXBOT Log file Viewer

#mongodb logs for Tuesday the 14th of July, 2015

[03:43:19] <Havalx> TypeError: document must be an instance of dict, bson.son.SON, or other type that inherits from collections.MutableMapping
[03:44:05] <Havalx> when constructing documents for mongo; what is the acceptable way of formatting them?
[03:44:29] <Havalx> ex: d = {"x": 8}
[03:44:32] <Havalx> is this a correct document?
[03:44:49] <Havalx> or d = [{"x": 8}]
[03:45:12] <Havalx> does d need to be serialized?
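
For reference, a minimal pymongo sketch of what Havalx is asking (connection string and collection names are hypothetical): documents are plain Python dicts, and the driver serializes them to BSON itself, so no manual serialization step is needed.

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # hypothetical connection string
    coll = client["testdb"]["things"]                  # hypothetical db/collection names

    d = {"x": 8}                 # a valid single document: just a dict
    coll.insert_one(d)           # pymongo converts it to BSON itself

    docs = [{"x": 8}, {"x": 9}]  # a list of dicts goes to insert_many
    coll.insert_many(docs)

The TypeError above usually means something other than a mapping (e.g. a list or a JSON string) was handed to an insert call that expects a single document.
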
[05:23:07] <mastyd> Hi, I'm trying to do a query where I get all documents from a collection but group them by a specific field. Basically I have position data and want an array of objects, where each object's key is the id and its value is an array of all the documents that match that id. I'm a newbie at mongo and don't know how to format this query
[05:23:57] <mastyd> I tried this: https://gist.github.com/dmastylo/f0710ad5859e9e9ca28d but it only gives me back something like [ { _id: 1327 }, { _id: null } ]
[05:43:36] <mastyd> Anyone? This should be a pretty easy query but I can't wrap my head around how to do it
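
A hedged sketch of the aggregation mastyd seems to want, assuming the grouping field is called playerId (field, database, and collection names are guesses): $group with a $push of $$ROOT collects the whole documents per id, whereas a $group stage with no accumulator fields is one common way to get back only { _id: ... }.

    from pymongo import MongoClient

    coll = MongoClient()["game"]["positions"]   # hypothetical names

    # Group every document by its playerId and push the full document
    # ($$ROOT) into an array per group.
    pipeline = [
        {"$group": {"_id": "$playerId", "docs": {"$push": "$$ROOT"}}},
    ]
    for group in coll.aggregate(pipeline):
        print(group["_id"], len(group["docs"]))
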
[09:13:07] <GitGud> hi
[09:14:54] <GitGud> is mongodb secure?
[09:20:13] <Derick> GitGud: I have no idea what that is supposed to mean. Can you clarify?
[09:20:26] <GitGud> Derick, sorry
[09:20:57] <GitGud> i meant to ask. if i store information into a mongoDB database, is there a way a hacker could access my database?
[09:21:07] <GitGud> i've heard of mySQL injections and stuff
[09:21:17] <GitGud> so I wanted to switch to a different one
[09:28:10] <msup> GitGud: every piece of software has vulnerabilities, but you are speaking about SQL injections, which are not specifically tied to MySQL. If you don't handle query composition properly you are vulnerable. In mongo the same concept applies even though you don't use an SQL language. Make sure to compose queries using the APIs and avoid concatenating strings with end-user-provided data
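
To illustrate msup's point about composing queries through the API instead of concatenating user input, a small pymongo sketch (collection and field names are made up):

    from pymongo import MongoClient

    users = MongoClient()["app"]["users"]   # hypothetical collection

    user_input = '{"$ne": null}'            # attacker-controlled string

    # Risky: splicing user input into server-side JavaScript ($where), or
    # parsing it straight into a query document, lets operators sneak in.
    # users.find({"$where": "this.name == '" + user_input + "'"})

    # Safer: pass user input only as a plain value under a known field, so
    # the driver treats it as data rather than as an operator or code.
    cursor = users.find({"name": str(user_input)})
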
[09:28:59] <GitGud> msup, ok. thank you. i just dont want to leave my database up for grabs to hackers
[09:32:22] <msup> GitGud: I get it, but even if the mongo software is secure they could access your data in many other ways.
[09:33:05] <GitGud> yeah if the rest of the program is not secure
[09:33:18] <GitGud> then they could inject something and get data back
[09:40:40] <GitGud> msup, question. i'm not going to go into too much detail, but for things like username/password pairings, does mongoDB have an authentication module that is secure?
[09:42:02] <GitGud> so like in my application, the app would send a request to mongoDB with a username and password, and the mongodb software would send back the data for that particular pairing if it's right. i just want to know if that is possible, if mongodb can have this functionality, and if it is as secure as, let's say, oauth2 msup
[09:42:59] <msup> GitGud: I've never used mongodb authentication so I can't answer about it sorry
[09:43:08] <GitGud> alright thank u
[09:46:22] <msup> Hi guys, we encountered a serious performance problem while migrating a production replica set from mongodb 2.4.5 to 2.6.9. Our replica set is composed of 3 nodes + a hidden one. While doing a rolling upgrade we noticed that the upgraded secondaries immediately started to suffer from I/O issues. On MongoDB MMS (now Cloud Manager) we also observed "Lock %" increase from 10 to 30/40, 'Background flush average' went from <1s to 15-20s, and in the 'journal stats', 'journaled mb' and 'writetodatafilesmb' increased by a factor of 3.
[09:46:23] <msup> Downgrading to mongo 2.4.5 temporarily solved the problem but of course this is not a long-term solution. We also tried to rebuild the dataset from scratch with 2.6.9 binaries and the I/O problems persist.
[09:46:23] <msup> Has anyone had the same issue, or can anyone help us troubleshoot the problem?
[09:59:47] <bogn> 2.6.x has usePowerOf2Sizes as default allocation strategy, maybe the paddingFactor of <2.6.x served your application better
[09:59:55] <bogn> do you grow documents after insertion?
[10:00:18] <bogn> http://docs.mongodb.org/manual/release-notes/2.6/#storage
[10:01:41] <bogn> http://docs.mongodb.org/v2.6/core/storage/#record-allocation-strategies
[10:03:03] <bogn> if you profile your database and have a lot of {moved: true} records, that might be the reason for your I/O issues; moving documents on disk because they don't fit into their slot is costly
[10:03:33] <bogn> so msup: do you update and grow your documents after insertion?
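
A sketch of the check bogn describes, assuming an MMAPv1 deployment and a 2015-era pymongo (database name is hypothetical): enable the profiler, then count operations that had to move a document on disk.

    from pymongo import MongoClient

    db = MongoClient()["mydb"]              # hypothetical database name

    # Profile all operations (level 2), then look for updates that moved the
    # document on disk; those appear as {moved: true} in system.profile.
    db.command("profile", 2)
    moved_ops = db["system.profile"].find({"moved": True}).count()
    print("operations that moved a document:", moved_ops)
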
[10:35:08] <msup> bogn: thanks for your reply. We already checked for document moves - we have 2-3 per minute, assuming the mongodb stats are right. Since yesterday we've found a potential issue in the idhack stats
[12:54:41] <msup> hi guys..any other suggestions about the reported problem? "we encountered a serious performance problem while migrating a production replica set from mongodb 2.4.5 to 2.6.9. Our replica set is composed of 3 nodes + a hidden one. While doing a rolling upgrade we noticed that the upgraded secondaries immediately started to suffer from I/O issues. On MongoDB MMS (now Cloud Manager) we also observed "Lock %" increase from 10 to 30/40, 'Background flush average' went
[12:54:41] <msup> from <1s to 15-20s, and in the 'journal stats', 'journaled mb' and 'writetodatafilesmb' increased by a factor of 3. Downgrading to mongo 2.4.5 temporarily solved the problem but of course this is not a long-term solution. We also tried to rebuild the dataset from scratch with 2.6.9 binaries and the I/O problems persist. Has anyone had the same issue, or can anyone help us troubleshoot the problem?"
[12:56:08] <einyx> upgrading to 3.0.4 is an option?
[12:56:32] <einyx> we went from 2.4 to 3.0.4 without issue
[13:02:52] <aps> Is WiredTiger better than MMAPv1?
[13:03:10] <StephenLynx> IMO, not yet, but will be very soon.
[13:03:13] <cheeser> yes
[13:03:22] <einyx> we use it in prod, 3.0.4 is actually pretty good with it
[13:03:30] <cheeser> there are some issues for some users, but MMS uses it, e.g.
[13:03:33] <einyx> they did fix a lot of issues
[13:04:21] <aps> okay
[13:04:38] <StephenLynx> I would wait until it's the default to start using it.
[13:04:45] <StephenLynx> that should happen by 3.2
[13:04:51] <cheeser> it's the default as of 3.1.5
[13:04:54] <StephenLynx> ah
[13:05:14] <aps> 3.1.5 is not released yet, right?
[13:05:17] <msup> einyx: we also tried to migrate with binary drop-in to 3.0 but the I/O issues were the same
[13:05:22] <StephenLynx> right aps
[13:05:29] <cheeser> um. it's a dev release at best but i think it's "out"
[13:06:53] <einyx> msup: might it be related to how the app manages the data? have you checked that the libraries you use are updated and tested with the newer version?
[13:07:10] <einyx> everything after mongo 2.4 is actually quite heavily rewritten afaik
[13:07:22] <einyx> sounds strange to me, at least with the php library, we didn't experience any problem at all
[13:07:43] <StephenLynx> the interface remained mostly the same.
[13:08:02] <StephenLynx> so drivers and such weren't too affected.
[13:08:05] <einyx> have you tried to compare the result of top() between versions?
[13:08:31] <einyx> StephenLynx: authentication changed for sure - that's the only thing we really had issues with
[13:08:48] <einyx> but we don't know his app, so it's a possibility :)
[13:09:06] <aps> Another question, how do you deal with increasing database size? Our current db is getting >650 GB..
[13:09:32] <cheeser> depends on the problems its giving you
[13:09:40] <cheeser> *it's* ...
[13:12:13] <aps> cheeser: Mainly that I need to increase the size of the volume mongo is using. But, what are the strategies to keep going as data size increases further?
[13:13:36] <msup> einyx: if you mean the unix top shell command, we saw increased iowait. Also the output of "iostat -xmt 1" and iotop showed increased writes (3x the mongo 2.4 secondary). There are no document moves as reported by db.serverStatus() metrics
[13:13:57] <cheeser> well, wiredtiger is better about disk usage for starters.
[13:14:33] <cheeser> but if you have a replica set (shame on you if you don't :D) you could always round robin a repairDatabase run to release disk space.
[13:14:33] <einyx> msup: it might help debugging http://docs.mongodb.org/manual/reference/command/top/
[13:14:49] <einyx> uhm
[13:14:54] <cheeser> you'd have to tolerate some wobbliness during an election, though.
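
A sketch of one round of the repairDatabase rotation cheeser describes (hostname and database name are hypothetical). Commonly the member is taken out of rotation first, e.g. restarted as a standalone on another port, repaired, and then allowed to rejoin and catch up; on MMAPv1 the repair needs free disk space roughly equal to the data set plus 2 GB, and the primary goes last after being stepped down.

    from pymongo import MongoClient

    # Direct connection to the member currently being repaired.
    member = MongoClient("member1.example.net:27018")

    # Rewrites the data files and releases unused disk space back to the OS.
    member["mydb"].command("repairDatabase")
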
[13:16:38] <einyx> msup: and of course all the SMART tests pass for the HDs, right? :P
[13:17:57] <jacksnipe> running into a frustrating problem. I've got a 2dsphere index on the users collection, but this query: db.users.find({ $nearSphere : { $geometry : {type : "Point",coordinates : [138.666, -34.6786] }} }) is giving me Error: error: { "$err" : "Can't canonicalize query: BadValue unknown top level operator: $nearSphere", "code" : 17287 }
[13:18:18] <cheeser> what version of the db?
[13:18:30] <aps> cheeser: okay, we do have a sharded cluster with replica sets :) What I'm wondering is: what do I do when data keeps coming and disk size reaches maximum capacity?
[13:18:30] <jacksnipe> hmmm crap good point
[13:18:33] <jacksnipe> how do I check that?
[13:18:44] <aps> mongo --version?
[13:19:03] <jacksnipe> 3.0.3
[13:19:09] <StephenLynx> or -v for short
[13:19:26] <StephenLynx> well, look at that
[13:19:28] <cheeser> where $nearSphere is, there should be a document field name - $nearSphere belongs under that field
[13:19:31] <StephenLynx> mongo doesn't have -v :^)
[13:19:39] <jacksnipe> oh crap, thank you cheeser
[13:19:39] <aps> :P
[13:19:51] <jacksnipe> you just saved me like an hour
[13:20:20] <jacksnipe> lol
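
For clarity, the fix cheeser points out, sketched in pymongo (the field name "location" is an assumption; use whatever field carries the 2dsphere index): $nearSphere has to sit under the indexed field rather than at the top level of the query.

    from pymongo import MongoClient

    users = MongoClient()["test"]["users"]   # hypothetical database name

    query = {
        "location": {                        # assumed 2dsphere-indexed field
            "$nearSphere": {
                "$geometry": {
                    "type": "Point",
                    "coordinates": [138.666, -34.6786],
                }
            }
        }
    }
    for doc in users.find(query):
        print(doc["_id"])
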
[13:28:45] <msup> einyx: we'll check the top command output. We looked into the mongotop shell command, which looks like it prints a summary of the most-used collections. We'll double-check differences between secondaries. We checked disk status and all looks fine
[13:37:15] <jacksnipe> deathanchor: you can build a 2dsphere on legacy coordinate pairs
[13:37:26] <einyx> msup: have you tried to force a repair?
[13:37:29] <jacksnipe> deathanchor: or you can write a tiny little script to migrate from one to the other
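
A sketch of the "tiny little script" jacksnipe mentions, rewriting legacy [lng, lat] pairs as GeoJSON Points (collection and field names are assumptions):

    from pymongo import MongoClient

    coll = MongoClient()["test"]["places"]   # hypothetical collection

    # Only touch legacy-format documents, where "loc" is still an array
    # (BSON type 4 = array).
    for doc in coll.find({"loc": {"$type": 4}}):
        lng, lat = doc["loc"]
        coll.update_one(
            {"_id": doc["_id"]},
            {"$set": {"loc": {"type": "Point", "coordinates": [lng, lat]}}},
        )

    # A 2dsphere index accepts either format; (re)build it if needed.
    coll.create_index([("loc", "2dsphere")])
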
[13:37:42] <einyx> msup: or elect a new primary and see if it makes a difference?
[13:38:29] <msup> einyx: we let mongo 2.6.9 rebuild the dataset from scratch and tried to change the primary
[13:44:18] <deathanchor> jacksnipe: yeah I have to write up the converter :P more work for me. damn devs
[13:44:34] <cheeser> RAGE!
[14:16:10] <nixstt> if I add a new member to a replica set, does it still need to catch up with the oplog, or only if you copy data from the datadir directly?
[14:20:22] <deathanchor> nixstt: it always needs to catch up
[14:20:28] <deathanchor> unless you can stop time
[14:29:39] <nixstt> can somebody help me shed some light on this: http://pastebin.com/ZW2b6zqv
[14:45:18] <L0s3[R]> http://pastebin.com/WAdSYXUd how can I tell mongo that [1,2] and [2,1] are one??
[14:49:09] <nixstt> I have rollback data on a primary, can I just remove this data file if i don’t want to merge it?
[15:45:33] <svm_invictvs> Somebody told me they added two-phase commits to MongoDB
[15:45:45] <svm_invictvs> as in, to the engine itself - is that true? 'cause i can't find it anywhere
[15:45:51] <svm_invictvs> I think this somebody was smoking crack when they told me that.
[16:45:48] <s2013> is it possible to pass in an argument for limit that would mean no limit?
[16:45:53] <s2013> like limit(0)?
[16:47:31] <kali> usize::max_value() ?
[16:48:04] <kali> s2013: ^
[16:48:17] <s2013> hmm i think limit(0) worked
[16:48:36] <s2013> cause we expect a limit in the function.. so i want to set it to unlimited. i dont want to change much of the code
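
For the record, limit(0) does mean "no limit" in the shell and in the drivers; a pymongo sketch of the pattern s2013 describes (names are made up):

    from pymongo import MongoClient

    coll = MongoClient()["app"]["items"]     # hypothetical collection

    def fetch(filter_doc, limit=0):
        # limit(0) is equivalent to setting no limit, so a caller that must
        # always pass a limit can pass 0 to mean "unlimited".
        return list(coll.find(filter_doc).limit(limit))

    everything = fetch({})            # limit=0 -> every matching document
    first_ten = fetch({}, limit=10)   # a real cap still behaves as usual
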
[16:55:02] <svm_invictvs> I'm going to go with smoking crack
[16:55:06] <svm_invictvs> because i can't find it anywhere
[17:01:22] <kali> s2013: oh man, i'm sorry, i mixed up two channels :)
[17:01:36] <s2013> its cool
[17:01:37] <kali> s2013: what i said makes no sense for mongodb
[17:25:17] <symbol> Would it be safe to say, that MongoDB is a perfectly acceptable choice for data that's relational? I've found myself caught up with the term "relational" and have read blogs saying not to use MongoDB where "relational" data is required. I'm beginning to think that the word "relational" isn't the best for the scenario and in reality, it means "JOINS".
[17:25:36] <symbol> That might not make much sense...
[17:25:48] <StephenLynx> "Would it be safe to say, that MongoDB is a perfectly acceptable choice for data that's relational?"
[17:25:48] <StephenLynx> no
[17:26:11] <StephenLynx> it is acceptable, but not "perfectly"
[17:26:22] <StephenLynx> you won't have any guarantee your relations are sane.
[17:26:24] <StephenLynx> ever.
[17:26:27] <mike_edmr> it depends on access patterns
[17:29:25] <mike_edmr> i think the worst cases are where data needs to be grouped lots of different ways
[17:30:13] <mike_edmr> because then you either have to do substantial manipulation of documents to get "normalized" data into the structure you want
[17:30:15] <StephenLynx> not necessarily, because you have $group
[17:30:18] <mike_edmr> or
[17:30:31] <StephenLynx> so grouping is not an issue by itself.
[17:30:34] <mike_edmr> when you go to modify it you have to do redundant modification
[17:30:34] <symbol> Whoops - had to take the dog out
[17:30:49] <mike_edmr> StephenLynx: i didnt mean $group
[17:30:58] <StephenLynx> " needs to be grouped lots of different ways"
[17:31:11] <mike_edmr> yeah but you assumed i meant $group
[17:31:29] <StephenLynx> no, I assumed you meant grouping.
[17:31:29] <mike_edmr> its an english word too
[17:31:37] <mike_edmr> which doesnt have such a specific meaning
[17:31:41] <StephenLynx> grouping
[17:31:43] <StephenLynx> means grouping
[17:31:56] <mike_edmr> what i am talking about
[17:32:08] <StephenLynx> grouping
[17:32:09] <mike_edmr> is when a single attribute needs to appear in many different document structures
[17:32:13] <StephenLynx> ah
[17:32:23] <mike_edmr> that could be called
[17:32:28] <mike_edmr> grouping attributes differently
[17:32:30] <StephenLynx> you should've been more specific then.
[17:32:32] <mike_edmr> no
[17:32:38] <StephenLynx> and again
[17:32:42] <StephenLynx> $group can do that.
[17:33:31] <symbol> Hm - so I realize I can't guarantee sane relationships, but I've yet to work with data that doesn't involve relationships.
[17:33:54] <StephenLynx> you can have relations.
[17:34:01] <StephenLynx> but they will mean jack on the database.
[17:34:08] <symbol> I'd just need to be OK with not having multi-document atomicity?
[17:34:28] <mike_edmr> yes you need to really be ok with it
[17:34:35] <StephenLynx> yeah
[17:34:41] <StephenLynx> A-OK
[17:35:06] <symbol> I can't imagine relations with eventual consistency...whew.
[17:35:32] <symbol> Which, isn't a default unless I allow the application to read from secondaries.
[17:36:07] <StephenLynx> that is why if your data is extremely relational and depends on said relations
[17:36:11] <StephenLynx> you use a relational database
[17:36:32] <symbol> Would you consider blog post -> author to be extremely relational?
[17:36:44] <mike_edmr> no
[17:37:08] <symbol> Can I get an example of an extreme? I think that might be the catch in my mind.
[17:37:18] <mike_edmr> you cant measure "how relational" something is to get an idea of whether mongo is a good use
[17:37:21] <mike_edmr> its
[17:37:25] <mike_edmr> how often are you going to change that relationship?
[17:37:58] <symbol> That makes sense. You'll have to disregard my bad semantics.
[17:39:24] <symbol> I've watched a majority of the MongoDB webinars on this topic, read their blog posts, and lived in Stack Overflow but I just haven't been able to see the fine line yet.
[17:40:26] <mike_edmr> even with an application that is not 'well suited' to mongo, i think if you're careful you could make it work great
[17:40:37] <mike_edmr> but there might be a higher tendency to fall into some antipatterns
[17:40:49] <symbol> That's sort of what I picked up.
[17:41:14] <symbol> It makes sense that the webinar folk assert that Mongo can work for almost anything from a marketing standpoint.
[17:41:20] <mike_edmr> where you have to modify multiple documents or do complicated queries because of not planning the structure out well
[17:41:30] <mike_edmr> yeah
[17:44:04] <symbol> Thanks for the comments - I don't have many devs around me who know much about NoSQL or MongoDB. It's nice to hash out the confusing thoughts.
[18:03:59] <diegoaguilar> Hello, I got some node applications running with sails-mongo adapter
[18:04:04] <diegoaguilar> which finally uses mongodb module
[18:08:09] <GitGud> >finally
[18:21:43] <nixstt> one of my secondaries is stuck in rollback state what can I do to get it back up?
[18:27:26] <diegoaguilar> Has anyone here got experience with mongolab?
[18:27:50] <diegoaguilar> or mongo backups in general?
[19:58:37] <mastyd> I'm going to be saving position data for players in a game in Mongo. I'm not sure if I should go with a sub-document model or a referential model. I'm going to be storing anywhere from 800-2000 positions for a player. Any tips?
[19:59:59] <mastyd> The main queries I'll be doing on the position data is getting all the positions to do distance calculations, or simply the last position. Nothing complex
[20:02:16] <mastyd> So I assume embedded is better?
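
For mastyd's access pattern (all positions for distance calculations, or just the last one), an embedded-model sketch; the field and collection names are invented, and 800-2000 small positions stays comfortably under the 16 MB document limit, though a constantly growing array is exactly the document-growth pattern discussed earlier in the channel for MMAPv1.

    from pymongo import MongoClient

    players = MongoClient()["game"]["players"]   # hypothetical collection

    # Embedded model: the positions live in an array on the player document.
    players.update_one(
        {"_id": "player42"},
        {"$push": {"positions": {"x": 1.0, "y": 2.0, "t": 1436875200}}},
        upsert=True,
    )

    # All positions, e.g. for distance calculations:
    doc = players.find_one({"_id": "player42"}, {"positions": 1})

    # Only the most recent position, via $slice in the projection:
    last = players.find_one({"_id": "player42"}, {"positions": {"$slice": -1}})
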
[20:56:10] <jonnymcc> hello
[20:56:36] <jonnymcc> does anyone know in mongodb 3.0 if I can change a user password across all collections?
[20:56:56] <StephenLynx> noo
[20:57:58] <jonnymcc> so I would have to do use db; db.changeUserPassword('a', 'b'); use db1; db.changeUserPassword('a','b')?
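
Roughly, yes: users are defined per database, so the password change has to be run in each database where that user exists. A pymongo sketch of the loop, using the updateUser command that the shell's changeUserPassword() helper wraps (user name, password, and database names are taken from the question above):

    from pymongo import MongoClient

    client = MongoClient()                   # hypothetical connection

    for db_name in ["db", "db1"]:
        client[db_name].command("updateUser", "a", pwd="b")
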
[22:40:52] <pokEarl> Wait what, why is this channel not showing up in the channel list?
[22:42:23] <StephenLynx> illuminati
[22:47:47] <toastedpenguin> I have 1 mongodb, in the db there are 2 collections, one I created ("batch") and then system.indexes. I set the batch collection to be capped at a size less than the available disk space, however I just discovered the disk in question ran out of space and the mongodb data is the only thing on this disk
[22:48:11] <toastedpenguin> looking for reasons I still ran out of space and how to prevent it in the future
[22:50:24] <toastedpenguin> here is the output of db.stats() http://pastebin.com/c5n1Pvf8
[22:50:44] <toastedpenguin> it shows 3 collections but when I run show collections I only see the 2 I referenced
[22:51:04] <toastedpenguin> current size of the db is 219G
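
One thing worth double-checking in toastedpenguin's setup, sketched with made-up names: a capped collection's size is specified in bytes, and on MMAPv1 the on-disk files (fileSize in db.stats()) can exceed the cap because data files are preallocated and indexes take space of their own.

    from pymongo import MongoClient

    db = MongoClient()["logs"]               # hypothetical database name

    # The capped "size" is in bytes; it's easy to make the cap far larger
    # than intended by treating the number as MB or GB.
    db.create_collection("batch", capped=True, size=50 * 1024 ** 3)  # ~50 GB cap

    stats = db.command("collstats", "batch")
    print(stats.get("capped"), stats.get("storageSize"))

    # Compare against the actual on-disk data file size for the database.
    print(db.command("dbstats").get("fileSize"))
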
[23:12:45] <polydaic> what is the best way to debug a lock/io error
[23:31:08] <droid909> hi
[23:31:59] <droid909> am i on an old page? http://docs.mongodb.org/manual/tutorial/install-mongodb-on-debian/ - here it says that currently packages are only available for Debian 7 (Wheezy).
[23:32:07] <droid909> what should i do with my debian 8.1 ?