PMXBOT Log file Viewer


#mongodb logs for Sunday the 10th of April, 2016

[12:43:14] <blz> Hello, I'm trying to seed a database with mongoimport, but I'm getting an exception wrt the bson object size being too large. This is surprising, as none of my objects are above the 16MB limit. https://paste.ubuntu.com/15734094/
[12:43:30] <blz> What can I do?
[12:43:57] <Mixxit> why cant i search UuidLegacy by string
[12:48:03] <Mixxit> screw this back to sql ill try it later
[12:51:09] <kurushiyama> blz Curious.
[12:52:46] <blz> kurushiyama: I know, right? I'm surely doing something stupid. Here's my import command: mongoimport --host mongodb --collection item --db antidox --file /tmp/example.json
[12:55:12] <kurushiyama> Hm, there was a problem when the max object size was applied to the original JSON docs instead of the resulting BSON docs.
[12:55:29] <kurushiyama> But that was fixed in like the stone age, iirc.
[12:55:51] <blz> kurushiyama: yes, and there's no way I'm hitting that limit
[12:55:52] <blz> :/
[12:56:19] <kurushiyama> How was the JSON created? Not that we have a "runaway" here.
[12:57:58] <kurushiyama> blz ^
[12:58:49] <kurushiyama> blz And may I ask which exact version you are using?
[12:58:50] <blz> kurushiyama: with mongo's export function
[12:59:59] <blz> kurushiyama: i'm evaluating the presence of my items with `use <dbname>` and `db.<collectionname>.count()` There's no mistake there, right?
[13:01:10] <kurushiyama> blz It depends on whether you have a sharded environment or not. There is a bug which _may_ cause count to return a wrong document count in sharded environments.
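(For context on the sharded-count caveat mentioned above: in the mongo shell, an aggregation gives an accurate count where `count()` can over-report. Sketch only; the collection name is taken from blz's import command, and this needs a running deployment.)

```javascript
// count() may be inaccurate on sharded clusters (orphaned documents,
// in-flight chunk migrations). An aggregation counts actual documents:
db.item.count()
db.item.aggregate([{ $group: { _id: null, n: { $sum: 1 } } }])
```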
[13:01:34] <blz> kurushiyama: can you elaborate on what you mean by "shared environment" ?
[13:01:55] <kurushiyama> blz Sorry, "sharDed"
[13:02:11] <blz> ah! This is most definitely _not_ the problem :)
[13:02:34] <blz> Right now I'm running a mongod instance locally, with defaults
[13:02:41] <kurushiyama> Which version?
[13:03:00] <kurushiyama> oh, wait
[13:03:00] <blz> kurushiyama: db version v3.2.4
[13:04:45] <kurushiyama> blz could you leave out the --file param and just use the path as a positional parameter?
[13:04:57] <blz> kurushiyama: I've tried that as well
[13:05:28] <blz> kurushiyama: latest attempt is `mongoimport --db antidox -c item example.json`
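(Side note on the invocations being tried above. The database/collection names are from the conversation; the `--jsonArray` variant is a guess at the cause, not something confirmed in the log: without that flag, mongoimport expects one document per line, and feeding it a file that is one big JSON array can produce exactly this kind of "object too large" error, since the whole array is parsed as a single document.)

```shell
# Positional file argument instead of --file:
mongoimport --db antidox -c item example.json

# If the export is a single JSON array rather than one document per line:
mongoimport --db antidox -c item --jsonArray example.json
```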
[13:06:07] <kurushiyama> blz Just want to rule out things. ;)
[13:06:41] <blz> kurushiyama: I definitely appreciate the sanity-checking! :)
[13:06:57] <blz> kurushiyama: I'm often insane with such things ;)
[13:07:00] <kurushiyama> Curious and curiouser.
[13:07:29] <kurushiyama> blz Did you do that as a backup?
[13:07:37] <blz> the export?
[13:08:07] <blz> kurushiyama: btw here's the full log output https://paste.ubuntu.com/15734700/
[13:10:25] <kurushiyama> blz Yes, the export. And what size does the export have?
[13:10:57] <blz> kurushiyama: 308.3 MB
[13:11:35] <blz> And I'm about 99% sure it was exported with default options. My colleague is currently asleep (EST time zone) so I can't confirm :/
[13:11:43] <kurushiyama> blz Have you modified the export as part of an ETL procedure?
[13:12:06] <blz> kurushiyama: not that I know of? I'm not sure what ETL procedure means
[13:12:46] <blz> ah, extract/transform/load. Again I'm 99% sure that it was done with default settings. I guess that's something I should clarify
[13:12:57] <blz> I should be able to find out within an hour or so
[13:26:33] <Siegfried> Anyone familiar with mongoose?
[13:29:17] <Siegfried> lol
[13:29:38] <Siegfried> Learning mongoose for the first time
[13:29:55] <kurushiyama> Siegfried All I can say about it is that I see it as conceptually wrong.
[13:30:13] <Siegfried> kurushiyama: I'm wondering why it adds an s to create a collection name when I use mongoose.model ?
[13:30:17] <Siegfried> That seems...bad
[13:30:27] <Siegfried> for instance, user becomes users, dish becomes dishes?
[13:30:32] <Siegfried> What happens if they get their grammar wrong?
[13:30:59] <Siegfried> I have to imagine there are thousands of edge cases where the language is weird
[13:31:01] <kurushiyama> Siegfried Well, it should not matter to you in case you stay within the mongoose tracks, should it?
[13:31:16] <Siegfried> kurushiyama: What do you mean?
[13:31:39] <kurushiyama> Siegfried A collection holding users might even be called "pizzaToppings" as long as mongoose gets it sorted out, no?
[13:31:59] <Siegfried> kurushiyama: For sure, I'm just wondering because mongoose automatically creates the collection name based on a string you give it?
[13:32:24] <Siegfried> to me this is bizarre: mongoose.model('user', userSchema) creates a collection called 'users'
[13:32:35] <Siegfried> Why not just have the first function parameter be the name of the collection in the first place?
[13:32:39] <kurushiyama> Siegfried Well, a design decision, and not the best one, imho. Can't you name them manually?
[13:32:48] <Siegfried> kurushiyama: Not sure
[13:33:07] <kurushiyama> Siegfried http://stackoverflow.com/a/35215407/1296707
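(To illustrate the pluralization Siegfried is asking about: a deliberately simplified approximation of the kind of rules mongoose applies to model names. This is not mongoose's actual pluralizer, which has many more rules and its own edge cases; it is just a sketch of the behavior.)

```javascript
// Rough approximation (assumption: simplified) of model-name pluralization:
function pluralize(name) {
  if (/(s|x|z|ch|sh)$/.test(name)) return name + 'es';           // dish -> dishes
  if (/[^aeiou]y$/.test(name)) return name.slice(0, -1) + 'ies'; // category -> categories
  return name + 's';                                             // user -> users
}

console.log(pluralize('user')); // users
console.log(pluralize('dish')); // dishes
```

Per the linked Stack Overflow answer, `mongoose.model()` accepts an explicit collection name as a third argument (e.g. `mongoose.model('user', userSchema, 'user')`), which bypasses pluralization entirely.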
[13:33:31] <kurushiyama> Siegfried But there are more subtle problems with a much larger impact, imho
[13:33:34] <Siegfried> kurushiyama: Oh sweet thank you
[13:33:45] <Siegfried> kurushiyama: I wouldn't know - literally started reading about mongoose an hour ago
[13:34:21] <kurushiyama> Siegfried The problem is that mongoose treats MongoDB as sort of an object store and sort of enforces this approach on you.
[13:34:38] <Siegfried> kurushiyama: Isn't that what mongodb is?
[13:34:48] <Siegfried> I've been seeing it as basically a collection of json files
[13:34:49] <kurushiyama> Siegfried No, not at all.
[13:35:10] <Siegfried> I guess I'm still at the novice level
[13:35:50] <kurushiyama> Siegfried Well, BSON is not JSON (which is a less subtle difference than people think). And MongoDB is a document oriented database.
[13:36:37] <Siegfried> kurushiyama: I get that
[13:36:49] <kurushiyama> While it is true that JSON represents objects, treating MongoDB as an object store takes away many of its advantages.
[13:37:04] <Siegfried> I see
[13:37:29] <Siegfried> Can you help me understand this real quick
[13:37:29] <kurushiyama> Siegfried Let us take a relatively simple example of a social network
[13:38:21] <kurushiyama> Siegfried When treating MongoDB as an object store, you'd have followings either stored on the followed user or the user following, right?
[13:38:29] <Siegfried> yes
[13:39:22] <kurushiyama> Siegfried Well, and here is where the problem starts. But let us take a step further and add "likes", "posts" and alike into the mix.
[13:40:21] <kurushiyama> Siegfried When you include all that into a user model (as an object store would dictate), you will run into several problems, not the least being the 16MB size limit of BSON docs.
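(The two modelling styles kurushiyama contrasts can be sketched with hypothetical field names. Embedding unbounded arrays is what eventually hits the 16MB BSON document limit; referencing keeps every document small and bounded.)

```javascript
// Object-store style: everything embedded in one user document.
const embeddedUser = {
  _id: 'u1',
  name: 'alice',
  posts: [{ text: 'hello' }], // grows without bound
  likes: [],                  // grows without bound
  followers: []               // grows without bound
};

// Document-oriented style: unbounded relations live in their own collections.
const user   = { _id: 'u1', name: 'alice' };
const post   = { _id: 'p1', userId: 'u1', text: 'hello' };
const follow = { follower: 'u2', followed: 'u1' };

console.log(post.userId === user._id); // true
```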
[13:40:38] <Siegfried> That makes sense
[13:41:32] <kurushiyama> Siegfried I did not do much with mongoose, just playing around with it, but that was only the _first_ problem I had with it – and the last, admittedly.
[13:42:52] <kurushiyama> Siegfried And, as far as I have seen it so far, mongoose actually encourages overembedding: http://blog.mahlberg.io/blog/2015/11/05/data-modelling-for-mongodb/
[13:43:36] <kurushiyama> Siegfried So my suggestion would be to learn MongoDB. Raw. On the shell.
[13:44:01] <kurushiyama> Siegfried no fancy GUI tool.
[13:44:20] <Siegfried> kurushiyama: That's what I'm trying to do, but only started two days ago :D
[13:44:35] <kurushiyama> Siegfried https://docs.mongodb.org/manual/applications/data-models-tree-structures/
[13:44:42] <Siegfried> Main problem is I am following a coursera course and they only did a brief mongo introduction before focusing on mongoose
[13:44:46] <Siegfried> Thanks, will check out the links
[13:45:04] <kurushiyama> Siegfried https://docs.mongodb.org/manual/core/data-modeling-introduction/
[13:45:39] <kurushiyama> Siegfried Plus the basic crud operations. That should do it for starters ;)
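(The basic CRUD operations kurushiyama recommends learning, as a shell sketch; collection and document contents are made up, and this requires a running mongod.)

```javascript
db.users.insertOne({ name: 'Siegfried', roles: ['frontend'] })
db.users.find({ name: 'Siegfried' })
db.users.updateOne({ name: 'Siegfried' }, { $push: { roles: 'backend' } })
db.users.deleteOne({ name: 'Siegfried' })
```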
[13:46:50] <Siegfried> Thanks very much :)
[13:47:09] <Siegfried> I have done mostly frontend stuff but am trying to learn backend too :D
[13:47:37] <kurushiyama> Siegfried Hm, how to put it politely. If an introductory course relies on an ODM considered flawed even by you, that should tell you something about whether it is worth following this class ;)
[13:48:26] <Siegfried> kurushiyama: That is a good point, and will definitely do a lot of self reading. I just like to follow courses because they are a way for me to make sure I am making steady progress
[16:19:13] <zsoc> If I can reiterate my question. I'm having this odd issue where I can only login with a user/pass with my application running on the device the user/pass was registered on. The db is shared and accessed remotely. I'm using passport/mongoose/express4/nodejs, as a 'local' strategy for passport.
[16:19:59] <zsoc> So If i run the application locally and create user1, and then run the application on the server where the db is located physically... and create user 2 - I can only successfully login with user 1 while running local and user 2 while running remote... even tho the DB clearly shows both users with their hashes/salts.
[16:23:20] <zsoc> Does this have something to do with how hashes/salts work in reference to the env the application is running on? Maybe this is intended behavior...
[16:24:26] <zsoc> I know you can adjust things like salt length, iterations, and key length.. but these are all default. Unless there's some randomization in the way the libs are installed through npm?
[16:30:30] <zsoc> Alright, I guess if the versions of passport/mongoose or their wrappers were different, then the digest algos might be different. That would cause the issue.
[16:42:55] <StephenLynx> also, mongoose is really bad, zsoc
[16:43:02] <StephenLynx> you will be better off if you don't use it.
[16:44:42] <Siegfried> Can confirm, currently trying to figure out why mongoose is so weird
[16:48:59] <leolove> Hi. Never tried mongodb myself but have some questions for ops guys. Is it easy to maintain a cluster of mongodb with automatic shard, automatic replication, multi-master, high availability, etc?
[16:58:53] <kurushiyama> StephenLynx You might want to have a look in the channel logs. I am not a JS type, so my explanation to Siegfried of why Mongoose is a bad choice might need improvement.
[16:59:34] <StephenLynx> first: its performance is a joke
[16:59:41] <StephenLynx> it is nearly 6x slower than the native driver
[16:59:46] <kurushiyama> leolove Well, setting up one sure is not. But I make a living from helping people to utilize and run it properly. Go figure. ;)
[16:59:56] <kurushiyama> StephenLynx THAT bad. Whoa...
[17:00:09] <StephenLynx> second: it turns _id into a string instead of using an actual ObjectId type to handle it
[17:00:18] <StephenLynx> third: it doesn't handle dates properly
[17:00:28] <StephenLynx> fourth: it is pointless.
[17:00:39] <kurushiyama> StephenLynx My argument was that it leads one to treat MongoDB as an object store, leading to known problems such as overembedding and stuff.
[17:01:02] <StephenLynx> hm, dunno about that.
[17:01:21] <StephenLynx> afaik nothing keeps you from using fake relations with it
[17:01:38] <kurushiyama> Whereas your third point seems like a total killer from the technical side.
[17:02:16] <StephenLynx> everything about mongoose is a killer from a technical point of view.
[17:03:48] <StephenLynx> those guys put some serious effort in making it bad.
[17:10:55] <kurushiyama> StephenLynx Obfuscated ODM contest? With extra points for slowdowns? ;)
[17:15:21] <Cryptite> Hey folks, I have a replica set question
[17:15:39] <Cryptite> I'm connecting to my RS via the java driver; I have two members of my RS right now, both with their hostnames as their public hostname
[17:15:45] <Cryptite> something.hostname.com
[17:16:07] <Cryptite> In said Java application, I'm connecting to the replica set via localhost, but it's eventually resolving to the public hostname connection
[17:16:24] <Cryptite> So I'm not 100% but I think it's going through the public ip, rather than using a localhost connection for all its ops
[17:16:39] <Cryptite> Is that correct, ok? What should I really be doing...?
[17:17:57] <Cryptite> I suppose more specifically I get a message that goes like this: "Canonical address the.address.com:27017 does not match server address. Removing localhost:27017 from client view of cluster"
[17:19:45] <kurushiyama> Cryptite Sounds like a mess.
[17:20:19] <kurushiyama> Cryptite Either access them via public addresses or localhost. Do not mix that. And especially do not mix during replset setup.
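(What "do not mix" looks like in practice, as a shell sketch; the hostnames are placeholders, and this needs actual replset members running. Each member is registered under one name that every node, and every client, can resolve.)

```javascript
// Initiate the replset with one consistent, resolvable hostname per member:
rs.initiate({
  _id: 'rs0',
  members: [
    { _id: 0, host: 'node1.example.com:27017' },
    { _id: 1, host: 'node2.example.com:27017' }
  ]
})
// Clients then connect using those same names, never localhost:
// mongodb://node1.example.com:27017,node2.example.com:27017/?replicaSet=rs0
```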
[17:21:15] <Cryptite> FWIW pinging both localhost and the public ip get the same timings, so it may not matter
[17:21:39] <kurushiyama> With public addresses meaning "resolvable to an IP address by all nodes of the replset"
[17:22:05] <Cryptite> well that's the concern, right? I don't think I can add the primary as localhost, otherwise other nodes won't see it, because localhost
[17:22:42] <Cryptite> I think this may all be moot, it may be no consequence that it's resolving to the public ip; it may be in name only, and I may not be suffering any performance
[17:22:45] <kurushiyama> Cryptite https://docs.mongodb.org/manual/tutorial/deploy-replica-set/#connectivity
[17:23:06] <Cryptite> yeah I have all of that setup properly
[17:23:11] <kurushiyama> I doubt that.
[17:23:24] <Cryptite> er
[17:23:38] <kurushiyama> Cryptite If you had, you would not have the described problem. ;)
[17:23:51] <Cryptite> well that's a bit my point, may not be a problem
[17:24:03] <Cryptite> after the ping test, It may be nothing at all
[17:24:20] <Cryptite> may just say the public ip but i'm still getting connections to the db in like 3-6ms so
[17:24:22] <kurushiyama> Cryptite "Removing localhost:27017 from client view of cluster"? I'd call that a problem.
[17:24:44] <Cryptite> fair point
[17:24:46] <Cryptite> well
[17:24:54] <Cryptite> it removes it from the lcient view
[17:25:09] <Cryptite> client* view, but immediately afterwards, it still picks up the same server in the RS list
[17:25:57] <Cryptite> http://pastebin.com/W8LtJBFk is the total log
[17:26:26] <Cryptite> in the end it discovers the primary (which is localhost)
[17:30:12] <kurushiyama> Cryptite That is exactly the point. It is on your local machine, but not localhost.
[17:30:46] <Cryptite> right, I think the lesson learned is connect to the proper set ip as defined in the RS
[17:30:48] <Cryptite> which is the public ip
[17:30:49] <Cryptite> not localhost
[17:31:08] <Cryptite> it figures it out anyway, but i was just trying to be fancy and don't need to be in this case
[17:31:34] <kurushiyama> Cryptite Aye. And here comes the reason: It is a Very Bad Idea™ to run your application on a mongodb node.
[17:32:44] <Cryptite> I haven't seemingly had issues with it thus far, but that may be due to the fact that the bandwidth to it is pretty low
[17:35:25] <kurushiyama> Cryptite MongoDB (depending on the version) will take up to 85%-90% of the available RAM. Running your application and MongoDB on the same machine will make them battle for resources when you need performance the most: during high-load times.
[17:35:46] <kurushiyama> Cryptite Very Bad Idea™
[17:35:58] <Cryptite> huh, the RAM thing is news to me...
[17:36:25] <kurushiyama> Cryptite Well, MongoDB keeps a so-called working set in RAM. Simplified, the indices and LRU data.
[17:36:38] <kurushiyama> Sorry, data with LRU eviction
[17:37:41] <kurushiyama> Cryptite WiredTiger even holds 2 copies of the same data in RAM in case of write operations for a certain amount of time.
[17:38:36] <Cryptite> My mongo instance is currently eating, in total, .9% of my ram
[17:41:13] <Cryptite> but that's good to look out for i guess, i could test during more intense data ops
[17:41:15] <Cryptite> I'm running latest mongo btw
[17:44:00] <kurushiyama> Cryptite With default settings, I assume. So you have a compression on almost every important part, at the expense of CPU which... ...right, the application and mongod would battle for that, too.
[17:44:20] <kurushiyama> Cryptite Well, think of several hundred gigabytes of data.
[17:44:38] <kurushiyama> Cryptite plus according indices.
[17:56:10] <Cryptite> I believe my entire Mongo journal is presently ~2.8gb
[17:56:23] <Cryptite> at least /var/lib/mongodb is anyway
[19:01:43] <mylord> what’s this mean? { lastErrorObject: { updatedExisting: false, n: 0 }, value: null, ok: 1 }
[19:01:55] <mylord> so it didn’t update, but it found 0 records to update?
[19:14:43] <rickardo1> Is is possible to have the document returned as an assoc array in php? Now I got object
[19:30:52] <zsoc> I don't disagree mongoose may or may not be bad, but I don't see how using it would affect the authentication issue i'm having with passport
[20:24:41] <kurushiyama> Cryptite Sorry, was having dinner. The journal has a fixed size, iirc. 5% of the available disk space of the partition, iirc.
[20:25:07] <kurushiyama> edrocks Greetings, gopher!
[20:25:57] <edrocks> kurushiyama: whats up
[20:26:06] <edrocks> updating my mms agents
[20:26:18] <kurushiyama> edrocks Makes sense ;)
[20:26:46] <edrocks> kurushiyama: preparing to update to 3.2.4
[20:27:14] <kurushiyama> edrocks From which version?
[20:27:38] <edrocks> kurushiyama: 3.0.6
[20:27:53] <edrocks> I usually update every few months
[20:28:11] <edrocks> well probably about every 6 months
[20:28:41] <kurushiyama> edrocks Any particular reason to change minor?
[20:29:18] <edrocks> kurushiyama: just to keep updated. don't want to get too far behind
[20:29:54] <edrocks> was gonna switch to WiredTiger but I don't think it's safe to do so yet. Plus it's a pain to switch
[20:30:12] <kurushiyama> edrocks I beg to differ
[20:30:25] <edrocks> what do you mean?
[20:31:07] <kurushiyama> edrocks http://dba.stackexchange.com/a/121170/38841
[20:31:15] <kurushiyama> Note the update.
[20:32:27] <kurushiyama> edrocks And if you have a replset, changing the storage engine is straightforward.
[20:32:44] <edrocks> I have a replica set but it's 2 node
[20:33:00] <kurushiyama> edrocks 2 nodes + arb, right?
[20:33:19] <edrocks> kurushiyama: no just 2 nodes and yes I know it's bad
[20:33:45] <kurushiyama> edrocks Bad is not even starting to describe it, especially given the cost of an arbiter.
[20:36:21] <kurushiyama> edrocks But you are right, in that case changing the storage engine will be a pita
[20:36:46] <edrocks> I looked into it a bit and you have to resync everything
[20:37:23] <edrocks> doesn't sound fun and perf isn't an issue right now. I've got 48 cores and plenty of RAM
[20:37:26] <kurushiyama> edrocks Which should not be much of a problem. But with a 2 member replset, it becomes unavailable, basically.
[20:38:47] <edrocks> kurushiyama: I'm getting a new computer tomorrow so I'll probably throw an arbiter on my old dev machine
[20:38:50] <kurushiyama> edrocks Well, the point is that you get better performance for free. You could scale down a bit and add an arbiter ;)
[20:40:30] <edrocks> kurushiyama: did you switch over to WiredTiger yet? or just using it for new projects?
[20:41:03] <kurushiyama> edrocks I did several migrations to WT, and use it for all new projects and/or deployments.
[20:43:41] <kurushiyama> edrocks Advice: prefer higher clock speeds over many cores.
[20:44:01] <edrocks> kurushiyama: fine I'll upgrade
[20:45:14] <kurushiyama> edrocks What you should consider in case you migrate to WT is whether to have compression or not. Personally, I tend to compress the data files, and leave the indices and journal uncompressed.
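(The compression split kurushiyama describes maps onto mongod.conf roughly like this; a sketch of the relevant WiredTiger options, not a recommended production config.)

```yaml
storage:
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      journalCompressor: none     # journal uncompressed
    collectionConfig:
      blockCompressor: snappy     # data files compressed (snappy or zlib)
    indexConfig:
      prefixCompression: false    # indices uncompressed
```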
[22:55:21] <devdvd> Hi all, I'm setting up my mongo cluster to use ssl for server > server communication. I think I have it setup but I'd like to verify it through mongo that it's using SSL. Is there any way to do that?