PMXBOT Log file Viewer


#mongodb logs for Monday the 14th of September, 2015

[00:27:51] <_syn> okay, so the difference is mainly that profiling offers more information about the performance of an operation
[00:27:55] <_syn> coolios
[01:31:00] <jeromegn> Anybody has a nice BSON file sample to work with? I'm implementing the BSON spec and I've been having a hard time getting some solid sample data to properly test my implementation. Especially around fields like Code, CodeWithScope, Binary and others like that.
[01:34:00] <StephenLynx> use mongodump
[01:34:07] <StephenLynx> and it will give you raw bson, I guess
[01:35:28] <cheeser> yep
[01:35:37] <jeromegn> yea, but I'm not even sure how to produce a document with something like CodeWithScope
[01:35:40] <jeromegn> or Binary
[01:35:45] <jeromegn> from the mongo shell for instance
[01:41:06] <cheeser> oh. those things.
[01:41:16] <jeromegn> :)
[01:41:22] <cheeser> binary is easy enough, though. BinData() in the shell
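[Editor's note: for jeromegn's use case — solid sample data to test a BSON implementation against — a tiny hand-rolled document can serve as ground truth alongside mongodump output. A minimal sketch in plain Python (no driver needed; the helper name is mine) encoding `{"x": 1}` per the BSON spec's int32 element layout:]

```python
import struct

def bson_int32_doc(name: str, value: int) -> bytes:
    """Encode {name: value} as a BSON document with a single int32 element."""
    # element: type byte 0x10 (int32), NUL-terminated name, little-endian int32
    element = b"\x10" + name.encode("utf-8") + b"\x00" + struct.pack("<i", value)
    body = element + b"\x00"                        # trailing NUL closes the document
    return struct.pack("<i", 4 + len(body)) + body  # leading int32 = total byte length

doc = bson_int32_doc("x", 1)
print(doc.hex())  # 0c0000001078000100000000
```

[For the exotic types (CodeWithScope, Binary), dumping a collection that actually contains them — e.g. after `BinData()` in the shell, as cheeser suggests — is the surer path; the spec's grammar at bsonspec.org covers the remaining element layouts.]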
[05:03:55] <krethan> exit
[05:27:16] <_syn> hi guys
[05:27:24] <_syn> Some times im finding:
[05:27:24] <_syn> 2015-07-21T08:28:58.931+0000 [conn3905634] command Insight.$cmd command: update
[05:27:25] <_syn> What I expect:
[05:27:25] <_syn> 2015-09-12T04:18:43.437+0000 [conn10162561] update Insight.ActiveUsersByLocationToDate query:
[05:28:06] <_syn> has anyone seen this in logfile where instead of just the command (update in this instance) it adds the database name, appended by .$cmd command: ?
[05:28:39] <_syn> I was wondering why my regex wasnt parsing properly until I took a deeper look into the logfile and found this..
[05:46:03] <_syn> these are http://pastebin.com/fpT1DtU
[05:46:26] <_syn> blah
[05:46:35] <_syn> theres a full log, in the pastebin
[05:46:49] <_syn> they are the exact same record too, which is really odd
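[Editor's note: _syn's two sample lines can be handled by a regex that treats the `db.$cmd command: <op>` form and the plain `<op> <namespace>` form as alternatives. A rough sketch against the exact lines quoted above (the group names are my own, and real mongod 2.6-era logs have more variants than this pattern covers):]

```python
import re

# Match either "command <db>.$cmd command: <op>" or "<op> <namespace>"
LINE = re.compile(
    r"^(?P<ts>\S+) \[(?P<conn>conn\d+)\] "
    r"(?:command (?P<cmd_db>\S+)\.\$cmd command: (?P<cmd_op>\w+)"
    r"|(?P<op>\w+) (?P<ns>\S+))"
)

a = "2015-07-21T08:28:58.931+0000 [conn3905634] command Insight.$cmd command: update"
b = "2015-09-12T04:18:43.437+0000 [conn10162561] update Insight.ActiveUsersByLocationToDate query:"

ma, mb = LINE.match(a), LINE.match(b)
print(ma.group("cmd_op"), mb.group("op"))  # update update
```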
[07:35:52] <nfroidure> Are you still using the old Node MongoDB native driver or do you already use the 2.x version? If so, anything to know before switching?
[09:17:16] <fontanon> Hi everybody. Does anyone recommend a good guide to migrate to wiredTiger?
[09:19:07] <coudenysj> fontanon: http://docs.mongodb.org/master/release-notes/3.0-upgrade/ ?
[09:22:07] <fontanon> coudenysj, thnks
[12:07:28] <amitprakash> Hi, I am getting a SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server on my primary mongo in a replica set for tailed oplog queries
[12:07:34] <amitprakash> What gives
[12:32:07] <mortal1> Gentlemen, I'm trying to query on the properties of a list of objects that are nested a few levels. Could you take a look at http://pastebin.com/ASSgeqdT and tell me how one might go about writing such a query in mongo?
[12:32:42] <StephenLynx> >"_class" : "com.cache.NpaNxxCache",
[12:32:43] <StephenLynx> "item" : {
[12:32:43] <StephenLynx> "_class" : "com.NpaNxxNumberList",
[12:32:46] <StephenLynx> damn son
[12:33:03] <StephenLynx> what I would do is something like this
[12:33:50] <mortal1> StephenLynx: yeah I'm trying to raid my cache to avoid a very expensive api call
[12:34:19] <StephenLynx> find({$and:['item.npaNxxNumbers.npa':value],['item.npaNxxNumbers.nxx':othervalue]})
[12:34:50] <mortal1> oh snap, you can just drill down like that?
[12:34:55] <StephenLynx> aye
[12:35:09] <StephenLynx> what I wrote MIGHT be incorrect because of the nested sub-array
[12:35:17] <StephenLynx> having such a complex document usually is not worth it.
[12:35:58] <StephenLynx> but the point is that you can just use dot notation to query on nested elements
[12:51:08] <mortal1> So, I've updated my paste with the query I'm running. It seems to return everything... http://pastebin.com/4BqWudhE
[12:51:31] <mortal1> Which is odd; if there's no match I'd expect nothing
[12:51:52] <mortal1> (by everything I mean the whole collection, unfiltered)
[12:52:09] <StephenLynx> hm
[12:52:16] <StephenLynx> it's probably because it's an array
[12:52:26] <StephenLynx> check out array query operators.
[12:52:32] <StephenLynx> I think you need to use elemMatch
[12:53:26] <mortal1> I'll give it a shot thanks
[13:03:16] <mortal1> db.getCollection('npaNxxFromVonageCache').find({$and: [
[13:03:16] <mortal1> {'item.npaNxxNumbers': {$elemMatch: {npa : '978'}}}
[13:03:18] <mortal1> , {'item.npaNxxNumbers': {$elemMatch: {nxx : '274'}}} ]});
[13:03:36] <mortal1> ^ Tried that, still got the whole collection
[13:04:06] <StephenLynx> no
[13:04:14] <StephenLynx> you would have to use elemMatch outside that.
[13:04:28] <StephenLynx> $elemMatch:{$and:[blabla]}
[13:04:30] <StephenLynx> I guess
[13:04:39] <mortal1> oh
[13:04:50] <mortal1> interesting
[13:05:11] <StephenLynx> the funny thing is it returning everything when you have a clear condition that must return true
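[Editor's note: the distinction StephenLynx is circling — two separate $elemMatch clauses can each be satisfied by a *different* array element, while one $elemMatch carrying both fields requires a single element to satisfy both — can be sketched without a server. This is a toy matcher, not the real query engine:]

```python
def elem_match(array, conditions):
    """True if any single element satisfies every condition (like $elemMatch)."""
    return any(all(el.get(k) == v for k, v in conditions.items()) for el in array)

numbers = [{"npa": "978", "nxx": "111"}, {"npa": "555", "nxx": "274"}]

# Two separate $elemMatch clauses: each can pick a different element -> matches.
separate = elem_match(numbers, {"npa": "978"}) and elem_match(numbers, {"nxx": "274"})
# One $elemMatch with both fields: one element must carry both -> no match here.
combined = elem_match(numbers, {"npa": "978", "nxx": "274"})
print(separate, combined)  # True False
```

[If mortal1 wants "same element has npa 978 AND nxx 274", the canonical shape should be a single clause: `{'item.npaNxxNumbers': {'$elemMatch': {'npa': '978', 'nxx': '274'}}}`.]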
[13:07:02] <mortal1> Well, ty for your help so far. I gota drive in to work. I'll bbl.
[13:07:12] <StephenLynx> k
[13:39:05] <beekin> Is it true that MongoDB is really only meant for unstructured data?
[13:39:20] <Zelest> define unstructured.
[13:40:04] <beekin> Data that is related but might vary with what fields...like medical records?
[13:40:16] <beekin> To me, that seems like the "document model"
[13:40:39] <Zelest> MongoDB stores documents yes.
[13:40:52] <beekin> Whereas, a CMS can be made to fit the document model but it seems more logical to fit it to a relational model.
[13:41:24] <cheeser> not at all. CMSes are a great fit for mongodb
[13:42:00] <beekin> Because we can normalize data in a way that works?
[13:42:39] <StephenLynx> >really only meant for unstructured data
[13:42:39] <StephenLynx> no
[13:43:00] <StephenLynx> it doesn't work too great for data that is extremely relational.
[13:43:12] <StephenLynx> but there are no issues in using it for structured data.
[13:43:21] <StephenLynx> of its weaknesses, this is not one of them.
[13:47:14] <beekin> That makes sense.
[13:47:32] <beekin> If I required complex transactions...then a RDBMS would make sense.
[13:48:21] <StephenLynx> that too.
[13:48:25] <beekin> I'm sure this question is posed all the time. Thanks for entertaining me.
[13:48:30] <StephenLynx> np
[14:12:56] <nekyian> hey guys... I have a very large Mongo collection and I want to do something for every doc in that collection. Do I have an alternative to Document::all() to get documents in batches?
[14:13:24] <nekyian> I tried using skip()->take() but it misses documents
[14:16:59] <cheeser> what are you trying to do?
[14:42:57] <saml> are hidden replicaset members supposed to lag behind?
[14:43:18] <saml> oh it just caught up
[14:47:02] <StephenLynx> nekyian you can get a cursor and use next until you run all documents.
[14:47:38] <StephenLynx> or fetch one at a time, when you fetch X or fetch them all you perform the operation on them.
[14:47:40] <StephenLynx> repeat
[14:48:00] <StephenLynx> or if you don't need to read the document to perform the operation, just run the update with an empty query block
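[Editor's note: StephenLynx's fetch-X-at-a-time suggestion amounts to draining a cursor in fixed-size batches, which avoids the documents skip()->take() misses when the collection changes underneath you. A driver-agnostic sketch — with pymongo you'd pass `collection.find()` where the stand-in generator is used here:]

```python
from itertools import islice

def in_batches(cursor, size):
    """Yield lists of up to `size` items until the cursor/iterable is exhausted."""
    it = iter(cursor)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

docs = ({"_id": i} for i in range(7))  # stand-in for a large collection's cursor
for batch in in_batches(docs, 3):
    print(len(batch))  # 3, then 3, then 1
```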
[15:18:43] <Doyle> Hey. Are there any issues that may arise from changing the hostname of a mongodb RS host at the OS level, but not in the config. The config is correct. Just the os hostname is bad.
[15:19:32] <ehershey> I have seen that scenario cause problems
[15:20:27] <Doyle> What kind of problems?
[15:22:17] <ehershey> instances being confused about their identity
[15:22:35] <ehershey> and tests failing
[15:23:20] <Doyle> Well, centos doesn't handle multiple domains being specified in a DHCP option set, it concatenates them so you end up with something like hostname.ec2.internal.otherdomain.com
[15:23:25] <Doyle> it's so trash
[15:23:51] <Doyle> The instances are resolvable by hostname.ec2.internal, but not the concatenated one
[15:24:21] <Doyle> The replica set was setup with just the hostname:27017 in the conf and is working
[15:24:48] <Doyle> just the hostname is resolvable, and will continue to be post rename. The rename is just from the hostname.ec2.internal.otherdomain.com to hostname.ec2.internal
[15:34:11] <amitprakash> Hi, I am getting a SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server on my primary mongo in a replica set for tailed oplog queries
[15:34:13] <amitprakash> What gives?
[15:54:25] <mortal1> Well, still at it. Here's my latest http://pastebin.com/DzzEjPRz
[15:56:16] <mortal1> no doubt there's something I'm doing wrong with my elemMatch, but mongo doesn't give me any errors
[15:57:26] <mortal1> err.. see this one instead http://pastebin.com/ssCbXkRj
[15:57:58] <StephenLynx> that won't work
[15:58:01] <StephenLynx> at all
[15:58:19] <mortal1> bbiab (sry)
[15:58:23] <StephenLynx> you are just telling it to look at the fields npa and nxx, and the document doesn't have them
[15:58:42] <StephenLynx> the document has an array of objects that have these fields.
[15:59:09] <StephenLynx> ah, hold on
[15:59:53] <StephenLynx> nvm, I didn't notice you put "'item.npaNxxNumbers'" before
[17:22:02] <mortal1> StephenLynx: sorry, didn't mean to blow you off, co-workers dragged me off to lunch ;^)
[17:22:44] <StephenLynx> np
[17:56:05] <akoustik> i'm trying to set up a "cloud manager automation" trial. i have 4 replica set members and they were all recognized during setup, but at the step to actually add automation, i can't go on, and i get a page with "Error: Version not found". anyone seen this before?
[17:57:33] <cheeser> you should file a support ticket.
[17:57:34] <akoustik> http://pasteboard.co/DFvymh8.png
[17:58:13] <akoustik> very well! such a non-descript error message though.
[17:59:29] <cheeser> i think i have an idea where that error comes from but it's not really my area so I don't know for sure.
[18:01:34] <akoustik> hm. well, in related news, the reason i'm messing with the web manager is to figure out why my secondaries have been in "recovering" status for several days with no evidence that anything is actually going on.
[18:02:11] <akoustik> is it advisable to just unhook them, blow their db directories away and start over?
[18:02:57] <cheeser> http://docs.mongodb.org/master/tutorial/resync-replica-set-member/
[18:03:06] <cheeser> read over that and see if it applies
[18:03:19] <akoustik> thanks
[18:52:01] <mkjgore> hey folks, just curious, is there a way to query a running db to see what its "directoryPerDB" setting is set to?
[19:02:08] <preyalone> help, i'm getting TypeError: update() got multiple values for argument 'upsert' https://www.irccloud.com/pastebin/2B59SBnJ/
[19:05:04] <cheeser> what's with that ** ?
[19:05:06] <akoustik> i assume that you're not actually using that syntax in your code...
[19:09:47] <preyalone> ** expands dictionaries as if you typed out the keyword parameters by hand. I've also tried literally typing them out by hand, but same error
[19:10:02] <preyalone> (this is python)
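[Editor's note: preyalone's TypeError is what Python raises whenever a parameter arrives both positionally and again via keyword/`**` expansion. A minimal reproduction with a stand-in function whose signature mimics pymongo 2.x's `update(spec, document, upsert=False, ...)` — the function here is mine, not pymongo's:]

```python
def update(spec, document, upsert=False, **kwargs):
    """Stand-in for a driver method taking upsert as the third parameter."""
    return upsert

params = {"upsert": True, "multi": False}
try:
    # 'upsert' is passed positionally (third argument) AND inside **params
    update({"_id": 1}, {"$set": {"a": 1}}, True, **params)
except TypeError as e:
    print(e)  # update() got multiple values for argument 'upsert'
```

[So the thing to check is whether some layer (mongoengine, or the call site) already supplies upsert positionally before the `**` dict adds it a second time.]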
[19:13:56] <akoustik> oooh hah
[19:16:16] <preyalone> i'm using pymongo 2.7
[19:16:56] <preyalone> and mongoengine 0.8.8
[19:17:34] <akoustik> yeah sorry i forgot about dict expansion. so you tried it by hand... does pymongo maybe actually expect the 'upsert' arg to be in an unexpanded dict? (haven't used pymongo)
[19:19:04] <preyalone> right. i've already tried that.
[19:20:33] <akoustik> :\
[19:21:12] <silasx> So, logging question: I’ve set up a mongo v3 server, and it’s logging the full content of writes. Like, when I insert an object, it’s logging every attribute of that object in /var/log/mongodb/mongodb.log . Now, I don’t mind it logging the write event, but I really don’t want it to log all the content. ID only is ideal.
[19:22:22] <silasx> The closest I can find to addressing this in the docs is that I can set the log level to “only stuff higher than info” … but again, what if I don’t mind “wrote object with this ID on this DB”, but just don’t want it dumping the full content to my logs?
[19:23:07] <silasx> And as I’m trying to debug this, I want to know how to ask mongo what my current log settings are, but the most it will do is repeat by my explicit config settings
[19:26:18] <silasx> So, a) should mongodb be logging the full content of an insertion? b) how do look up the current log settings? and c) is it possible to turn off “log full content of insertion”?
[19:27:08] <silasx> and d) can anyone see my questions?
[19:29:10] <StephenLynx> a: that would include the inserted document? in that case, no, I don't think thats a good thing
[19:29:13] <StephenLynx> b: dunno
[19:29:16] <StephenLynx> c: dunno
[19:29:19] <StephenLynx> d: no
[19:29:20] <StephenLynx> :v
[19:29:21] <StephenLynx> v:
[19:29:23] <cheeser> b) db.getLogComponents()
[19:30:35] <silasx> @cheeser thanks!
[19:33:48] <silasx> Okay so I have query.verbosity (and a bunch of others) set to -1 … is that ideal?
[19:33:50] <silasx> brb afk
[19:47:07] <nathesh> Hi
[19:47:57] <nathesh> So I had a question on pymongo, I am inserting into a mongodb but on one find_one() it works and on the other it fails
[19:48:14] <nathesh> I have no idea why it works on one but fails on the other
[19:48:23] <nathesh> the data is the same
[19:48:29] <nathesh> fail meaning response is null
[19:50:57] <nathesh> can someone help me with this
[20:10:53] <daidoji> nathesh: example?
[20:11:03] <daidoji> your explanation probably isn't going to cut it
[20:11:10] <nathesh> https://bpaste.net/show/67edbdadce74
[20:12:17] <daidoji> nathesh: this is just data
[20:12:24] <daidoji> I don't see your queries
[20:12:36] <nathesh> one second
[20:16:08] <nathesh> https://bpaste.net/show/3dfe3b8a04ef
[20:18:34] <daidoji> nathesh: so what is your question? Both those queries look like they succeed
[20:18:46] <nathesh> but the second one doesn't
[20:18:58] <daidoji> how does it not?
[20:19:04] <nathesh> the second returns a none
[20:20:31] <nathesh> which I do not understand
[20:20:34] <daidoji> oh wait I see
[20:20:37] <daidoji> hmmm thats strange
[20:21:50] <nathesh> yeah I can't figure out why
[20:22:51] <nathesh> This is the data in mongo for the collection https://bpaste.net/show/ccf29eb23faf
[20:25:32] <daidoji> nathesh: man thats a weird one
[20:25:37] <daidoji> try unicode keys maybe?
[20:25:59] <daidoji> other than that, I'd guess there's some kind of weird type conversion stuff going on
[20:26:13] <daidoji> what happens if you type the arrays in those final dicts?
[20:26:24] <daidoji> oh wait! there it is
[20:26:36] <nathesh> yeah
[20:26:37] <nathesh> https://bpaste.net/show/c3e8b51099ab
[20:26:38] <daidoji> first query you're using unicode keys, second query you're using python str()
[20:26:42] <nathesh> this is the data
[20:27:04] <nathesh> yeah I changed
[20:27:14] <nathesh> wait
[20:27:17] <daidoji> and unicode keys didn't work?
[20:27:45] <nathesh> no the keys are string
[20:27:57] <nathesh> they have to be
[20:28:08] <daidoji> oh wait your'e right
[20:28:11] <daidoji> sorry
[20:29:21] <nathesh> yeah that is confusing me
[20:29:31] <nathesh> been stuck on it for more than hour now
[20:29:54] <daidoji> yeah it seems weird, maybe someone more knowledgeable than I can help
[20:30:18] <nathesh> do you know who I can ask?/
[20:30:40] <daidoji> umm a lot of people idle, not sure whom specifically
[20:30:48] <daidoji> have you tried your query key:value pairs one by one
[20:30:51] <daidoji> to see where it breaks?
[20:31:23] <nathesh> what do you mean?
[20:32:38] <daidoji> well like db_mongo.discover.find_one({elig_ids: [1541, 1921]})
[20:32:59] <daidoji> then db_mongo.discover.find_one({question_id: 641})
[20:33:02] <daidoji> etc....
[20:33:10] <daidoji> eventually one of those will break and you'll know where the issue is
[20:33:20] <daidoji> why it breaks, who knows
[20:33:29] <silasx> Hey amigos, I’m back. So, I can do .getLogComponents, and I see verbosity of everything set to -1 … should it be like that? is that the reason why I’m seeing full documents in the logs when they get inserted?
[20:33:30] <nathesh> ah good idea thanks!
[20:33:58] <daidoji> nathesh: good luck
[20:34:05] <daidoji> let me know what you find out, as its a curious example
[20:34:54] <qswz> Hi, I've installed mongodb and mongoose, when I connect to it with "mongodb://localhost/" it works, but not when I put "mongodb://<IP>", any idea why?
[20:40:05] <StephenLynx> I suggest you don't use mongoose
[20:40:24] <nathesh> daidoji: tax_ids is breaking
[20:40:34] <nathesh> now time to figure out why :)
[20:41:02] <nathesh> wait does the array ordering matter?
[20:47:05] <qswz> StephenLynx: ok will test with the native driver
[20:50:36] <silasx> Hey mongo experts, when I look at getLogComponents, I see verbosity set to 0, and but verbosity of every component is set to -1. Is that how it’s supposed to work?
[20:51:04] <silasx> It seems like with our v3 upgrade, by default it started dumping entire documents into the log that were being written, which doesn’t make sense as that’s a security risk.
[20:52:07] <silasx> I don’t want to be all “hatin’” on Mongo but … is that really what’s supposed to happen on an upgrade?
[20:54:04] <cheeser> you should probably file a support ticket
[20:54:14] <mkjgore> hey mighty channel, is there a way to get a non-"directoryPerDB" backup converted to a "directoryPerDB" format? We've got ourselves mongodb cloud backup but it seems to be converting our DB from one format to another
[20:54:17] <mkjgore> :-/
[20:54:39] <cheeser> mkjgore: iirc, you'd have to recreate your DB to change that setting.
[20:56:05] <mkjgore> cheeser: thanks for that. the thing is that the db was "directoryPerDB" and then the restore from Mongo's cloud service gave us a bunch of *.0 and *.ns files instead of our previous setup
[20:56:33] <mkjgore> so our server is already set (and has been) to directoryPerDB, the files we got back from mongo however… :-(
[20:57:13] <cheeser> backup files from the cloud backup? or mongodump?
[20:57:21] <mkjgore> from the cloud backup
[20:57:28] <cheeser> i don't believe the backup stores that kind of metadata.
[20:57:47] <cheeser> is your local server configured for directoryPerDB?
[20:58:11] <mkjgore> yep, local box (the one being backed up) uses (set to true) directoryPerDB
[20:58:47] <mkjgore> but when I've pulled down the latest snapshot (within the dl's tar gz file) it's just a pile of *.0 and *.ns files
[20:59:24] <cheeser> inside the archive it looks like that? or after you've restored?
[20:59:33] <mkjgore> inside the archive
[20:59:43] <cheeser> ah. well, do the restore and see.
[21:00:14] <mkjgore> can't really restore because the DB throws an error complaining about directoryPerDB being set to true but the database being otherwise
[21:01:21] <cheeser> oh. interesting. that also sounds like a support ticket. i'm not sure of the mechanics around that.
[21:01:57] <cheeser> i need to go find food. good luck. :)
[21:02:04] <mkjgore> thx cheeser!
[21:08:36] <d-snp> has anyone seen this before? http://lpaste.net/140994
[21:08:45] <d-snp> we get an error "splitChunk cannot find chunk"
[21:08:57] <d-snp> and it's trying to split a chunk into lots and lots of chunks for some reason
[21:09:40] <silasx> So can anyone answer: if my default log verbosity level is 0, why are all the components at verbosity level -1?
[21:10:50] <qswz> StephenLynx: still the same, when I do MongoClient.connect('mongodb://127.0.0.1/', function (err, db) {... it works, but not when I put the public IP, the firewall isn't active but even when trying locally it shouldn't matter
[21:14:32] <qswz> in command line, when I do "mongo" or "mongo localhost" it works
[21:14:54] <qswz> when I do "mongo my.fucking.public.ip" it doesn't
[21:15:10] <daidoji> nathesh: ahh good catch. I'm betting in javascript [1,2] is different than [2,1], which is why it matters, which makes sense if you think of them as arrays (as javascript is wont to do) instead of lists where ordering is arbitrary
[21:15:11] <qswz> how is tat possible?
[21:15:36] <nathesh> yeah prolly
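[Editor's note: daidoji's diagnosis checks out — MongoDB compares whole arrays positionally on exact equality, just as Python compares lists, so a query value of `[1921, 1541]` won't equality-match a stored `[1541, 1921]`. The list comparison itself, using the tax_ids-style values from nathesh's pastes as illustration:]

```python
stored = [1541, 1921]   # order as saved in the document
queried = [1921, 1541]  # same members, different order

print(stored == queried)                  # False: exact equality is positional
print(sorted(stored) == sorted(queried))  # True: same members either way
```

[For order-insensitive containment in the actual query, the `$all` operator is the usual tool: `{"tax_ids": {"$all": [1921, 1541]}}`.]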
[21:16:25] <daidoji> qswz: firewall
[21:17:04] <daidoji> although experience and caution advise me to suggest that it's not usually a good idea to open up a DB port on an externally facing ip
[21:17:59] <qswz> yes, it's just for testing from my local laptop (to an external host)
[21:20:53] <qswz> ~$ sudo ufw status => Status: inactive
[21:27:09] <qswz> oh http://stackoverflow.com/a/26022422/3183756
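[Editor's note: the Stack Overflow answer qswz found points at mongod's bind address: many distro packages ship with mongod bound to 127.0.0.1 only, so the shell and drivers can connect via localhost but not via the public IP even with the firewall down. A sketch of the relevant lines in the older INI-style mongod.conf (the public IP shown is a placeholder; binding beyond loopback is the security trade-off daidoji warned about):]

```ini
# /etc/mongod.conf (pre-YAML, INI-style config)
# common package default -- accept local connections only:
bind_ip = 127.0.0.1
# to also listen on a public interface, list it explicitly:
# bind_ip = 127.0.0.1,203.0.113.10
```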
[23:10:37] <Astral303> does anyone know what operations still acquire global locks?
[23:11:16] <Astral303> trying to diagnose what cause a long global lock hold (3-5 seconds) while under replication and mostly querying load
[23:11:33] <Astral303> this is on a shard replica running 3.0.6