PMXBOT Log file Viewer


#mongodb logs for Thursday the 17th of September, 2015

[06:20:18] <KekSi> does anyone know how to limit the logspam in a cluster? the --quiet option doesn't seem enough
[06:20:37] <KekSi> query routers and config servers each write logs every 10s
[06:23:11] <KekSi> it looks to me like debug output and not necessarily useful for production (on query routers: balancer round started, ended and did nothing, LockPinger being successful)
[06:23:36] <KekSi> and on config servers a lot of this: 2015-09-17T06:17:26.010+0000 I STORAGE [conn3] CMD fsync: sync:1 lock:0
[08:03:26] <mitereiter> hello
[08:04:21] <mitereiter> is it just me or is db.runCommand({convertToCapped: "cappedtest", size: 1000, max: 3}) not working?
[08:05:09] <mitereiter> after the command, the size and max parameters in the collection stats didn't take the given values
[08:06:01] <mitereiter> http://pastebin.com/XGWYeBNz
[09:56:12] <vagelis> Question about hitting index.
[09:58:10] <vagelis> Lets say i have index: A.B
[09:58:53] <vagelis> If i make the query A.B : 123 will it hit the index, or should i always use $elemMatch whenever i have dotted indexes?
[10:00:20] <vagelis> {'A': { '$elemMatch': { 'B': 123 } } }
[10:01:49] <mitereiter> is it just me or is db.runCommand({convertToCapped: "cappedtest", size: 1000, max: 3}) not working?
[10:01:49] <mitereiter> after the command, the size and max parameters in the collection stats didn't take the given values
[10:01:59] <mitereiter> http://pastebin.com/XGWYeBNz
[10:24:43] <joannac> mitereiter: have you read the docs? http://docs.mongodb.org/manual/core/capped-collections/
[10:24:57] <joannac> "If the size field is less than or equal to 4096, then the collection will have a cap of 4096 bytes. Otherwise, MongoDB will raise the provided size to make it an integer multiple of 256."
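The size rule joannac quotes can be sketched as a small function (illustration only; `effective_capped_size` is a made-up name, not a MongoDB API — it just reproduces the documented rounding, which is why mitereiter's size: 1000 ends up as 4096):

```python
def effective_capped_size(requested: int) -> int:
    """Mimic the documented capped-collection size rule:
    sizes <= 4096 become 4096; larger sizes are rounded up
    to an integer multiple of 256."""
    if requested <= 4096:
        return 4096
    # round up to the next multiple of 256
    return -(-requested // 256) * 256

print(effective_capped_size(1000))  # the size=1000 from the pastebin -> 4096
```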
[10:26:34] <joannac> vagelis: $elemMatch will let you project out a specific array entry. both should use the index though
[10:27:50] <vagelis> joannac well im reading in stackoverflow how u might not hit the index and im trying to filter queries in python that dont have an index and im kinda confused with this elemMatch
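A pure-Python sketch of the semantic difference vagelis is asking about (toy matchers, not MongoDB internals; field names A/B/C follow the example above). For a single condition, dotted notation and $elemMatch select the same documents; $elemMatch only changes the result when several conditions must hold for the same array element:

```python
def matches_dotted(doc, value):
    """Toy model of {"A.B": value}: any element of A may satisfy it."""
    return any(sub.get("B") == value for sub in doc.get("A", []))

def matches_elem_match(doc, criteria):
    """Toy model of {"A": {"$elemMatch": criteria}}: one element of A
    must satisfy every condition in criteria at once."""
    return any(all(sub.get(k) == v for k, v in criteria.items())
               for sub in doc.get("A", []))

doc = {"A": [{"B": 123, "C": 1}, {"B": 5, "C": 2}]}
```

With a single condition both forms match `doc`; with `{"B": 123, "C": 2}` $elemMatch does not, because no single element has both values.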
[10:44:56] <mitereiter> sorry joannac, I did view the doc page, but i searched for a syntax error, not for a limitation
[10:47:12] <mitereiter> wait, i viewed the convertToCapped doc page, it didn't mention it there
[10:48:48] <mitereiter> is it possible that I can't pass a max parameter when converting an existing collection to a capped collection?
[10:49:16] <mitereiter> the doc doesnt mention the parameter, but the shell accepted the command
[10:49:33] <mitereiter> and did not apply the max parameter
[10:57:34] <vagelis> I query an array which has this form: MyArray: [1,2,3..] and i query like that: { MyArray: 2 }...... Shouldn't explain say that it was multikey? Why does it say false?
[11:03:37] <mitereiter> do you already have an array in that field?
[11:04:07] <vagelis> The field MyArray is an array, should i have INSIDE MyArray an array? :S
[11:05:00] <mitereiter> does the query use the wanted index?
[11:05:12] <vagelis> yes
[11:05:42] <vagelis> it uses MyArray but in isMultikey shows false :/
[11:06:20] <mitereiter> thats odd
[11:06:36] <mitereiter> ill do a quick test
[11:08:20] <vagelis> I tried everything it never says multikey true, i tried:
[11:09:37] <mitereiter> hm, i have version 3.0.6 and it works perfectly for me
[11:09:41] <vagelis> MyArray: 2......MyArray: [2].....MyArray: {'$elemMatch': {'$eq': 2} }...........MyArray: {'$elemMatch': {'$eq': [2]} }
[11:14:13] <mitereiter> kann you pastebin the explain result?
[11:14:21] <mitereiter> *can
[11:23:40] <vagelis> sry i cant :| but thanks for ur help
[11:23:51] <vagelis> ill dig more because its kinda strange
[12:00:42] <ronr> hi, I'm trying to setup a mongo shard cluster (mongo 3) and everything went well until I tried starting mongos for the first time. It keeps failing on getting a distributed lock for upgrading the database: 2015-09-17T13:49:03.042+0200 D SHARDING [mongosMain] trying to acquire new distributed lock for configUpgrade on XXXX:27019,XXXX:27019,XXXX:27019 ( lock timeout : 900000, ping interval : 30000, process : XXXX:27017:1442490538:180428
[12:01:03] <ronr> I don't find any errors in the logs of the config servers, any ideas where to look?
[12:03:08] <ronr> so mongos does run, but I can't connect to it
[12:03:29] <ronr> I do see messages that LockPinger succeeds
[12:03:49] <ronr> but also these (without -v) waited 33s for distributed lock configUpgrade for upgrading config database to new format v6: LockBusy Lock for upgrading config database to new format v6 is taken
[12:11:36] <ronr> I just noticed all config servers claim they're busy upgrading, however they've be doing so for nearly three hours now, on a new empty installation so that can't be right
[12:15:39] <ronr> removing all the data dirs of the config servers and letting them rebuild themselves seemed to do the trick
[12:30:04] <ronr> next issue: if I try to add a replica set as a shard (sh.addShard("rs1/<host>:27017")), it says my host does not belong to replica set rs1, but if I add the host (sh.addShard("<host>:27017")) it says host is part of set rs1, use replica set url format???
[12:36:12] <vagelis> mitereiter: i think i found why its not multikey. I think every document in the collection has just one element in the array. I inserted one document with 2 elements in this array, ran a find, and it finally showed isMultikey: true.
[12:36:32] <vagelis> If this is right then it becomes multikey when the length > 1, correct?
[12:49:18] <deathanchor> ronr: did you add the whole set?
[12:50:12] <vagelis> mitereiter actually no, if i search an array that has max 1 element and i use $eq it doesnt work, but if i search with the $in operator then it is multikey
[12:52:13] <ronr> and just jumping from issue to issue here (is that tutorial really complete?), I managed to get to the next step of my addShard command. But my shard obviously has authentication enabled and therefore mongos can't add it, and the addShard function has no means of passing in a username and password? How do I do this?
[12:52:17] <vagelis> Jesus now worked with MyArray: 123 i dont understand whats going on
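A toy illustration of the documented multikey rule, for contrast with the length-based behaviour vagelis observed above (which may be a version- or storage-engine-specific quirk — this sketches only the rule as documented, not server internals; the function name is made up):

```python
def index_becomes_multikey(docs, field):
    """Per the documented rule, an index is flagged multikey as soon as
    any indexed document holds an array value for the field, regardless
    of the array's length."""
    return any(isinstance(doc.get(field), list) for doc in docs)

docs = [{"MyArray": [2]}]  # a single-element array still counts
```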
[13:32:16] <nikitosiusis> good day
[13:33:07] <nikitosiusis> I'm trying to import my mongodb bson dump to mongo 3.0.6 via mongos. I set up 256M chunksize, but seems this setting doesn't work for me - chunks don't grow above 64M and new chunks are being created
[13:33:10] <nikitosiusis> http://paste.ubuntu.com/12438040/
[13:33:48] <nikitosiusis> my dump starts to import quickly, but after some gigabytes it starts to provide big io and almost hangs
[13:34:09] <nikitosiusis> do you have any ideas why chunksize doesn't work?
[14:17:14] <bdbaddog> How would I go about using the result from an aggregate in a find ?
[14:17:33] <bdbaddog> from the mongo shell that is
[14:17:56] <deathanchor> bdbaddog: aggregate and find are two different things
[14:18:44] <deathanchor> find returns a cursor, aggregate returns a single document containing an array with results.
[14:19:11] <StephenLynx> aggregate returns an array but you can also get a cursor for it.
[14:19:25] <deathanchor> with 3.0+ yes
[14:19:27] <bdbaddog> I'm a bit of a n00b. can I store the results from the aggregate in a javascript variable in the shell and use that in a find?
[14:19:28] <StephenLynx> ah
[14:19:44] <StephenLynx> its just another variable as any other.
[14:19:53] <StephenLynx> theres nothing special to it.
[14:20:20] <cheeser> 2.6 added cursors to agg output
[14:20:31] <deathanchor> ah
[14:21:08] <pokEarl> If I want to see what requests etc is made in real time I need to use mongosniff, is that right? D:
[14:22:00] <bdbaddog> var xyz=db.mine.aggregate(..)
[14:22:10] <bdbaddog> Would that be right?
[14:22:24] <StephenLynx> depends, what driver are you using?
[14:24:09] <bdbaddog> mongo shell
[14:24:31] <StephenLynx> then no, I don't think so
[14:24:45] <StephenLynx> db.mine.aggregate(...) would work though
[14:28:23] <bdbaddog> StephenLynx: How do I store the results to use in a find statemnet?
[14:28:37] <StephenLynx> in the terminal client? I don't know.
[14:28:57] <bdbaddog> so probly a python (or other) script is the way to go.
[14:29:09] <StephenLynx> depends on what you want to do.
[14:29:46] <bdbaddog> I'm trying to prune some leaf nodes (effectively)
[14:30:17] <deathanchor> bdbaddog: yes you can do var result = db.mine.aggregate(..).result to store the result array
[14:30:28] <bdbaddog> So I create a set of all referenced _id's between one collection and another, and then want to delete all documents in a second collection which aren't referred to.
[14:31:13] <bdbaddog> deathanchor: can I print out the contents of the stored result in the shell?
[14:31:31] <deathanchor> yes just by typing out the variable name
[14:32:47] <bdbaddog> I get nothing..
[14:32:48] <bdbaddog> > var xyz=db.albums.aggregate({$unwind:"$images"},{$group:{_id:null, photos:{$addToSet:"$images"}}}).result
[14:32:48] <bdbaddog> > xyz
[14:32:48] <bdbaddog> >
[14:33:15] <deathanchor> sorry.. ).results
[14:33:53] <deathanchor> wait is is result
[14:34:05] <deathanchor> I use this all the time
[14:34:19] <deathanchor> just store it without the .result part and print it
[14:35:57] <bdbaddog> ugh.. so dumb. I exited the shell and forgot to set the db.. it's working without the .results.
[14:36:02] <bdbaddog> sorry to waste ur time..
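Pure-Python sketch of the pruning bdbaddog described (the `albums`/`images` names come from his shell snippet above; the data here is made up). It mirrors the shell flow — collect every referenced id with the $unwind/$group/$addToSet aggregation, then remove documents from the second collection whose _id isn't in that set, e.g. with {_id: {$nin: ids}}:

```python
# Made-up sample data in the shape the transcript suggests.
albums = [
    {"_id": 1, "images": [10, 11]},
    {"_id": 2, "images": [11, 12]},
]
images = [{"_id": i} for i in range(10, 15)]

# Mirrors: db.albums.aggregate({$unwind: "$images"},
#          {$group: {_id: null, photos: {$addToSet: "$images"}}})
referenced = {img for album in albums for img in album["images"]}

# Mirrors a remove with {_id: {$nin: <referenced ids>}}:
# keep only image docs some album still points at.
kept = [doc for doc in images if doc["_id"] in referenced]
```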
[15:22:25] <elux> id rather not write one by hand from the docs .. id like to just tweak one
[15:24:59] <kaseano> Hi, I need to use the id of a doc as a FK. I know every doc has _id, but using that is a little inconvenient sometimes, e.g. I have to convert the string to an "ObjectID" from node, plus passing a small int to the app would have a smaller overhead vs the long _id string.
[15:25:40] <kaseano> but if I created another ID on top of it, suddenly the BSON would be bigger, plus I'd have to index that new property too
[15:26:15] <cheeser> use ints for your _id
[15:27:33] <kaseano> oh I can change that??
[15:27:52] <kaseano> my mind is blown lol thanks cheeser I'll look that up now!
[15:27:56] <Derick> the long _id string isn't actually that long. It's 12 bytes instead of 8 bytes for an int though :P
[15:28:12] <Derick> kaseano: you can use whatever (scalar) value you want as _id value
[15:28:20] <kaseano> maybe I'm overreacting to that string size
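If kaseano does switch to integer _ids, the ids have to come from somewhere; a common approach is the "counters collection" pattern, where an atomic $inc (via findAndModify / find_one_and_update on a counters document) hands out the next number. A pure-Python sketch of the idea (in-memory dict standing in for the counters collection; not thread-safe, unlike the real server-side $inc):

```python
# Stand-in for a "counters" collection: one counter per sequence name.
counters = {"albums": 0}

def next_id(name: str) -> int:
    """Toy equivalent of findAndModify with {$inc: {seq: 1}}."""
    counters[name] = counters.get(name, 0) + 1
    return counters[name]

# Insert-shaped document using a small int as _id instead of an ObjectId.
doc = {"_id": next_id("albums"), "title": "first"}
```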
[16:03:38] <harttho> Anyone know if the mongodb module has known issues with Node4? I get the following weird error when trying to use the module:
[16:03:41] <harttho> https://www.irccloud.com/pastebin/Jh2wWFka/
[17:35:40] <amcsi_work> hi
[17:35:54] <amcsi_work> how do I compare two numbers of different types (NumberLong, NumberShort)?
[18:08:51] <saml> $gt $lt do not work?
[18:09:33] <saml> what is NumberShort?
[18:10:52] <saml> NumberInt
[19:07:35] <stickperson> how can i do a query to see if something exists and is not an empty string?
[19:17:21] <StephenLynx> $exists
[19:17:31] <StephenLynx> empty strings count as something existing
[19:18:32] <BadApe> seems i was wasting my time with orientdb, think i will try mongodb
[19:18:54] <cheeser> StephenLynx: as do nulls, iirc
[19:19:09] <StephenLynx> really?
[19:19:28] <StephenLynx> I thought nulls were just not saved.
[19:19:36] <StephenLynx> or is that undefined?
[19:19:52] <cheeser> nulls are saved
[19:20:39] <cheeser> yep. and nulls are included with $exists
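A toy matcher for stickperson's question, mirroring the query shape `{field: {$exists: true, $nin: [null, ""]}}` — as cheeser and StephenLynx note above, $exists alone still matches fields stored as null or as an empty string, so both have to be excluded explicitly (illustration only, not MongoDB's matcher):

```python
_MISSING = object()  # sentinel to distinguish "absent" from "stored as None"

def present_and_nonempty(doc, field):
    """True only when the field exists and is neither null nor ""."""
    value = doc.get(field, _MISSING)
    return value is not _MISSING and value is not None and value != ""
```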
[19:24:04] <BadApe> is there a repo for centos 7?
[19:24:19] <cheeser> https://docs.mongodb.org/manual/tutorial/install-mongodb-on-red-hat/
[19:24:50] <BadApe> ty
[19:26:43] <cheeser> np
[19:26:53] <StephenLynx> and it works great, btw.
[19:26:58] <StephenLynx> never had any issues.
[19:28:25] <BadApe> i dunno if this is true, but i was told if your db is bigger than 32gb mongo smashes your data?
[19:29:12] <StephenLynx> not true
[19:29:19] <BadApe> didn't think it was
[19:29:50] <BadApe> just there is so much FUD going around
[19:29:55] <StephenLynx> yeah
[19:30:03] <StephenLynx> there was lots of hype and then lots of anti-hype
[19:30:56] <cheeser> that's never been true BadApe
[19:30:59] <BadApe> i have started playing with the vert.x project, they seem to support mongodb and redis
[19:31:55] <BadApe> hmm, what to do about selinux
[19:35:44] <BadApe> oh didn't have to do anything
[19:36:01] <StephenLynx> yeah, I noticed that too
[19:36:08] <StephenLynx> how in later versions I didn't have to deal with it.
[19:36:17] <StephenLynx> don't know if centOS changed or mongo
[19:36:35] <BadApe> well i just opened the port and it worked
[19:36:42] <BadApe> super! back to vertx
[19:36:46] <BadApe> thanks guys
[19:46:26] <BadApe> stupid newbie question, can a document have a schema?
[19:54:27] <StephenLynx> what do you mean by schema?
[19:55:10] <StephenLynx> you sure can follow a strict model for collections on your application, but mongo will NEVER enforce said schema.
[19:56:45] <cheeser> i wouldn't say *never*
[19:57:15] <StephenLynx> really?
[19:57:32] <StephenLynx> what does mongo offer for that?
[20:01:03] <cheeser> https://jira.mongodb.org/browse/SERVER-18227
[20:02:19] <StephenLynx> ah
[20:02:33] <StephenLynx> interesting.
[20:02:48] <cheeser> yeah. opens up some interesting options.
[20:05:24] <deathanchor> so... it's only enforced on writes. I hope there will be a way to find all docs which are "not validated"
[20:06:38] <cheeser> i believe there is a way, yes.
[20:07:03] <cheeser> especially since most data will be extant prior to the validation
[20:07:48] <StephenLynx> I don't think I will use it though.
[20:08:41] <StephenLynx> I mean, I can understand it for certain scenarios
[20:09:10] <StephenLynx> like, you have a production server and you don't want to take chances with some data being wrecked by a botched patch
[20:09:40] <cheeser> oh, i think it will.
[20:09:57] <cheeser> i'm already envisioning how to generate those definitions from morphia. :)
[20:10:12] <StephenLynx> or even on a development environment to catch these errors without adding the overhead to the production environment
[20:10:17] <StephenLynx> but personally I won't use them.
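A toy model of the SERVER-18227-style validation being discussed, including deathanchor's question about finding pre-existing documents that don't pass (in 3.2 that is roughly `db.coll.find({$nor: [<validator>]})`; everything below — the predicate, field names, and data — is made up for illustration and says nothing about the server's actual implementation):

```python
def validator(doc):
    """Toy validator: require a string name and a non-negative age."""
    return isinstance(doc.get("name"), str) and doc.get("age", -1) >= 0

# Pre-existing data, inserted before validation was turned on.
collection = [{"name": "a", "age": 3}, {"age": -1}]

def insert(doc):
    """Validation is only enforced on writes, as noted above."""
    if not validator(doc):
        raise ValueError("Document failed validation")
    collection.append(doc)

# deathanchor's "find all docs which are not validated":
not_validated = [d for d in collection if not validator(d)]
```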
[20:11:23] <BadApe> orientdb i liked because it had options to have strict schema, partial and schemaless
[20:11:53] <StephenLynx> I never cared about being schemaless on mongo.
[20:12:01] <StephenLynx> I was just "oh, ok, one less thing to bother"
[20:12:50] <BadApe> i liked hybrid
[20:13:28] <StephenLynx> I dunno, schemas never did anything I couldn't do myself already. I don't hold any love for them. it's just one more thing to adjust when I change the model.
[20:14:08] <BadApe> ok teach me something i just did if (document.containsKey("name") && document.containsKey("MaxCharacters")) {
[20:14:15] <BadApe> this isn't going to scale
[20:14:31] <StephenLynx> what driver?
[20:14:38] <BadApe> vertx
[20:14:42] <StephenLynx> no idea.
[20:14:45] <StephenLynx> what is that?
[20:14:47] <BadApe> it uses the java api
[20:14:59] <StephenLynx> I think cheeser is the java driver developer
[20:15:16] <BadApe> i am just see what the graph says about the driver
[20:15:35] <BadApe> oh they made their own it seems
[20:16:09] <moqca> So I'm just discovering MongoDB and I'm trying to port a scheduling sql db I have, that has an employee Table (employee info), dates table (dates prepopulated from now to 10 years, identifying weekdays and weeknums), and scheduleDate table, which takes from employees and dates when an employee is scheduled. Trying to wrap my head around mongodb, how can I do this here?
[20:16:56] <StephenLynx> why do you need to
[20:17:01] <StephenLynx> pre calculate dates?
[20:17:07] <StephenLynx> I wouldn't do such thing in the first place.
[20:17:27] <moqca> To have holidays, weekends, weekdays and weeknums easily searchable
[20:17:57] <StephenLynx> still doesn't sound any better than checking them at runtime
[20:18:07] <StephenLynx> I don't think it takes longer than fetching from a database
[20:18:11] <cheeser> agreed
[20:19:58] <moqca> I don't know, i just find it easier to say, find all the times someone has worked weekends for the past 3 months.
[20:20:15] <moqca> But I can do without
[20:23:13] <moqca> What I'm trying to understand is, how would I do the design in mongo, would the schedule someone is working for this week be an embeddedDocument?
[20:23:35] <moqca> And then people work with team leads that work with supervisors
[20:24:17] <moqca> If I wanted to take all leads and workers that belong to X supervisor, how would that be done in a document?
[20:24:26] <BadApe> moqca, very specific question :)
[20:26:15] <moqca> Yeah, I dont get it. I feel so dumb, haha
[20:27:22] <StephenLynx> ok
[20:27:28] <StephenLynx> I would duplicate some data.
[20:27:32] <StephenLynx> and tbh
[20:27:41] <StephenLynx> I would use a RDB for that scenario.
[20:27:45] <StephenLynx> it seems very relational.
[20:30:40] <moqca> Alright, thanks man
[20:33:50] <cheeser> depends on the query needs, really. it'd be simple enough to query the "worked" collection with the user id.
[20:34:03] <cheeser> presumably you'd already have that in hand anyway.
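StephenLynx's point — checking weekday/weeknum at runtime instead of precomputing a 10-year dates table — can be sketched directly: moqca's example query, "find all the times someone has worked weekends for the past 3 months", needs only the stored shift dates (the data and the 90-day window below are made up for illustration):

```python
from datetime import date, timedelta

def weekend_shifts(shifts, today):
    """Shift dates in the last ~3 months that fall on a weekend."""
    cutoff = today - timedelta(days=90)
    # weekday() >= 5 means Saturday (5) or Sunday (6)
    return [d for d in shifts if d >= cutoff and d.weekday() >= 5]

shifts = [date(2015, 9, 5), date(2015, 9, 7), date(2015, 8, 30)]
recent_weekends = weekend_shifts(shifts, date(2015, 9, 17))
```

The same idea applies server-side: date operators in the aggregation pipeline (e.g. $dayOfWeek) can derive these values at query time instead of storing them.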
[21:48:05] <BadApe> i have a number of document that have a configuration type, i would like to reference another document
[21:48:40] <BadApe> yes i know it is a relationship, but i think i need a little bit of structure between documents
[22:07:27] <Fetch> I have a...mongodb dba...who designed an architecture for an app using 3 config instances cohosted with 3 mongos instances (3 datanode mongod replicaset is on a separate set of instances)
[22:07:47] <Fetch> Is that pretty typical with Ops Manager? It doesn't seem to do that configuration for a cluster by default
[22:08:31] <joannac> Fetch: are you testing Ops Manager?
[22:08:43] <joannac> or do you mean Cloud Manager?
[22:09:42] <Fetch> MMS Ops Manager
[22:10:06] <Fetch> joannac: you would not believe the lengths I've gone through to automate installing this beast :\
[22:10:23] <joannac> Fetch: got a subscription?
[22:10:33] <Fetch> yeah, wanna see my jira tix? :)
[22:10:42] <joannac> sure
[22:10:52] <joannac> you should be asking these questions there
[22:11:11] <Fetch> https://jira.mongodb.org/browse/CS-24186 was one that got closed
[22:11:24] <Fetch> joannac: "Is my mongodb dba an idiot?" is not something I want to commit to Jira
[22:11:34] <Fetch> also, I'm a consultant. So I really don't want to.
[22:12:35] <joannac> Fetch: if you open a ticket with your question I'm more than happy to answer on the ticket
[22:12:52] <joannac> Flick me the number once it's created and I'll grab it
[22:14:41] <joannac> and yes, the question should be "Can I get this configuration from Ops Manager" and not "is my dba an idiot" ;)
[22:16:05] <Fetch> joannac: https://jira.mongodb.org/browse/CS-24333
[22:16:21] <Fetch> I'm pretty sure I CAN get it from Ops Manager, it's more of "should I?"
[22:16:49] <Fetch> (I doubt the dba, who's used ops manager a bit, would give me an architecture that was unimplementable)
[22:17:14] <Fetch> ((however he's still using his Oracle DBA filesystem paths, so...))
[22:22:15] <joannac> Fetch: ah, that's a different question to what I thought
[22:22:59] <joannac> Fetch: anyways, thanks for opening a ticket, we'll respond there and continue working with you on the ticket
[22:23:54] <Fetch> :\
[22:24:14] <Fetch> I know it's your support interface, I really was looking for a quick discussion on IRC ;)