#mongodb logs for Thursday the 9th of July, 2015

[03:20:49] <symbol> When I call db.collection.find() and am iterating over a cursor...where are the documents stored that I haven't iterated over yet? Are they still in memory in MongoDB?
[03:26:56] <symbol> Aye, nvm, late night.
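
For reference, what symbol was asking: find() returns a cursor, and the server hands documents to the client in batches; documents not yet iterated remain on the server until the cursor issues a getMore. A minimal shell sketch (collection name hypothetical):

    // Documents arrive in batches; batchSize() controls how many per fetch.
    var cur = db.things.find().batchSize(100);
    while (cur.hasNext()) {
        var doc = cur.next(); // a getMore is sent only once the local batch is exhausted
        printjson(doc);
    }
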
[05:36:06] <sputnik13> quick question, if I'm using a 32 bit client to connect to a 64 bit server am I still limited to 2GB?
[05:48:05] <Boomtime> sputnik13: not on the server
[05:49:01] <sputnik13> Boomtime: right, so even using a 32 bit client from say an embedded arm device I can push data to and address data in databases greater than 2 GB in size as long as the server is 64 bit, correct?
[05:49:42] <sputnik13> I was confused because even connecting to a 64 bit server, the client throws a warning about a 2GB limit because the client is 32 bit
[05:51:00] <Boomtime> what is the warning? can you paste the whole line here?
[05:51:14] <Boomtime> also, what OS is the server?
[05:52:49] <sputnik13> the server is windows
[05:53:12] <Boomtime> ok, at a mongo shell run this: db.serverBuildInfo().bits
[05:53:50] <Boomtime> the trick with windows is that you can run the 32bit build on a 64bit OS because windows has a nifty abstraction layer
[05:54:29] <sputnik13> Boomtime http://pastebin.com/4Hzdddyx
[05:54:32] <sputnik13> that's the error I saw
[05:55:03] <Boomtime> yep, you're running the 32bit build of mongodb
[05:55:14] <sputnik13> oh perfect
[05:55:15] <Boomtime> you downloaded it on a 32bit machine right?
[05:55:29] <sputnik13> no idea, I'm trying to remotely support someone
[05:55:31] <sputnik13> I will relay this information to them
[05:55:34] <Boomtime> download it again, and be sure to select the 64bit build if it somehow doesn't autodetect
[05:55:36] <sputnik13> thank you very much
[05:55:53] <Boomtime> remember the mongo shell command:
[05:55:54] <Boomtime> db.serverBuildInfo().bits
[05:56:06] <Boomtime> that will print either 32 or 64
[05:56:15] <Boomtime> it will tell you exactly what the server thinks it is
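
The check in action, from any mongo shell connected to the server (it works regardless of the shell's own bitness):

    > db.serverBuildInfo().bits
    64
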
[06:28:16] <macwinner> dont know why i'm finding it so difficult to get a simple promise based wrapper around gridfs
[06:28:43] <macwinner> anyone have any pointers? haven't found anything with google that I quite like
[06:38:22] <Boomtime> macwinner: do you mean the node promise module? (i have no experience with that so i might be saying it wrong)
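
A minimal sketch of the kind of wrapper macwinner describes, assuming the 2.x node driver's GridStore API, an already-connected db object, and native Promises (readFile is a hypothetical helper name):

    var mongo = require('mongodb');

    // Wrap the callback-style static GridStore.read in a Promise.
    function readFile(db, filename) {
        return new Promise(function (resolve, reject) {
            mongo.GridStore.read(db, filename, function (err, data) {
                if (err) return reject(err);
                resolve(data); // data is a Buffer with the file contents
            });
        });
    }
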
[07:58:13] <frankS2> Does mongodb have a file where I can store my admin password in order to not type it in? Like mysql's my.cnf?
[07:59:08] <Boomtime> you mean for the mongo shell?
[08:00:15] <Boomtime> http://docs.mongodb.org/manual/reference/program/mongo/#files
[08:06:10] <frankS2> Boomtime: I think keyfile might be what im looking for
[08:06:38] <frankS2> hm no
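
The file the linked page describes is ~/.mongorc.js, which the mongo shell runs at startup; a sketch with hypothetical credentials, with the obvious caveat that the password sits in plain text:

    // ~/.mongorc.js -- executed automatically by the mongo shell (skipped with --norc)
    db = db.getSiblingDB('admin');
    db.auth('admin', 'secret'); // hypothetical credentials
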
[08:39:57] <bogn> mocx, Zelest: what's the DO channel? You sparked my interest.
[08:40:31] <Zelest> digitalocean
[08:40:59] <bogn> ah, so actually not that thrilling
[08:41:05] <Zelest> :)
[08:41:11] <Zelest> sorry to disapoint you :)
[08:41:23] <Zelest> disappoint*
[08:41:31] <bogn> it's not your fault, that I'm easily thrilled
[08:41:57] <Zelest> May I ask from where in DE you come from?
[08:42:14] <Zelest> Odd way of writing that centence.. but meh
[08:42:19] <Zelest> sentence?
[08:42:31] <bogn> Frankfurt
[08:42:37] <Zelest> Ah
[08:42:48] <Zelest> I did take a pic from there! :D
[08:42:59] <Zelest> Was on a little eurotrip last 2 weeks
[08:43:32] <puppeh> is there an "create or update if exists" in the ruby mongo driver v2?
[08:43:40] <Derick> yes
[08:43:43] <Derick> it's called upsert
[08:43:49] <Derick> it's functionality in the database
[08:43:53] <bogn> What have you seen in Frankfurt? PM?
[08:43:59] <puppeh> in v1.8 I could do collection.save({_id: 2, foo: "bar"})
[08:44:40] <bogn> puppeh, that's upsert
[08:44:54] <bogn> update_one also has the upsert flag
[08:44:57] <Derick> let me show an example (in shell)
[08:45:31] <puppeh> I've noticed that the equivalent is not: coll.find({_id: 5313}).update_one(stats: "B", upsert: true) as I expected
[08:45:56] <puppeh> instead, I must do: coll.find({_id: 5313}).update_one({_id: 5313,stats: "B"}, upsert: true) is that right?
[08:46:04] <puppeh> ie. I must re-state the _id
[08:46:19] <Derick> db.col.update( { id: 1 }, { $inc: { counter: 1 } }, { upsert: true } );
[08:46:32] <Derick> puppeh: that looks right
[08:46:41] <Derick> no
[08:46:45] <Derick> no find()
[08:47:03] <puppeh> Derick what you wrote isn't supported in the ruby driver
[08:47:08] <puppeh> https://api.mongodb.org/ruby/current/Mongo/Operation/Specifiable.html#update-instance_method
[08:47:14] <puppeh> I can't find any #update method that operates in collections
[08:47:33] <Derick> coll.update_one( { _id: 5313 }, { $set: { 'stats': 'B' } }, { upsert: true } );
[08:47:41] <Derick> but yeah, no clue what the ruby interface is like
[08:48:54] <puppeh> yeah there isn't an #update_one method on collections
[08:49:02] <puppeh> http://docs.mongodb.org/ecosystem/tutorial/ruby-driver-tutorial/#updating
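
In shell terms, the upsert semantics puppeh ran into: when nothing matches, the equality fields of the query are merged into the new document, so the _id from the filter does not need restating in $set:

    db.col.update({ _id: 5313 }, { $set: { stats: "B" } }, { upsert: true })
    // no match -> inserts { _id: 5313, stats: "B" }
    // match    -> sets stats to "B" on the existing document
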
[08:53:54] <girb> hi
[08:55:24] <girb> new to mongodb .. can I have WiredTiger as the storage engine on a secondary while keeping the old MMAPv1 on the primary?
[08:55:30] <Derick> yes
[08:56:47] <girb> Derick: there won't be an issue with oplogs right .. I'm using mongo 3.0.3
[08:57:16] <Derick> nope
[08:57:33] <Derick> the oplog, although stored in MMAPv1/WT, is the same on the wire
[09:08:46] <puppeh> I'm curious why they made the API that way, regarding #insert_one and insert_many
[09:08:50] <puppeh> why not a single #insert method?
[09:08:57] <puppeh> (talking about the new ruby driver)
[09:25:16] <zamnuts> puppeh, all drivers are now aligned with insert_one and insert_many, ruby isn't the only one, just fyi
[09:31:50] <puppeh> oh
[09:31:52] <puppeh> ok
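
The alignment zamnuts mentions comes from the cross-driver CRUD spec; one rationale for the split is that the two calls return differently shaped results. A sketch in the 2.x node driver, assuming a connected collection object:

    collection.insertOne({ name: 'a' }, function (err, r) {
        // r.insertedId holds the single new _id
    });
    collection.insertMany([{ name: 'b' }, { name: 'c' }], function (err, r) {
        // r.insertedIds holds all new _ids
    });
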
[11:21:46] <adsf> is it possible to get the state of a connection in mongo 3 java driver?
[11:22:05] <adsf> really struggling with it in relation to quartz
[11:22:15] <adsf> if i close a connection at the end of a job other jobs seem to be impacted
[12:10:18] <jamieshepherd> I have FILMS, and I have REVIEWS - should I make reviews part of my films collection (e.g. film { reviews: [ ] }) or their own collection that references films?
[12:10:26] <jamieshepherd> Not sure about best practices here
[12:33:59] <deathanchor> depends on what your user stories are.
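
The two shapes jamieshepherd is weighing, sketched in the shell; as deathanchor says, the right one depends on how the data is read:

    // Embedded: one read fetches a film with its reviews; fine while reviews stay small.
    db.films.insert({ title: "Alien", reviews: [{ user: "u1", stars: 5 }] })
    // Referenced: better when reviews grow without bound or are queried on their own.
    db.reviews.insert({ film_id: ObjectId("..."), user: "u1", stars: 5 }) // hypothetical ObjectId
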
[12:39:28] <tejasmanohar_> how do i generate mongodb sslCert
[12:39:50] <tejasmanohar_> im trying to connect w/ an ssl cert to compose.io via mongoose... i have sslKey- it's their RSA public key file, i suppose?
[12:39:52] <tejasmanohar_> but cert
[12:40:00] <tejasmanohar_> ?
[12:49:56] <d-snp> hi
[12:50:04] <d-snp> could this be a dead lock situation in the chunk migrator? http://pastie.org/private/3env0xb5yx4u753vjoltw
[12:50:28] <d-snp> they've both been working for a really long time, and are waiting for a lock
[12:50:39] <d-snp> one a read, the other a write
[13:00:33] <puppeh> how can I set a write concern on a `coll.find(_id: 2).delete_many` operation?
[13:01:01] <puppeh> I was able to do this in the 1.8 driver by doing `col.remove(_id: 2, w: 1)`
[13:01:07] <puppeh> but there's no remove() method in the ruby driver v2
[13:04:00] <cheeser> does delete_many optionally take parameters
[13:04:02] <cheeser> ?
[13:04:20] <puppeh> nope
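
For what it's worth, the shell form scopes a write concern to the delete itself; whether the v2 ruby driver exposes a per-operation equivalent is the open question here:

    db.coll.remove({ _id: 2 }, { justOne: false, writeConcern: { w: 1 } })
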
[13:07:03] <tejasmanohar_> my mongodb managed host, compose, gave me an ssl public key but no cert.. can i generate this based on the key?
[13:11:24] <grkblood13> is there a way to use mongoimport to import a json file from mongoexport that has had the _id fields removed and have it only import entries if it's not a duplicate of an entry that already exists?
[13:11:30] <tejasmanohar_> https://www.compose.io/articles/going-ssl-with-compose-mongodb-plus/ compose.io is saying all you need to connect is -sslCAFile blah.pem in the command
[13:11:57] <tejasmanohar_> w/ the ssl pub key starting w/ `-----BEGIN CERTIFICATE-----` in the pem file
[13:12:26] <tejasmanohar_> but mongoose connect dboptions are sslCert and sslKey
[13:12:54] <tejasmanohar_> which one is this? the compose.io dashboard says "ssl public key" and the file contents say "-----BEGIN CERTIFICATE-----" so its hard to say
[13:12:59] <tejasmanohar_> and do i need the other value?
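
A hedged sketch of what the Compose article implies for mongoose: the "SSL public key" PEM is a certificate (hence BEGIN CERTIFICATE) used as the CA to validate the server, not a client sslCert/sslKey pair. Assuming mongoose 4.x-era options and a hypothetical file path:

    var fs = require('fs');
    var mongoose = require('mongoose');

    var ca = [fs.readFileSync('/path/to/compose-ca.pem')]; // the PEM from the dashboard
    mongoose.connect('mongodb://user:pass@host:port/db?ssl=true', {
        server: { ssl: true, sslValidate: true, sslCA: ca }
    });
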
[13:34:44] <trupheenix> Is it possible to do a multikey 2dsphere index?
[13:34:59] <trupheenix> and also be able to run a geoNear aggregation on such a collection?
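
Unanswered here, but for the record: indexing an array of points makes a 2dsphere index multikey, and $geoNear must be the first pipeline stage and requires a geospatial index; whether the combination behaves as trupheenix needs is worth testing. A shell sketch with hypothetical names and coordinates:

    db.places.createIndex({ locations: "2dsphere" }) // multikey if locations is an array
    db.places.aggregate([
        { $geoNear: {
            near: { type: "Point", coordinates: [-73.99, 40.73] },
            distanceField: "dist",
            spherical: true
        } }
    ])
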
[15:31:39] <ito> hi. are clients without the --ssl option set able to connect to a mongo server once ssl is turned on on the mongo server?
[15:32:11] <StephenLynx> I don't think so.
[15:32:18] <StephenLynx> but I am not sure.
[15:33:18] <Derick> no, I don't think so either
[15:35:17] <ito> so once ssl is enabled it's mandatory for all clients. alright... thanks for the info StephenLynx, Derick!
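
For the record, the server side of this is mongod's sslMode setting: requireSSL rejects plain connections, while allowSSL/preferSSL accept mixed clients during a transition. A YAML config sketch with a hypothetical key path:

    net:
      ssl:
        mode: requireSSL
        PEMKeyFile: /etc/ssl/mongodb.pem
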
[16:19:42] <symbol> Considering the dynamic nature of collections...would it be bad schema design to have a users collection where some users have an admin field set to true and the field is simply left out for normal users?
[16:20:14] <symbol> I suppose that wouldn't be very explicit and could lead to confusion.
[16:26:25] <EXetoC> I try to not exclude fields conditionally
[16:27:17] <EXetoC> and querying for false is a little nicer than {$ne: true}
[16:37:45] <symbol> True - thanks EXetoC
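
EXetoC's point in shell terms: matching the absent-or-false case needs a negation, while an explicit false is a plain equality match:

    db.users.find({ admin: { $ne: true } }) // matches field missing or false
    db.users.find({ admin: false })         // only complete if every user stores the field
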
[18:21:57] <schuranator> I have a question about authentication. I created a new user and password but when I enable authentication in the config and restart the user can't login
[18:26:31] <cheeser> you might need to specify the authentication database.
[18:26:50] <cheeser> http://docs.mongodb.org/manual/tutorial/enable-authentication/
[18:28:29] <schuranator> So my goal is pretty basic. I want to be able to remote into my mongodb using robomongo that needs a login and password. But it seems like when I enable authentication providing the username and password fails.
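
cheeser's suggestion, sketched for the shell: authenticate against the database where the user was created (often admin), not just the one you connect to:

    mongo --username myUser --password --authenticationDatabase admin myhost/mydb
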
[18:58:59] <akoustik> can i set my replica read preference in my mongod.conf, or does it have to be set by clients?
[19:01:17] <cheeser> servers don't have read preferences so you need to configure your clients
[19:02:21] <akoustik> oh yeah, i guess it doesn't even make sense to have the hosts specify read prefs themselves now that i think about it. thanks.
[19:03:58] <cheeser> np
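
The client-side setting cheeser means, for example as a connection-string option:

    mongodb://host1,host2/mydb?replicaSet=rs0&readPreference=secondaryPreferred
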
[19:19:09] <MacWinne_> is this a bug in mongo GridStore docs for the unlink method: https://mongodb.github.io/node-mongodb-native/api-generated/gridstore.html
[19:19:22] <MacWinne_> it's saying to open the file with the 'r' flag
[19:19:48] <MacWinne_> does this need to be the 'w' flag for being able to delete?
[19:24:53] <StephenLynx> let me see what I do when I delete stuff from mongo
[19:25:33] <StephenLynx> hm
[19:25:40] <StephenLynx> it seems the node driver abstracts that.
[19:27:11] <MacWinne_> StephenLynx, are you using the Gridfs stuff?
[19:28:12] <StephenLynx> yes.
[19:28:23] <StephenLynx> my system revolves around gridgs.
[19:28:28] <StephenLynx> gridfs*
[19:35:24] <MacWinne_> StephenLynx, do you use the unlink method?
[19:35:32] <StephenLynx> yes
[19:35:49] <StephenLynx> mongo.GridStore.unlink(conn, name, function deleted(error)
[19:36:13] <StephenLynx> conn is the database object I get from the connection.
[19:39:05] <MacWinne_> strange.. my other functions like GridStore.writeFile work
[19:39:28] <MacWinne_> the unlink is not. i'm wondering if it's because I'm doing it by filename instead of _id
[19:39:37] <StephenLynx> I do by filenames too.
[19:40:50] <MacWinne_> var gfs = new GridStore(conn.db, `${filename}`, "r", {root: 'fs'} ); gfs.unlink(function(err,result){...}))
[19:40:57] <MacWinne_> see anything weird there?
[19:43:18] <StephenLynx> ah
[19:43:19] <StephenLynx> wait
[19:43:32] <StephenLynx> you are actually opening it
[19:43:34] <StephenLynx> you don't do that.
[19:43:36] <StephenLynx> it is static
[19:44:42] <MacWinne_> oh.. i'm kinda seeing what you are saying.. lemme try it your way
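
The corrected call, per StephenLynx: unlink is a static method taking the db object and filename, with no need to open the file first:

    mongo.GridStore.unlink(conn.db, filename, function (err) {
        if (err) return console.error(err);
        // the file and its chunks are gone
    });
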
[19:44:58] <m4k> What is the best way to have a master collection? Something like the master_tables we create in an RDBMS
[19:45:05] <StephenLynx> what
[19:48:03] <MacWinne_> StephenLynx, you're the man!
[19:48:18] <MacWinne_> thanks so much.. i think I would have spent all day banging my head on this
[19:48:42] <StephenLynx> https://www.youtube.com/watch?v=NisCkxU544c
[19:55:09] <m4k> Is it a good idea to keep master collections? E.g. I have media and there are different types of it: front, gallery, pdf, video etc. Do I create a reference master collection for media type?
[19:55:58] <StephenLynx> can't you just have a field with a string that says the type?
[19:56:04] <StephenLynx> do you need metadata on the type?
[19:56:17] <m4k> Yes there will be few.
[19:56:41] <StephenLynx> if you don't need to query the metadata along with it, it's ok to have a collection with the media types.
[19:56:48] <StephenLynx> if you do, I would duplicate said data.
[19:57:34] <m4k> But the problem is, if the metadata changes then I have to change it in each document.
[19:58:03] <m4k> I am just trying to understand best practice.
[19:59:15] <StephenLynx> yeah, if the metadata changes
[19:59:23] <StephenLynx> then its not great to have it duplicated.
[19:59:28] <StephenLynx> what kind of metadata is it?
[20:00:11] <m4k> some type of icons and name of type.
[20:00:23] <StephenLynx> can't you let the application decide that?
[20:00:26] <m4k> also there is a parent-child relationship
[20:00:32] <StephenLynx> you are coupling front-end with database.
[20:00:56] <StephenLynx> it's much easier to decide these details at run time.
[20:01:01] <StephenLynx> rather than store them in the database.
[20:01:26] <m4k> you are right
[20:01:36] <StephenLynx> in my software I just store a string and everything is decided based on the value of said string.
[20:02:52] <m4k> But what about the parent child relations.
[20:03:14] <StephenLynx> what child?
[20:03:18] <StephenLynx> you just need it as a string.
[20:03:29] <StephenLynx> type: "something"
[20:03:34] <StephenLynx> KIS
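
StephenLynx's approach, sketched: the database stores only a discriminator string, and the application maps it to presentation details at run time (the mapping below is hypothetical):

    db.media.insert({ type: "video", path: "/files/a.webm" })
    // in application code:
    var TYPE_ICONS = { video: "icon-video.png", pdf: "icon-pdf.png", gallery: "icon-gallery.png" };
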
[20:06:56] <m4k> That was for media. I have another field called Amenities: it has parents like outdoor/indoor/flooring etc., and for those we need a parent-child relationship.
[20:40:58] <Eloquence> Hi. Trying to get mongo working on Ubuntu 15.04, reinstalled packages to 2.6.3 with --reinstall, service always fails to start with "status=4/NOPERMISSION" in syslog - any ideas?
[20:42:29] <StephenLynx> are you installing from 10gen repositories?
[20:43:35] <Eloquence> nope, I think these are the default vivid packages
[20:43:44] <Eloquence> erik@empathy:/etc/apt/sources.list.d$ cat mongodb-org-3.0.list
[20:43:44] <Eloquence> # deb http://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.0 multiverse # disabled on upgrade to vivid
[20:43:57] <StephenLynx> the default packages are crap.
[20:44:08] <StephenLynx> use the ones from 10gen with the updated version
[20:44:11] <StephenLynx> 2.6 is ancient
[20:44:46] <Eloquence> https://jira.mongodb.org/browse/SERVER-17742 is closed WONTFIX though indicating that 15.04 is not officially supported?
[20:45:40] <akoustik> yeah i was about to say, the official mongo packages are only supported on LTS releases
[20:45:40] <StephenLynx> I can't see how 10gen is responsible for ubuntu's default packages.
[20:46:15] <Eloquence> I'm not looking to point fingers, just trying to get it to work :)
[20:46:59] <StephenLynx> already told you, use 10gen's packages.
[20:47:26] <akoustik> StephenLynx: in fairness, he did say he's on 15.04, so he won't be able to get a supported package from 10gen.
[20:47:36] <StephenLynx> hm?
[20:47:40] <StephenLynx> let me check that.
[20:47:47] <akoustik> http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/
[20:48:06] <akoustik> MongoDB only provides packages for 64-bit long-term support Ubuntu releases.
[20:48:11] <StephenLynx> aaah yeah
[20:48:12] <akoustik> endquote
[20:48:15] <StephenLynx> nvm :^)
[20:48:17] <akoustik> heh
[20:48:33] <akoustik> Eloquence: so yeah, how important is 15.04 to you? heh
[20:49:08] <akoustik> if the answer is "very" then you can choose to be a cowboy and try to use the package anyway, dunno how that works.
[20:49:59] <akoustik> but with this NOPERMISSION thing, it might be an issue of file permissions/ownership in directories used by mongo. say, maybe /var/log, /var/lib/mongo, /var/run/mongodb or something like that.
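
The sort of check akoustik means, sketched; the paths and the service user's name vary by package, so treat these as guesses:

    ls -ld /var/lib/mongodb /var/log/mongodb
    sudo chown -R mongodb:mongodb /var/lib/mongodb /var/log/mongodb
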
[20:51:17] <Eloquence> *nod* will poke a bit more. the recommended workaround is to use debian wheezy debs, which I might try
[20:51:37] <akoustik> gl, hf ;)
[20:51:53] <Eloquence> :)
[21:01:14] <Eloquence> OK, in case anyone runs into the same thing -- needed to upgrade with mongod --upgrade & fix permission errors tracked down via mongodb logs. seems fixed now but I take the point that 2.6.3 is ancient.
[21:01:51] <akoustik> nice.
[21:02:16] <akoustik> yeah, funny that you have to downgrade your OS in order to upgrade your DB, but that's life.
[21:02:19] <akoustik> i guess
[21:26:31] <akoustik> so i'm curious. when my replica set is in a "bad" state, say maybe some hosts are down or something, i see the running mongod instances spitting out huge numbers of attempted connections, to the remote host as well as locally. should i be concerned? would it be smart to reduce the rate of connection attempts? does it matter?
[21:45:51] <fejes> Hello: I have a basic question. I’m querying a large collection to find out which of a given set of records exist ({_id:{$in:[a,b,c,d,e….]}}, {_id:1})
[21:46:13] <fejes> Is there a more efficient way to do that query?
[21:46:37] <fejes> I’m currently running with batches of 5000 at a time, but I could change that.
[21:47:29] <fejes> I also see that mongo is reading records from the disk, despite the fact that all I need to touch is the _id field, so it should be a covered query, though it doesn’t seem to be acting that way.
[21:47:42] <fejes> anyone have any insight they can share?
[21:50:47] <akoustik> so the array you're checking membership of - that has 5000 elements?
[21:52:40] <fejes> yes
[21:53:15] <fejes> I can make it smaller, but I don’t expect checking 1-by-1 to be any faster
[21:53:57] <fejes> 5000 is probably too much, but is there a better way to do it that uses the idhack plan without doing it one-by-one?
[21:54:37] <akoustik> nah, i mean that's what you gotta do if that's what you gotta do. i would try to avoid it though. for example, if it were an array of numbers, i would see if i could use numerical range testing or comparison.
[21:55:14] <akoustik> oh wait
[21:55:23] <akoustik> haha sorry i'm distracted, you're checking IDs...
[21:57:33] <fejes> yep - only on the _id.
[21:57:37] <fejes> (-:
[21:57:43] <fejes> you’d think this would be fast and easy,
[21:58:08] <akoustik> so it depends on the datatype you're using as an ID. in general, i don't think there's a better method than just using $in and maybe trying to make sure your array is ordered. maybe somebody else has some wisdom. i'm a noob.
[21:58:08] <fejes> however, I keep seeing that mongo is reading from disk, and returning a much higher reslen than I expect,
[21:58:21] <fejes> ah… no worries.
[21:58:29] <fejes> Glad you’re willing to look at it.
[21:58:42] <fejes> and you’re the only one who’s answered so far, so much appreciated (-:
[21:59:01] <akoustik> as far as reading from disk... do you see differences in disk access depending on how large your array is?
[21:59:17] <fejes> array is already ordered…. so yeah. and the _ids are strings (hashes)
[21:59:53] <akoustik> yeah i'm curious anyway, i'm about to deploy a service that's going to be dealing with lots of performance issues
[22:00:16] <akoustik> so no worries
[22:00:17] <fejes> It looks like a 20%-40% nscannedObjects ratio.
[22:00:37] <fejes> well, maybe I can help you out, if you have any issues.
[22:00:51] <fejes> This one, though, is stumping me because it’s so simple, and yet not performing as expected.
[22:01:41] <akoustik> heh, thanks. i have my stuff "working" at least, and right now i'm just concerned with getting it out the door. i just lurk and see if i can learn or help.
[22:02:06] <fejes> nice..
[22:02:08] <fejes> I should do the same.
[22:02:17] <akoustik> yeah i get ya. i would just play around with various scales of your query array and see what happens.
[22:02:23] <fejes> for the record, this is what the log shows, in case it stimulates anything for others:
[22:02:24] <fejes> 2015-07-09T14:14:40.348-0700 I QUERY [conn7440] warning: log line attempted (122k) over max size (10k), printing beginning and end ... query omicia_cache_4_2_0_production.cache query: { _id: { $in: [ "00161cb0b013748725bb5898be2c2391", "00324fe325a361098afda47e4aa6c03b", "00446e81510a3a731f6a9deaa8fbde0f" ] } } planSummary: IXSCAN { _id: 1 } cursorid:36274368818 ntoreturn:3477 ntoskip:0 nscanned:1695 nscannedObjects:848 keyUpdates:0 writeConflicts:0
[22:02:24] <fejes> numYields:43 nreturned:848 reslen:4196057 locks:{ Global: { acquireCount: { r: 44 } }, Database: { acquireCount: { r: 44 } }, Collection: { acquireCount: { r: 44 } } } 2439ms
[22:02:50] <fejes> that reslen seems way too large for returning 1700 hashes.
[22:04:03] <fejes> each hash should not be 2.5kb. it’s only a 32 character hash.
[22:04:45] <akoustik> sure does.
[22:05:17] <fejes> which means that mongo is fucking something up spectacularly.
[22:05:46] <akoustik> have you played with querying fields besides _id?
[22:06:12] <fejes> not for this particular context.
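
One way to chase the covered-query puzzle from above: a covered plan reports totalDocsExamined: 0 under executionStats, whereas fejes's log line shows nscannedObjects well above zero. A shell sketch, with truncated hashes standing in for the real values:

    db.cache.find(
        { _id: { $in: ["00161cb0...", "00324fe3..."] } },
        { _id: 1 }
    ).explain("executionStats")
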
[22:25:03] <kryo_> i have a collection with a unique integer field called "num" with the requirement that no 4 elements will ever have adjacent "num" fields (i.e. [1,3,4,5[1,2,3,4])
[22:25:24] <kryo_> (i.e. [1,3,4,5] is ok but not [1,2,3,4])
[22:25:43] <kryo_> sorry, pressed enter accidentally while typing :P... what's the best way to enforce this requirement?
[22:26:24] <fejes> in code before you send it to the db.
[22:27:15] <kryo_> right, but the only way i can think of is to execute 4 different "count" queries
[22:27:29] <kryo_> is there a way to do it in one query?
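
A one-query version of kryo_'s check, sketched: fetch the six possible neighbours of the candidate value in a single find, then test the four windows in code. Note it is not atomic, so a concurrent insert can still race it:

    var n = 10; // hypothetical candidate value for "num"
    var present = {};
    db.coll.find({ num: { $in: [n-3, n-2, n-1, n+1, n+2, n+3] } }, { num: 1 })
           .forEach(function (d) { present[d.num] = true; });
    present[n] = true;
    var ok = true;
    for (var start = n - 3; start <= n; start++) {
        // any full window of four consecutive values means n must be rejected
        if (present[start] && present[start + 1] && present[start + 2] && present[start + 3]) {
            ok = false;
        }
    }
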
[23:34:50] <Doyle> What happens when the mongod.log volume hits 100%? All that's on it is the mongod.log file.
[23:35:05] <Doyle> I emptied it, but it's not showing any activity after that.
[23:46:11] <Doyle> Any assistance would be appreciated
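
Likely relevant to what Doyle hit: if the active log file is removed or replaced, mongod keeps writing through its old file handle, so the new empty file shows no activity. Ask the server to reopen its log instead of editing the file in place:

    db.adminCommand({ logRotate: 1 }) // from the shell
    // or from the OS: kill -SIGUSR1 <mongod pid>
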