PMXBOT Log file Viewer


#mongodb logs for Wednesday the 1st of July, 2015

[00:00:28] <Freman> after that the memory usage creeps up to 51222608 and hangs out around there
[00:01:03] <Freman> so you're probably right, it's not mongo alone, it's mongo and php
[00:01:18] <Boomtime> the documents involved and php
[00:01:20] <Freman> (ie: php hasn't allocated $x, and mongo needs $y to finish loading the module properly)
[00:01:27] <Freman> s/module/document/
[00:02:00] <Freman> probably a document with lots of named arrays in it, php hates those
[00:02:31] <Freman> my apologies for blaming mongo
[00:03:20] <Boomtime> no worries, i guess you can figure out ways to combat this now
[00:03:21] <Freman> in the real world this will never be run, it's just a proof of concept to convince the bosses to let us stop sending meaningless data to mongo that would be better stored in influx or something to get pretty graphs
[00:03:37] <Boomtime> fair enough
[00:03:42] <Freman> plan being to reduce the size of these documents and the quantity of them
[00:09:54] <Freman> for reference it stops dying at 512 meg
[00:11:14] <Boomtime> well.. at least you know what it takes
[04:38:40] <grandy> confused about the syntax of using aggregation to count users based on a sub-document's attribute...
[04:55:03] <Boomtime> hi grandy, that description sounds like it could be done with a query, but aggregation would be fine too - can you pastebin example document and what you want to count?
[04:55:20] <Boomtime> also, let us know what you've tried
[05:03:14] <grandy> Boomtime: thanks much, just figured it out, I think the idea of the structure of method: {... args ... } was a bit counter-intuitive when looking at the stuff i typed in
[06:45:04] <angular_mike_> I'm struggling with finding data on length and content constraints for different data types (string, date, document, etc.) in the documentation. Anyone can help?
[09:16:30] <pamp> Hi
[09:17:09] <pamp> is there any tests to compare .net driver vs java driver performance?
[10:45:37] <iszak> Can I create a database after setting up a replica set?
[11:27:30] <_Rarity> Hello. Can someone help me solve a problem when connecting to mongodb remotely?
[11:29:11] <_Rarity> When on host machine, I can easily "$ mongo localhost:27017 -u USER -p PASS". But when trying to login remotely, I get the error " 18 { ok: 0.0, errmsg: "auth failed", code: 18 }"
[11:30:11] <_Rarity> I have made sure that I specifically use the "admin" database for login. I have also made sure that the port is open and that the server responds on it
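_Rarity says the admin database is already in use; for completeness, the two usual ways to name it explicitly from the shell are shown below (host and credentials are placeholders, as in the messages above):

```
mongo remote.example.com:27017/admin -u USER -p PASS
mongo remote.example.com:27017 -u USER -p PASS --authenticationDatabase admin
```

If both of these still fail remotely while localhost works, the remaining suspects are usually bind/firewall configuration or a user created on a different database than expected.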
[12:00:38] <adsf> if i have a mongocursor in java that i have made changes to, how do i go about updating the records?
[12:00:42] <adsf> mongo 3
[12:06:41] <cheeser> made changes to the cursor?
[12:07:16] <adsf> my terminology is most definitely wrong :p
[12:07:35] <adsf> made a query, got back cursor, used iterator
[12:07:58] <adsf> in a while loop made some changes. Can post code if you want :)
[12:08:25] <cheeser> made some changes to the documents that came back, yes?
[12:08:45] <adsf> yup
[12:08:55] <adsf> they went through the while loop. set some values in them
[12:09:05] <cheeser> you'll have to save those manually back to the db
[12:09:28] <cheeser> but if you didn't keep a reference to them as you iterate the cursor, the GC has them.
[12:09:55] <adsf> i have a reference before i iterate
[12:10:06] <adsf> but im unsure how to turn the cursor back into something i can use
[12:10:11] <adsf> lemme gist it real quick
[12:10:32] <cheeser> you wouldn't use the cursor to write
[12:11:41] <adsf> https://gist.github.com/anonymous/7737912891f028c0c7fd
[12:12:37] <adsf> makes sense not to use cursor to write
[12:13:52] <cheeser> how are you getting a Mongo_Game out of the cursor?
[12:14:29] <adsf> just using the iterator, like i would for other iterators, but im not 100% sure it applies here :)
[12:14:32] <adsf> hence my confusion :)
[12:14:50] <cheeser> have you tried running this code yet? or just in the writing process?
[12:15:01] <adsf> code reads fine, gets results
[12:15:07] <adsf> iterate seems to work fine too
[12:15:12] <adsf> no errors, i just cant write it back :)
[12:15:25] <adsf> model seems to be working
[12:15:41] <cheeser> did you register any codecs for Mongo_Game?
[12:15:47] <adsf> yup, registered
[12:15:53] <cheeser> ah. there we go.
[12:16:04] <adsf> im listening!
[12:16:12] <cheeser> because out of the box, the java driver doesn't support POJOs like that so I was curious.
[12:16:20] <adsf> ahh
[12:16:31] <cheeser> i'm currently adding a feature that will let you generically pull POJOs out like that
[12:16:40] <cheeser> without writing a custom codec, that is.
[12:16:51] <adsf> thats handy
[12:17:01] <adsf> currently though, i was trying to just use the iterator again to construct a list
[12:17:03] <cheeser> hopefully :)
[12:17:32] <adsf> which it doesnt seem to like :p
[12:17:59] <adsf> looking for WriteModel of some type, and i couldnt find many examples on google!
[12:18:09] <adsf> List<WriteModel<Mongo_Game>> update_mongo = new ArrayList<WriteModel<Mongo_Game>>();
[12:18:12] <adsf> something like that im guessing
[12:23:19] <adsf> i think i may have figured it out, happy days
[13:00:45] <thikonom> hi everyone, I am currently planning to upgrade to mongo 3.0 . Does anybody know if WiredTiger needs extra space at the time of the restoring phase ?
[13:20:17] <fxmulder> how much memory does mongodb use when initializing a new replica member? I have 32G of ram and 64G of swap in this thing and it died from being out of memory
[13:47:42] <adsf> cheeser: what would be amazing would be a json2model for codecs :p
[13:48:53] <adsf> c 13
[13:48:56] <adsf> fail
[14:28:38] <d-snp> Axy: that step "replace the binaries" was exactly what I meant when I said that installing mongodb 3.0 will automatically remove the mongodb 2.6
[14:29:12] <Axy> d-snp thanks
[14:35:39] <cheeser> adsf: it'll be full duplex support for saving/loading POJOs
[14:39:32] <adsf> cheeser: sounds good, something like json2pojo
[14:39:33] <adsf> so handy
[14:40:57] <GothAlice> Thanks, JIRA, for sending two copies of every notification my way…
[14:41:17] <GothAlice> Sudden burst of activity on three tickets and I've got 184 unread messages. >_<
[14:42:58] <scottbessler> leap second made my replset go nutso
[14:43:01] <scottbessler> 2015-06-30T19:59:59.001-0400 [conn864167] getmore local.oplog.rs cursorid:958342653321 ntoreturn:0 keyUpdates:0 numYields:0 locks(micros) r:59 nreturned:1 reslen:173 1271309325ms
[14:43:21] <scottbessler> and now its in some crazy state
[14:43:22] <GothAlice> That's a rather surprising request latency, there.
[14:43:37] <scottbessler> where the secondaries are super slammed with load
[14:43:48] <scottbessler> and lagging on replication since that point
[14:43:59] <scottbessler> @GothAlice ya dont say ;)
[14:44:12] <cheeser> adsf: it's essentially Morphia in the driver. https://github.com/mongodb/morphia
[14:44:52] <adsf> ahh cool
[14:44:52] <GothAlice> scottbessler: Alas, none of my own clusters noticed the rather unusual situation last night. (Also so happy that today is a holiday, in case there were issues.)
[14:48:51] <GothAlice> Big bada boom.
[14:55:57] <pamp> Is there any benchmark comparing .net and java driver performance?
[15:00:52] <grandy> hmm, trying to figure out how to sum the price and group by item.name .. any advice? document looks like: {name: 'henry', items: [{name: 'a', price: 1}, {name: 'b', price: 3}] } ...
[15:02:40] <grandy> hmm, trying to figure out how to sum the price and group by item.name .. any advice? document looks like: {name: 'henry', items: [{name: 'a', price: 1}, {name: 'b', price: 3}] } ...
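The query grandy is after is a classic $unwind-then-$group pipeline. In the mongo shell it would look roughly like `db.coll.aggregate([{ $unwind: "$items" }, { $group: { _id: "$items.name", total: { $sum: "$items.price" } } }])` (collection name assumed). The same two stages are simulated below in plain JavaScript on sample documents, so the result can be checked without a server; the second document is made up to show grouping across documents:

```javascript
// Sample documents shaped like grandy's example (the second one is invented).
const docs = [
  { name: "henry", items: [{ name: "a", price: 1 }, { name: "b", price: 3 }] },
  { name: "jane",  items: [{ name: "a", price: 2 }] },
];

// $unwind: emit one document per element of the items array
const unwound = docs.flatMap(d => d.items.map(item => ({ ...d, items: item })));

// $group by items.name with { $sum: "$items.price" }
const totals = {};
for (const d of unwound) {
  totals[d.items.name] = (totals[d.items.name] || 0) + d.items.price;
}

console.log(totals); // { a: 3, b: 3 }
```

The grouping key is the sub-document field, which is only addressable after $unwind has flattened the array.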
[15:07:35] <adsf> has the logger class changed for java in mongo 3?
[15:07:47] <adsf> having trouble setting a level to not show a lot of the chatter
[15:33:46] <grandy> ^^ wondering if someone could help me understand that query
[16:55:49] <schmichael> is there a way to log all queries hitting a server w/2.6.7? i try using setProfilingLevel(2,0), but it still only seems to log queries that take >=1ms
[16:56:24] <GothAlice> schmichael: If your server is configured for operation in a replica set (i.e. has an oplog) you should be able to tail that to watch all activity.
[16:56:52] <honigkuchen> can mongodb also be used as an triple store database and if yes, does it have disadvantages to normal triple store databases?
[16:57:03] <schmichael> GothAlice: oplogs include queries? i assumed they only included mutations
[16:57:29] <GothAlice> honigkuchen: MongoDB can, technically, be used as a method of storing n-tuiples. It's not optimized for this use, however.
[16:57:37] <GothAlice> n-tuples, that is.
[16:57:47] <GothAlice> schmichael: I'm double checking that at the moment.
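For what it's worth, schmichael's assumption is right: the oplog records only write operations. For reads as well, the profiler is the usual tool; at level 2 every operation is recorded in the db.system.profile capped collection, even though the mongod log file itself only prints operations slower than the slowms threshold. A minimal shell sketch:

```
// record every operation on the current database, regardless of duration
db.setProfilingLevel(2)

// the profiler writes to db.system.profile, not (only) the log file
db.system.profile.find().sort({ ts: -1 }).limit(5).pretty()
```

This may explain the ">=1ms" observation: the log file filters by slowms, while system.profile does not.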
[16:57:55] <honigkuchen> what does this mean, being not optimized for that?
[16:58:01] <honigkuchen> just problems of speed?
[16:58:07] <GothAlice> honigkuchen: No effort was spent to optimize that use case. (None.)
[16:58:26] <honigkuchen> speed or practical programming?
[16:58:45] <GothAlice> Just that there are multiple ways one could implement that, each with specific trade-offs unique to MongoDB.
[16:59:25] <GothAlice> {_id: ObjectId(…), tuple: [1, 2, 3]} vs. {_id: …, tuple: {foo: 1, bar: 2, baz: 3}}, etc.
[17:00:05] <GothAlice> One could simply treat a document as a "named tuple".
[17:01:16] <GothAlice> The difficulty comes down to exactly how you want to use those values. Set operations? (I.e. intersections between records?)
[17:01:40] <honigkuchen> is mongodb ok for ontologies?
[17:02:03] <honigkuchen> because I have heard that ontologies need triple store databases
[17:02:28] <fxmulder> is there a recommended amount of memory for initial replication with a replica set member?
[17:03:25] <GothAlice> honigkuchen: It is until they become extremely hierarchical (no recursive queries in MongoDB, so careful data design is needed) or if the structure is better modelled as a graph, for which a real graph database like Neo4j _will_ do a better job. A triplestore is just a form of entity-attribute-value, that is, a hack to work around the limitations of SQL. Limitations that don't apply to MongoDB.
[17:04:12] <GothAlice> Why have a series of records each describing one attribute of an object instead of having a single object with multiple attributes? (Which MongoDB does natively. That's the entire point, actually. ;)
[17:05:09] <honigkuchen> GothAlice: this was very good information
[17:05:38] <fxmulder> or is there a way of calculating memory requirements?
[17:06:15] <GothAlice> fxmulder: MongoDB uses memory mapped files, so the "optimal" available memory is the same as the on-disk size, + room for "working memory" to answer queries and manage connections.
[17:06:34] <GothAlice> Working memory size will fluctuate with connection count and activity.
[17:06:56] <fxmulder> well I've been trying to get a new replica set member going and mongodb keeps dying due to OOM
[17:07:01] <GothAlice> fxmulder: https://gist.github.com/amcgregor/4fb7052ce3166e2612ab#memory
[17:07:08] <GothAlice> Using WiredTiger?
[17:07:28] <fxmulder> how can I tell?
[17:07:45] <GothAlice> That means you probably aren't. It'd be the storageEngine argument on the mongod command line, or in the mongod.conf file.
[17:08:00] <fxmulder> no then we are not
[17:08:35] <GothAlice> Hmm. Unless _extremely_ constrained or using experimental code, I've never seen mongod trigger the oom-killer.
[17:08:50] <GothAlice> Running on a 256MB VM?
[17:08:51] <GothAlice> :P
[17:09:40] <fxmulder> no, this has 32G of ram and 64G of swap
[17:09:50] <fxmulder> physical machine
[17:10:06] <GothAlice> Then I'd open a JIRA ticket to get some upstream assistance diagnosing that. An OOM event with that much hardware should simply not happen.
[17:12:14] <GothAlice> fxmulder: I have to ask, 'cause it's a possibility: are you running 32-bit mongod?
[17:12:35] <GothAlice> That would certainly run into memory issues.
[17:13:12] <fxmulder> no, 64 bit machine
[17:13:36] <GothAlice> 64-bit machine, 64-bit kernel, 64-bit userland, and 64-bit service. Each of those can drop down into 32-bit on most architectures. ;P
[17:13:46] <GothAlice> I.e. running a 32-bit application on a multilib 64-bit host.
[17:14:11] <fxmulder> ii mongodb-org 3.0.4 amd64 MongoDB open source document-oriented database system (metapackage)
[17:14:13] <fxmulder> its 64 bit
[17:17:14] <GothAlice> Definitely open a ticket, then. If you have commercial support, make sure you submit the ticket that way to get the attention it deserves. :)
[17:17:31] <fxmulder> hmm, I had plenty of swap space free when this died
[17:17:47] <fxmulder> Total swap = 64802808kB, Free swap = 64125816kB
[17:18:26] <fxmulder> I will open a ticket though, thanks
[17:27:39] <ejb> GothAlice: ping
[17:30:53] <GothAlice> ejb: pong
[17:31:18] <ejb> GothAlice: may I pm? re: some old code you helped me with
[17:31:27] <GothAlice> Yeah, no worries.
[18:11:41] <Havalx> Hello.
[18:18:46] <Havalx> when indexing documents; can I index by key name and/or value name?
[18:20:38] <cheeser> indexes are created based on the values of the fields...
[18:33:05] <cittatva> hello! can anyone help me mount some EBS snapshots from mongolab backup on an ec2 machine so I can verify the backup? I've gotten as far as creating volumes and mounting them, but mdadm doesn't see any recognisable superblocks
[18:53:13] <cittatva> I figured it out - needed to use lvm2 instead of mdadm
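For anyone who lands on the same problem: when a snapshot set was built with LVM rather than md-raid, the volumes are activated through the lvm2 tools before mounting. A rough sketch, with placeholder volume group and logical volume names:

```
lvscan                                  # list logical volumes found on the attached devices
vgchange -ay                            # activate every discovered volume group
mount /dev/VG_NAME/LV_NAME /mnt/backup  # VG_NAME/LV_NAME are placeholders
```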
[19:01:01] <ericmj> according to this: https://github.com/mongodb/specifications/blob/master/source/crud/crud.rst#update-vs-replace-validation drivers should validate update/replace documents
[19:01:16] <ericmj> but when testing the node driver it doesn't seem to do any validation
[19:01:35] <keeger> hello
[19:02:06] <keeger> i am planning to make a locking collection, to use for some application transaction support
[19:02:25] <keeger> and i was reading that mongo doesn't do a good job of managing disk space on deletes?
[19:02:51] <keeger> i'd be doing insert key random value, and setting TTL / deleting when done. then repeat
[19:04:00] <keeger> would mongo be smart about just re-using the allocated space that was deleted?
[19:17:30] <cheeser> keeger: yes, it would.
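A lock collection along the lines keeger describes can combine a unique index (for mutual exclusion) with a TTL index (to expire stale locks). A minimal shell sketch, with made-up collection and field names; note the TTL monitor only runs about once a minute, so expiry is approximate:

```
db.locks.createIndex({ name: 1 }, { unique: true })
db.locks.createIndex({ createdAt: 1 }, { expireAfterSeconds: 60 })

// acquire: the insert fails with a duplicate-key error if the lock is already held
db.locks.insert({ name: "job-42", createdAt: new Date() })

// release explicitly when done, or let the TTL monitor reap a stale lock
db.locks.remove({ name: "job-42" })
```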
[19:22:52] <adsf> so i have a query, which i take the return and make a List<My_model>, then i do some work on the list, then i want to bulkwrite by passing a List<WriteModel<My_Model>> but im getting duplicate key errors (because of an index)
[19:23:01] <adsf> doesnt bulkwrite do kind of an upsert
[19:23:04] <adsf> this is in java :)
[19:24:46] <cheeser> using com.mongodb.client.MongoCollection#bulkWrite()?
[19:25:38] <adsf> yeah, collection.bulkwrite
[19:25:55] <adsf> passing a list of writemodels
[19:26:40] <cheeser> how are you creating the WriteModels?
[19:26:57] <adsf> https://gist.github.com/anonymous/4f6adc53f52b49af10a7
[19:27:14] <cheeser> yeah. you're telling it to do an insert on each model.
[19:27:20] <cheeser> you want UpdateOne
[19:27:29] <adsf> cus im using just a list?
[19:27:56] <adsf> cant pass list of models to updateone :(
[19:28:14] <cheeser> you'd create a new Model for each item.
[19:28:15] <adsf> or are you saying, dont bulk at all and just do it in a loop?
[19:28:35] <cheeser> take that code and s/InsertOneModel/UpdateOneModel/
[19:29:18] <adsf> oh god, if its that simple
[19:29:20] <adsf> so silly
[19:29:52] <adsf> hrmm
[19:30:19] <adsf> no love, lemme see the error
[19:30:38] <adsf> cant construct with the game
[19:31:03] <adsf> needs a bson filter
[19:32:53] <adsf> gotta modify my list too i think, trying that :)
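For anyone following along: the constructor adsf runs into wants a filter plus the change to apply, and since whole modified documents are being written back, ReplaceOneModel is the closer fit. A rough Java sketch against the 3.x driver; Mongo_Game, its getId() accessor, the games list, and the collection are assumptions based on the gists above:

```java
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.ReplaceOneModel;
import com.mongodb.client.model.UpdateOptions;
import com.mongodb.client.model.WriteModel;
import java.util.ArrayList;
import java.util.List;

// Replace each modified POJO wholesale, keyed on _id.
List<WriteModel<Mongo_Game>> writes = new ArrayList<>();
for (Mongo_Game game : games) {
    writes.add(new ReplaceOneModel<>(
            Filters.eq("_id", game.getId()),     // the Bson filter the constructor asks for
            game,                                // the modified document
            new UpdateOptions().upsert(true)));  // insert when no match exists
}
collection.bulkWrite(writes);
```

With upsert enabled this behaves like the "kind of an upsert" adsf was expecting, and avoids the duplicate-key errors that plain InsertOneModel produces.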
[19:43:43] <adsf> cheeser: gotta head off but will follow up tomorrow! :) Enjoy your day!
[19:43:50] <cheeser> later
[19:44:14] <adsf> thanks for all your help!
[19:44:17] <cheeser> any time
[20:47:22] <keeger> cheeser, thanks!
[20:54:21] <cheeser> sure
[23:13:13] <hahuang65> anyone here know what we should do if an arbiter keeps complaining that the other nodes "thinks that we're down"?