PMXBOT Log file Viewer

#mongodb logs for Monday the 6th of July, 2015

[00:34:13] <a|3x> hi
[00:35:30] <EXetoC> ho
[00:36:25] <a|3x> i'm having difficulties with the c++ driver
[00:36:47] <a|3x> there appears to be some kind of concurrency issue
[00:43:43] <a|3x> i have a bunch of threads running, each with their own ScopedDbConnection, but they still manage to step on each other (query returning results when there should be none)
[01:15:04] <Havalx> update - http://pastebin.com/7JehaMWU
[01:52:38] <jecran> hi .... how can i force duplicate _id 's?
[01:55:25] <cheeser> you can't
[01:55:58] <jecran> ok easy enough
[02:25:59] <atnakus> Is there some kind of pager function in mongo shell which allows you to scroll through the results both up and down?
[03:46:27] <cheeser> atnakus: you can page in the shell window to go back. ;)
[04:00:08] <Jonno_FTW> how do I make it so mongod only allows collections from authorized users? ie. you have to have a username/pw to access the db
[04:00:20] <Jonno_FTW> s/ll/nn/
[04:01:27] <cheeser> http://docs.mongodb.org/manual/core/authentication/
[04:05:30] <Jonno_FTW> haha It was allowing anonymous local connections, but now I checked a remote client and it won't let me in
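What Jonno_FTW ran into is the default pre-2.6 behavior of allowing unauthenticated access; turning auth on is a one-line change in the 2.4-era (non-YAML) config file. A minimal, illustrative sketch (paths and the bind address are placeholders, not taken from the log):

```ini
# /etc/mongod.conf (pre-YAML format) -- illustrative sketch
# Require a username/password for every connection:
auth = true
# Listen on all interfaces; remote clients must still authenticate:
bind_ip = 0.0.0.0
```

Users and roles still have to be created first (e.g. via db.addUser() in the 2.4 shell) before enabling auth, or you can lock yourself out of everything but the localhost exception.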
[04:12:50] <pylua> how to optimize mongodb? it uses 100% cpu
[04:14:06] <cheeser> do you have enough RAM? might be it's swapping in and out excessively.
[04:56:33] <dimon222> >100% cpu
[04:56:34] <dimon222> wow
[04:56:47] <dimon222> how many cores do u have
[05:04:50] <xDHILEx> Has anyone worked with Twilio ?
[05:26:06] <pylua> dimon222:4 cores
[05:36:19] <EXetoC> pylua: perform bulk operations if necessary, and look into indexing
[05:36:24] <EXetoC> oh and what he said about RAM
[07:41:06] <xDHILEx> I just posted this on SO if anyone can lend a hand. http://stackoverflow.com/questions/31239617/twilio-account-verification-example-app-typeerror-on-server-restart
[09:44:55] <pamp> How can I connect through the .NET driver 2.0 to a mongodb database?
[09:45:33] <pamp> Am I missing something in my connection string? client = new MongoClient(@"mongodb://user:pwd@server:27018/admin")
[10:09:00] <CharlesFinley> Hi, why does this query return 0 results? db.getCollection('companies').find({_id: '559a4c7c18b5df6b032b353f'})
[10:09:54] <CharlesFinley> There is a record with that id
[10:11:36] <d-snp> CharlesFinley: does db['companies'].find({...}) work? or db.companies.find({..}) ?
[10:11:49] <d-snp> I've not seen db.getCollection before, though I suppose it should work
[10:11:58] <CharlesFinley> It does work - shows all companies.
[10:12:08] <CharlesFinley> that's robomongo's default apparently
[10:12:14] <d-snp> ok
[10:12:29] <CharlesFinley> oooh
[10:12:32] <CharlesFinley> here's the fix: db.getCollection('companies').find({'_id': ObjectId('559a4c7c18b5df6b032b353f')})
[10:12:35] <d-snp> right
[10:12:42] <CharlesFinley> lol.
[10:12:43] <d-snp> that was gonna be my next uncertain suggestion :P
[10:12:53] <CharlesFinley> :-)
[10:12:54] <d-snp> oh wait it was not
[10:12:57] <d-snp> objectid.. hm
[10:13:00] <d-snp> I forgot about that
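The fix CharlesFinley found hinges on a type distinction: _id values are stored as a distinct BSON type (ObjectId), so a plain hex string never compares equal to one. A stdlib-only sketch, using a stand-in class (with pymongo you would use bson.ObjectId itself):

```python
# Illustrative stand-in for bson.ObjectId -- only here to show the
# type mismatch; it is NOT the real bson class.
class ObjectId:
    def __init__(self, hex_str):
        self.hex = hex_str

    def __eq__(self, other):
        # Equal only to another ObjectId with the same bytes,
        # never to a bare string.
        return isinstance(other, ObjectId) and other.hex == self.hex

stored = ObjectId("559a4c7c18b5df6b032b353f")

# The failing query compared the stored ObjectId against a string:
assert stored != "559a4c7c18b5df6b032b353f"
# Wrapping the hex string restores the match:
assert stored == ObjectId("559a4c7c18b5df6b032b353f")
```

The same applies in every driver: find({"_id": "559a..."}) silently matches nothing, while find({"_id": ObjectId("559a...")}) finds the document.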
[10:48:57] <iszak> Is there a way to make push with aggregates unique?
[11:58:09] <deathanchor> bl
[12:46:25] <pschichtel> Hi! I'm interested in implementing a paginated list with a mongodb. the collection will have a lot of elements, so I thought about implementing this using seek-pagination. In MySQL I'd use an auto incrementing index field for this, what could I use with mongodb?
[12:55:59] <deathanchor> .sort({ _id : 1 }) ?
[12:56:10] <deathanchor> if you are using ObjectIds.
[12:59:05] <StephenLynx> are _id's sequential?
[12:59:44] <StephenLynx> and afaik, documents are already sorted by default by their insertion order.
[13:05:43] <pschichtel> deathanchor: sure, but the interesting part is the seeking without .skip()
[13:06:20] <pschichtel> I guess I can't use > or < on ObjectId or can I?
[13:06:36] <deathanchor> why not?
[13:06:45] <deathanchor> $gt $lt
[13:07:05] <StephenLynx> I would still use skip and limit.
[13:07:09] <StephenLynx> for pages.
[13:07:27] <StephenLynx> and range for range :v
[13:07:50] <pschichtel> deathanchor: so there is an order defined for ObjectId fields?
[13:07:51] <deathanchor> yeah the key is to use an index so the lookup is fast.
[13:08:15] <deathanchor> http://docs.mongodb.org/manual/reference/object-id/
[13:08:29] <StephenLynx> check if objects are sorted by default by their insertion order.
[13:08:29] <deathanchor> sorting on an _id field that stores ObjectId values is roughly equivalent to sorting by creation time.
[13:08:44] <StephenLynx> if they are, you don't need to sort at all.
[13:09:31] <deathanchor> StephenLynx: $natural is very special
[13:12:47] <EXetoC> you know what mongo needs? more cowbell
[13:18:36] <deathanchor> mongo needs more cake.
[13:18:56] <cheeser> the cake is a lie
[13:19:09] <pschichtel> {$lt: {_id: ObjectId("5577f2221ca883042743771b")}} doesn't seem to work
[13:20:23] <EXetoC> isn't that the wrong format?
[13:20:36] <EXetoC> _id: {$lt: ...}?
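EXetoC's correction is the crux: in a filter document the field name wraps the operator, not the other way around. A sketch of both shapes as plain documents (the structure is identical in the shell or any driver; "last_id" is a placeholder for the ObjectId of the last document on the previous page):

```python
last_id = "559a4c7c18b5df6b032b353f"  # placeholder; really an ObjectId

# Wrong: the operator wraps the field name -- matches nothing useful
wrong = {"$lt": {"_id": last_id}}

# Right: the field name wraps the operator
right = {"_id": {"$lt": last_id}}

# Seek pagination over _id then becomes, per page of n documents:
#   db.coll.find({"_id": {"$gt": last_id}}).sort({"_id": 1}).limit(n)
# which walks the _id index instead of skipping rows like .skip() does.
```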
[13:22:28] <deathanchor> pschichtel: it's monday... you are forgiven.
[13:22:38] <pschichtel> I seriously don't like schema-less databases...
[13:22:59] <deathanchor> pschichtel: I feel the exact opposite, I don't like schema dbs.
[13:23:39] <pschichtel> I also prefer strongly typed languages. I like my shit to crash when I do stupid things
[13:23:58] <deathanchor> Perl: use strict; use warnings;
[13:25:55] <cheeser> /1/1
[13:27:04] <EXetoC> yeah strictness is nice
[13:27:33] <cheeser> java++
[13:27:34] <cheeser> :D
[13:28:14] <EXetoC> oh no
[13:28:45] <EXetoC> and it's nice not to have to worry about key lengths and so on. I haven't had to care yet though
[13:32:33] <deathanchor> at least there are things like mongoengine where you can add application side enforcement of schema
[13:32:54] <gemtastic> Did I hear Java? :D
[13:36:33] <EXetoC> I've barely used the language, but it does seem to be very tedious
[13:38:31] <EXetoC> I'll go do something productive
[13:40:06] <EXetoC> l8r m8s. don't sk8 l8
[13:40:54] <pschichtel> Scala > Java
[13:43:15] <EXetoC> bf > php
[13:44:50] <pschichtel> definitely. but I'm not sure where I'd place javascript compared to PHP ...
[13:48:40] <EXetoC> modern statically typed languages prove that static typing doesn't imply less productivity
[14:09:01] <Nomikos> in the CLI client, if you've done a db.foo.find(), it returns a DBQuery object - is there some way to run that again?
[14:21:48] <pamp> Hi
[14:22:05] <pamp> is it possible with the C# driver 2.0 to shard a collection?
[14:49:21] <jubjub> If i do a query on {name: "john", age: 17}, does mongo perform the query on all docs and short circuit if the name isn't john? Do they check the query params in order?
[15:10:31] <Steve-W> Hi all. I'm using ruby to list collections in a Mongo 2.4 database, and it is only returning the first 101 items. Is there some way of enumerating them all?
[15:18:29] <jubjub> If i am querying to see if a point lies within a polygon which query is fast? $geoWithin or $geointersects?
[15:47:34] <na8flush> hey all, i'm not experienced enough with mongodb to know if this is a bad design pattern or not, so i'm hoping someone might be able to give me some feedback
[15:49:12] <na8flush> on our app, we store user details in an object called UserInfo. When that user authenticates, we generate a token for them, and that gets added as a property to the record. but here's the part i think might have negative performance - we save that object as an entirely new collection, UserInfoWithToken, duplicating all of the info from the UserInfo record
[16:34:08] <asturel> hi, after a while my mongodb server keeps throwing 'DBClientCursor::init call() failed'
[16:34:30] <asturel> restart fixes it
[16:34:38] <StephenLynx> it throws it when you try to do what?
[16:34:44] <asturel> connect
[16:35:08] <asturel> oh maybe i found why.. Mon Jul 6 18:31:32.809 [initandlisten] connection refused because too many open connections: 819
[16:35:17] <StephenLynx> you don't reuse connections?
[16:35:24] <StephenLynx> yeah
[16:35:54] <StephenLynx> most drivers give you a connection pool
[16:36:01] <StephenLynx> check if yours does that.
[16:36:23] <StephenLynx> so you don't have to worry about concurrency, you just reuse it until the application stops
[16:36:44] <symbol> Can someone give me an idea of what sort of issues I could run into with batch updates not being atomic? Does it have to do with interleaving?
[16:37:33] <StephenLynx> IMO, it could perform just some of the updates and then you wouldn't know which ones ran and which didn't?
[16:37:34] <asturel> i use nodejs mongodb native driver
[16:37:44] <StephenLynx> yeah, it gives you a connection pool.
[16:37:46] <StephenLynx> I use it too.
[16:37:54] <StephenLynx> even if the database restarts, you won't know it.
[16:38:06] <StephenLynx> the connection remains valid.
[16:38:25] <symbol> Ah, ok, that makes sense.
[16:39:00] <asturel> u mean keep the connection as global?
[16:39:40] <StephenLynx> as w/e
[16:40:00] <StephenLynx> I have a module that holds the connection and open references to collections on boot.
[16:40:17] <StephenLynx> it ensures all indexes, gather all pointers to collections and then just serves these pointers.
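The module pattern StephenLynx describes can be sketched in a few lines of stdlib Python; _connect() here is a made-up stand-in for a real driver call such as MongoClient(...) in pymongo or MongoClient.connect in the node native driver, whose client object owns the connection pool:

```python
def _connect():
    # Placeholder for a real driver client; illustrative only.
    return object()

_client = None

def get_client():
    """Open the client once at boot and hand out the same instance
    (and therefore the same connection pool) for the life of the
    process."""
    global _client
    if _client is None:
        _client = _connect()
    return _client
```

Every other module asks get_client() for the shared client instead of dialing a new connection per request, which is what keeps the server under its open-connection limit (asturel's "too many open connections: 819").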
[17:59:37] <ToMiles> Any reason a db.repairDatabase of a 10GB db would be allocating and filling with zeroes more than 100 (and counting) data files in the _tmp folder during this process?
[18:00:01] <ToMiles> this has already taken up > 200GB of disk space
[18:05:50] <joannac> ToMiles: shouldn't be
[18:06:41] <ToMiles> so something has gone wrong then probably :-)
[18:07:14] <ToMiles> it's a ceilometer (OpenStack) database the repair is running on
[18:24:04] <danijoo> i deleted a ~ 70gb collection from a database. the server is pretty low on disk (~80% in use, all by mongo). I know that i cant reclaim that 70gb without doing a database repair (which i cant do because there's not enough space on disk).
[18:24:45] <danijoo> will the other collections from the same db start to use that 70gb or are they really "dead space" now
[18:26:18] <StephenLynx> afaik, mongo will reuse the space.
[18:27:28] <danijoo> too bad that you cant repair without having double the size left :/
[18:28:08] <cheeser> iirc, WT is better about that.
[18:30:35] <ToMiles> or in my case 20x the original size :-)
[18:31:04] <ToMiles> just killed it before it filled up my whole disk with useless zero-filled data files
[18:51:37] <coderman1> will mongo use multiple processors for a single query on the same node?
[18:53:41] <MacWinner> hi, i currently have a mongo replica set on a 3-node dedicated cluster.. i'm thinking that I may just want to outsource this and use a mongolab type of service. I don't mind paying a premium for it, but don't want to get gouged either. Any recommendations on this? I'm also open to maybe a nice management tool that lets me deploy and scale out the cluster onto aws or google cloud
[18:54:01] <cheeser> mms.mongodb.com
[18:54:22] <StephenLynx> coderman, I have a hunch that a single query will probably be run on a single thread
[18:54:45] <MacWinner> cheeser, i use mms right now.. does it allow you to deploy cluster members?
[18:55:00] <cheeser> sure
[18:55:08] <cheeser> if you use automation
[18:55:19] <MacWinner> oh.. interesting.. will check it out!
[18:58:14] <MacWinner> does mongoshell use spidermonkey or v8?
[18:58:40] <cheeser> spidermonkey, iirc, though the kernel team is playing with moving to v8
[18:59:21] <akoustik> cheeser, MacWinner: might be the other way around. http://stackoverflow.com/questions/8331099/what-is-the-javascript-engine-that-runs-mongodb-shell *shrug*
[19:00:45] <MacWinner> akoustik, looks like v8 became default in 2.4.. http://docs.mongodb.org/manual/core/server-side-javascript/
[19:02:08] <akoustik> MacWinner: indeed...
[19:02:16] <akoustik> hm.
[19:10:07] <akoustik> anyone familiar with running a replica set with SSL? i think what i want is 1) all communication between replica set members should be via SSL, and 2) client connections should be non-SSL. looks like all i need is to run with net.ssl.mode == "preferSSL" and provide my key files. am i missing anything obvious?
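For akoustik's setup, the net.ssl block in the YAML config is where preferSSL goes. An illustrative sketch only; the file paths are placeholders, and depending on the deployment the members may also need internal auth settings (keyFile or x.509) configured separately:

```yaml
net:
  ssl:
    mode: preferSSL          # intra-cluster traffic uses SSL;
                             # incoming client connections may be plain
    PEMKeyFile: /etc/ssl/mongodb.pem   # server certificate + key
    CAFile: /etc/ssl/ca.pem            # CA used to validate member certs
```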
[19:38:18] <amitprakash> Hi, I have a collection with documents of the format {'loc': <string>, 'date': <ISODate>, 'in': <boolean>}
[19:39:05] <amitprakash> I want to calculate (sum(date) where in = 0) - (sum(date) where in = 1) grouped by loc
[19:39:15] <amitprakash> What would be the best way to go about this?
[19:41:21] <danijoo> find all items where in == 0, iterate over them and sum maybe
[19:42:27] <amitprakash> danijoo, can't be done with aggregations?
[19:42:41] <danijoo> maybe. never used them :/
[19:42:59] <amitprakash> mr can do this but mr is quite slow
[19:49:41] <MacWinner> when using copyDatabase, can you copy from a secondary?
[19:50:00] <Havalx> argument syntax mongodb - http://pastebin.com/7JehaMWU
[19:53:51] <amitprakash> so I can use $add in mongo aggregation with date and a number.. any way to add two dates?
[19:54:01] <amitprakash> or alternately, to convert date to timestamp ?
[19:58:12] <deathanchor> getTime() ?
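deathanchor's getTime() works document-by-document, but amitprakash's whole computation can be done server-side: $subtract applied to two dates yields milliseconds, which $sum can then add up. A hedged sketch of such a pipeline as Python data (to be passed to aggregate(); the field names loc/date/in come from the log, while "signed_ts" is a name invented here):

```python
from datetime import datetime

EPOCH = datetime(1970, 1, 1)

pipeline = [
    {"$project": {
        "loc": 1,
        # Millis since epoch, negated when in == true, so that one
        # $sum computes sum(date where in=0) - sum(date where in=1):
        "signed_ts": {
            "$cond": [
                "$in",  # the boolean field named "in"
                {"$multiply": [-1, {"$subtract": ["$date", EPOCH]}]},
                {"$subtract": ["$date", EPOCH]},
            ]
        },
    }},
    {"$group": {"_id": "$loc", "diff": {"$sum": "$signed_ts"}}},
]
```

This stays in the aggregation framework rather than map-reduce, which addresses the "mr is quite slow" concern.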
[19:58:53] <EXetoC> > "hammer time"
[19:59:11] <deathanchor> http://bfy.tw/gM4
[20:00:30] <iszak> I am running some queries on mongodb in which it has roughly 90k documents / day and this seems to be very slow even for a few days of data with a range (date). is mongo no good for querying?
[20:01:42] <iszak> I am running MongoDB 2.4 with decent hardware
[20:02:32] <akoustik> iszak: are you using indexes for collections you're reading from heavily?
[20:02:56] <iszak> akoustik: there are some indexes, but not everything is indexed, no
[20:03:37] <EXetoC> got any regex conditions?
[20:03:52] <EXetoC> got enough RAM?
[20:03:54] <iszak> No, it's just a range + $match with $ne null
[20:04:05] <iszak> EXetoC: stack driver shows RAM is not maxing so I assume so?
[20:04:29] <akoustik> well, you'd want to be indexing on the date field, so one thing would be to make sure that you are doing so.
[20:04:57] <iszak> yeah I will check that but in general everything I have heard is mongo is rubbish for querying
[20:05:10] <akoustik> heheh, sometimes true.
[20:05:27] <iszak> if I upgrade to 3.x or 2.6 will this help?
[20:07:16] <EXetoC> why is that? does it have to parse field names over and over for example?
[20:07:28] <iszak> EXetoC: sorry? I don't understand
[20:08:39] <akoustik> iszak: regarding upgrading - getting to 3.0 gains you access to pluggable storage engines. i've not messed with them, but it looks like WiredTiger for example might emphasize high performance queries.
[20:09:03] <joannac> iszak: pick one query, and run it with .explain(true) at the end
[20:09:32] <iszak> joannac: I can't do it atm, it was at work, but I will remember this for tomorrow.
[20:09:44] <joannac> from what you said, it sounds like you don't have the right indexes. Upgrading to 2.6 or 3.0 is not going to fix missing indexes
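What joannac is pointing iszak at can be sketched concretely; every name below is made up for illustration, since the log never shows the actual schema:

```python
from datetime import datetime

# A compound index matching a date-range query that also filters on
# another field: equality/filter field first, range field last.
index_spec = [("status", 1), ("date", 1)]

query = {
    "status": {"$ne": None},  # note: $ne is hard for an index to help with
    "date": {"$gte": datetime(2015, 7, 1), "$lt": datetime(2015, 7, 6)},
}

# With pymongo this would be roughly:
#   coll.create_index(index_spec)
#   coll.find(query).explain()
# In a 2.4-era explain(), "cursor": "BasicCursor" plus a high nscanned
# means the query walked the whole collection; "BtreeCursor ..." means
# an index was used.
```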
[20:10:30] <iszak> it sounds like it, but is mongo generally good for querying?
[20:10:37] <cheeser> of course
[20:11:08] <iszak> even large volumes of data?
[20:11:11] <joannac> yes
[20:11:26] <joannac> unless you're talking multi-TB
[20:11:47] <joannac> at which point, your disk setup is probably more important than the database you try to use
[20:11:49] <EXetoC> akoustik: does that mean you can use hashing rather than comparing key contents for example?
[20:12:37] <akoustik> EXetoC: no clue, tbh. don't know how things really work under the hood with any storage engine.
[20:12:48] <akoustik> :\
[20:13:55] <EXetoC> I should totally implement a kickass storage engine in addition to a driver
[20:14:00] <EXetoC> my goal being fame and fortune
[20:14:19] <amitprakash> how do I update a field in all documents depending on values from another field in the same document?
[20:16:08] <iszak> EXetoC: you should implement pi-fs for mongo
[20:16:45] <EXetoC> obviously
[20:17:05] <Havalx> EXetoC: mhm.
[20:18:34] <Havalx> akoustik, akoustik, cheeser: http://pastebin.com/7JehaMWU
[20:19:42] <joannac> Havalx: is there a question in there?
[20:20:57] <joannac> do you actually have a connection to a database? Can you find the same document using the mongo shell?
[20:21:09] <Havalx> There is
[20:21:21] <Havalx> joannac: There is.
[20:21:35] <Havalx> joannac: Yes.
[20:21:59] <Havalx> !
[20:22:18] <cheeser> and the question is ... what?
[20:23:39] <joannac> I would like to see where you connect to the database, and also the same query in the mongo shell, and also a find() with no arguments in python
[20:24:07] <joannac> to prove that you do have a connection, the document exists, and you're connecting to the right server/db/collection in python
[20:25:10] <Havalx> akoustik, akoustik, cheeser: expression d does not return the expected result. {"x": 7} expressed as a string. find_one({"x": 7}) does return the document, but not find_one(d). A connection is active and the document has already been created.
[20:25:10] <cheeser> barring that, i'm guessing value type mismatches
[20:25:58] <joannac> does python default to string?
[20:26:53] <akoustik> Havalx: well... line 8 is looking for a property "d" of "db" - that property name doesn't have anything to do with the string you've assigned to d.
[20:28:21] <akoustik> am i misunderstanding something here...?
[20:29:32] <Havalx> akoustik: Thanks for noting such, I was concerned with that. What is the proper way of forming a connection? I would not expect there to be a python problem as variable expansion didn't seem off.
[20:31:40] <akoustik> Havalx: sorry buddy, haven't used the python driver. hopefully one of these folks can help. i gotta go - peace, all
[20:31:46] <Havalx> akoustik: I read a few chapters of documentation and online comments, and formed a client instance
[20:32:13] <Havalx> akoustik: I'll fix it. I'll let you know.
[20:36:11] <topwobble> How do you set the journal path in the mongo config file? I don't see an option for that (using the non-YAML config)
[20:37:54] <joannac> topwobble: ? don't think there's ever been a way to set a different path for the journal
[20:38:55] <topwobble> @joannac hmm, Where does it default to then?
[20:41:48] <joannac> dbpath/journal
[20:56:00] <topwobble> @joannac ok thanks. thats where I want it anyways ;)
[21:14:05] <ToMiles> to follow up on my earlier message about an ever expanding set of preallocated data files (>20x original db size) during database repair -> mongod --repair or db.repairDatabase(), same problem. or is that to be expected?
[21:25:51] <daidoji> ToMiles: expected
[21:25:57] <daidoji> at least according to the docs
[21:26:07] <daidoji> I was weirded out the same way the first time I did a repair on a large DB
[21:26:21] <daidoji> oh wait...
[21:26:27] <daidoji> I don't know about 20x original size though...
[21:34:24] <ToMiles> daidoji: I knew it needs some space, a little more than double, but my db consists of 7 datafiles taking up about 10GB total, while the _tmp folder has >100 files of 2GB each and it just keeps endlessly increasing (prob until I'll have to kill it again to avoid a full disk)
[21:35:30] <joannac> ToMiles: that's not normal. can you pastebin a ls -laR dbpath ?
[21:40:10] <ToMiles> http://pastebin.com/m6C3iJKe
[21:42:14] <ToMiles> the pastebin is what is running now, earlier I had to cancel the repair when it was filling up the disk, it had files up to ceilometer.133 and more in /var/lib/mongodb/_tmp_repairDatabase_0
[21:43:03] <ToMiles> if I cant get it working I'll prob just clear the collection and deal with a 2-week loss of metrics for my OpenStack cloud
[21:43:13] <joannac> ToMiles: what do the logs say?
[21:44:15] <joannac> also, what version?
[21:45:44] <ToMiles> http://pastebin.com/3ZVB8qLu
[21:46:14] <ToMiles> it just keeps printing new datafiles being created, nothing much else
[21:46:56] <joannac> ToMiles: further back than that - to the start of the repair if possible
[21:47:01] <ToMiles> on version db version v2.6.9
[21:48:52] <ToMiles> http://pastebin.com/7GL8yyZQ
[21:49:06] <joannac> also, maybe increase log verbosity for like, 2 mins? just to see what's actually happening
[21:49:31] <joannac> definitely weird though. are you reclaiming space or is there corruption in the original?
[21:51:12] <ToMiles> I suspected possible corruption since we had a RAID card failure, that has now been replaced, I wanted to make sure
[21:52:19] <joannac> oh. was this upgrade from 2.4?
[21:58:56] <ToMiles> no clean install
[22:06:15] <ToMiles> http://pastebin.com/kW4pDmqd
[22:07:21] <ToMiles> just that repeating for more and more data files, cant see anything out of the ordinary
[22:08:42] <joannac> cheers
[22:08:46] <joannac> yeah, kill your repair
[22:09:42] <ToMiles> yeah just did, 5th try wasn't gonna be the charm :-)
[22:09:53] <joannac> at this point, i think there's a loop in your extents list
[22:10:35] <ToMiles> where does the extents file get generated from?
[22:10:36] <joannac> still thinking... (i'm on vacation, i've paged my mongodb knowledge out :))
[22:11:00] <joannac> it gets written as extents are used
[22:12:51] <ToMiles> thanks for your assistance. good thing is its not super important data, openstack metrics data, we can afford to lose it if there's no alternative, but it still would be nice to figure out at least a general idea of what has been going wrong
[22:16:56] <joannac> ToMiles: do you have a support subscription?
[22:18:21] <ToMiles> no dont have one
[22:19:49] <joannac> 'kay. my prediction is next steps would involve looking at the files and data structures directly
[22:20:17] <joannac> which is something you would need a support subscription for
[22:20:44] <ToMiles> we just run mongo as the Ceilometer DB backend as part of the OpenStack toolset of our private cloud to collect metrics, so not sure at this juncture it's worth further time and effort if a start from clean is an option
[22:25:52] <ToMiles> thanks for your help tho and I'll remember the option of getting that support contract if further down the road the data collected in MongoDB becomes a critical part of our production setup
[22:26:18] <ToMiles> joannac: enjoy that vacation
[22:26:50] <joannac> np
[23:13:20] <joelkelly> Hey guys ... question about populate. If the DB was built without the populate ref ... can I still provide a ref for the schema?