[00:36:25] <a|3x> i'm having difficulties with the c++ driver
[00:36:47] <a|3x> there appears to be some kind of concurrency issue
[00:43:43] <a|3x> i have a bunch of threads running, each with their own ScopedDbConnection, but they still manage to step on each other (query returning results when there should be none)
[07:41:06] <xDHILEx> I just posted this on SO if anyone can lend a hand. http://stackoverflow.com/questions/31239617/twilio-account-verification-example-app-typeerror-on-server-restart
[09:44:55] <pamp> How can I connect through the .NET driver 2.0 to a mongodb database?
[09:45:33] <pamp> Am I missing something in my connection string? client = new MongoClient(@"mongodb://user:pwd@server:27018/admin")
[10:09:00] <CharlesFinley> Hi, why does this query return 0 results? db.getCollection('companies').find({_id: '559a4c7c18b5df6b032b353f'})
[10:09:54] <CharlesFinley> There is a record with that id
[10:11:36] <d-snp> CharlesFinley: does db['companies'].find({...}) work? or db.companies.find({..}) ?
[10:11:49] <d-snp> I've not seen db.getCollection before, though I suppose it should work
[10:11:58] <CharlesFinley> It does work - shows all companies.
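[editor's note: the likely cause is a type mismatch. Driver-generated `_id` values are ObjectIds, and an ObjectId never compares equal to its hex string, so the string query matches nothing. A hedged shell sketch (assuming the `_id` was created as an ObjectId; requires a live server):

```javascript
// String form -- matches nothing if the stored _id is an ObjectId:
db.getCollection('companies').find({_id: '559a4c7c18b5df6b032b353f'})
// ObjectId form -- matches the document:
db.getCollection('companies').find({_id: ObjectId('559a4c7c18b5df6b032b353f')})
```
]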
[12:46:25] <pschichtel> Hi! I'm interested in implementing a paginated list with a mongodb. the collection will have a lot of elements, so I thought about implementing this using seek-pagination. In MySQL I'd use an auto incrementing index field for this, what could I use with mongodb?
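[editor's note: one hedged sketch of seek (keyset) pagination: driver-generated ObjectIds are roughly monotonically increasing, so `_id` itself can play the role of MySQL's auto-increment column. The helper below only builds a query description; collection and function names are invented for illustration:

```javascript
// Seek pagination on _id: fetch a page, remember the last _id seen,
// then ask for everything greater than it. No skip(), so the cost of
// fetching page N stays flat instead of growing with N.
const PAGE_SIZE = 20;

function nextPageQuery(lastSeenId) {
  return {
    filter: lastSeenId ? { _id: { $gt: lastSeenId } } : {},
    sort: { _id: 1 },
    limit: PAGE_SIZE,
  };
}

// With a real driver or the shell this would be used roughly as:
//   const q = nextPageQuery(lastId);
//   db.items.find(q.filter).sort(q.sort).limit(q.limit)
```
]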
[14:22:05] <pamp> is it possible to shard a collection with the C# driver 2.0?
[14:49:21] <jubjub> If i do a query on {name: "john", age: 17}, does mongo perform the query on all docs and short circuit if the name isn't john? Do they check the query params in order?
[15:10:31] <Steve-W> Hi all. I'm using ruby to list collections in a Mongo 2.4 database, and it is only returning the first 101 items. Is there some way of enumerating them all?
[15:18:29] <jubjub> If i am querying to see if a point lies within a polygon which query is fast? $geoWithin or $geointersects?
[15:47:34] <na8flush> hey all, i'm not experienced enough with mongodb to know if this is a bad design pattern or not, so i'm hoping someone might be able to give me some feedback
[15:49:12] <na8flush> on our app, we store user details in an object called UserInfo. When that user authenticates, we generate a token for them, and that gets added as a property to the record. but here's the part i think might have negative performance - we save that object as an entirely new collection, UserInfoWithToken, duplicating all of the info from the UserInfo record
[16:34:08] <asturel> hi, after a while my mongodb server keep throwing 'DBClientCursor::init call() failed'
[16:36:23] <StephenLynx> so you don't have to worry about concurrency, you just reuse it until the application stops
[16:36:44] <symbol> Can someone give me an idea of what sort of issues I could run into with batch updates not being atomic? Does it have to do with interleaving?
[16:37:33] <StephenLynx> IMO, it could perform just some of the updates and then you wouldn't know which ones ran and which didn't?
[16:37:34] <asturel> i use nodejs mongodb native driver
[16:37:44] <StephenLynx> yeah, it gives you a connection pool.
[16:40:00] <StephenLynx> I have a module that holds the connection and open references to collections on boot.
[16:40:17] <StephenLynx> it ensures all indexes, gather all pointers to collections and then just serves these pointers.
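[editor's note: a hedged sketch of the pattern StephenLynx describes; names are invented, and the injected `client` stands in for what `MongoClient.connect()` hands back, so the sketch can be exercised without a live mongod:

```javascript
// One module owns the connection: open it once at boot, ensure indexes,
// cache the collection handles, and hand those handles out everywhere
// else -- the driver's pool deals with concurrency underneath.
const cached = {};

async function boot(client, dbName, indexSpecs) {
  const db = client.db(dbName);
  for (const [name, keys] of Object.entries(indexSpecs)) {
    cached[name] = db.collection(name);
    if (keys) await cached[name].createIndex(keys); // ensure index at boot
  }
  return cached;
}

function getCollection(name) {
  if (!cached[name]) throw new Error('collection not opened at boot: ' + name);
  return cached[name];
}

module.exports = { boot, getCollection };
```
]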
[17:59:37] <ToMiles> Any reason a db.repairDatabase of a 10GB db would be allocating and filling with zeroes more than 100 (and counting) data files in the _tmp folder during this process?
[18:00:01] <ToMiles> this has already taken up > 200GB of disk space
[18:06:41] <ToMiles> so something has gone wrong then probably :-)
[18:07:14] <ToMiles> it's a ceilometer (OpenStack) database the repair is running on
[18:24:04] <danijoo> i deleted a ~ 70gb collection from a database. the server is pretty low on disk (~80% in use, all by mongo). I know that i cant reclaim that 70gb without doing a database repair (which i cant do because not enough space on disk).
[18:24:45] <danijoo> will the other collections from the same db start to use that 70gb or are they really "dead space" now
[18:26:18] <StephenLynx> afaik, mongo will reuse the space.
[18:27:28] <danijoo> too bad that you cant repair without having double the size free :/
[18:28:08] <cheeser> iirc, WT is better about that.
[18:30:35] <ToMiles> or in my case 20x the original size :-)
[18:31:04] <ToMiles> just killed it before it filled up my whole disk with useless zero filled data files
[18:51:37] <coderman1> will mongo use multiple processors for a single query on the same node?
[18:53:41] <MacWinner> hi, i currently have a mongo replica set on a 3-node dedicated cluster.. i'm thinking that I may just want to outsource this and use a mongolab type of service. I don't mind paying a premium for it, but don't want to get gouged either. Any recommendations on this? I'm also open to maybe a nice management tool that lets me deploy and scale out the cluster onto aws or google cloud
[18:55:19] <MacWinner> oh.. interesting.. will check it out!
[18:58:14] <MacWinner> does mongoshell use spidermonkey or v8?
[18:58:40] <cheeser> spidermonkey, iirc, though the kernel team is playing with moving to v8
[18:59:21] <akoustik> cheeser, MacWinner: might be the other way around. http://stackoverflow.com/questions/8331099/what-is-the-javascript-engine-that-runs-mongodb-shell *shrug*
[19:00:45] <MacWinner> akoustik, looks like v8 became default in 2.4.. http://docs.mongodb.org/manual/core/server-side-javascript/
[19:10:07] <akoustik> anyone familiar with running a replica set with SSL? i think what i want is 1) all communication between replica set members should be via SSL, and 2) client connections should be non-SSL. looks like all i need is to run with net.ssl.mode == "preferSSL" and provide my key files. am i missing anything obvious?
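[editor's note: that is essentially right. A minimal YAML sketch of the setup (assumes the 2.6+/3.0 YAML config format; file paths are placeholders): `preferSSL` makes member-to-member traffic use SSL while still accepting non-SSL client connections.

```yaml
net:
  ssl:
    mode: preferSSL                  # members talk SSL; clients may connect without it
    PEMKeyFile: /etc/ssl/mongodb.pem # placeholder path to the server cert+key
    CAFile: /etc/ssl/ca.pem          # placeholder path to the CA cert
```
]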
[19:38:18] <amitprakash> Hi, I have a collection with documents of the format {'loc': <string>, 'date': <ISODate>, 'in': <boolean>}
[19:39:05] <amitprakash> I want to calculate (sum(date) where in = 0) - (sum(date) where in = 1) grouped by loc
[19:39:15] <amitprakash> What would be the best way to go about this?
[19:41:21] <danijoo> find all items where in == 0, iterate over them and sum maybe
[19:42:27] <amitprakash> danijoo, can't be done with aggregations?
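[editor's note: it can be done in one aggregation pass. Dates cannot be summed directly, but subtracting the epoch turns each date into milliseconds, and `$cond` routes the value into one of two per-loc sums. A hedged sketch; the output field names and collection name are invented:

```javascript
const epoch = new Date(0);
const pipeline = [
  { $group: {
      _id: '$loc',
      // in == false -> add the date (as ms since epoch) to outMs
      outMs: { $sum: { $cond: ['$in', 0, { $subtract: ['$date', epoch] }] } },
      // in == true  -> add it to inMs
      inMs:  { $sum: { $cond: ['$in', { $subtract: ['$date', epoch] }, 0] } },
  } },
  // sum(date, in=0) - sum(date, in=1), per loc
  { $project: { diffMs: { $subtract: ['$outMs', '$inMs'] } } },
];
// run as: db.getCollection('events').aggregate(pipeline)
```
]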
[20:00:30] <iszak> I am running some queries on mongodb where it has roughly 90k documents / day, and this seems very slow even for a few days of data with a (date) range. is mongo no good for querying?
[20:01:42] <iszak> I am running MongoDB 2.4 with decent hardware
[20:02:32] <akoustik> iszak: are you using indexes for collections you're reading from heavily?
[20:02:56] <iszak> akoustik: there are some indexes, but not everything is indexed, no
[20:05:27] <iszak> if I upgrade to 3.x or 2.6 will this help?
[20:07:16] <EXetoC> why is that? does it have to parse field names over and over for example?
[20:07:28] <iszak> EXetoC: sorry? I don't understand
[20:08:39] <akoustik> iszak: regarding upgrading - getting to 3.0 gains you access to pluggable storage engines. i've not messed with them, but it looks like WiredTiger for example might emphasize high performance queries.
[20:09:03] <joannac> iszak: pick one query, and run it with .explain(true) at the end
[20:09:32] <iszak> joannac: I can't do it atm, it was at work, but I will remember this for tomorrow.
[20:09:44] <joannac> from what you said, it sounds like you don't have the right indexes. Upgrading to 2.6 or 3.0 is not going to fix missing indexes
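[editor's note: a hedged shell sketch of joannac's suggestion, with invented collection and field names (requires a live server; on 2.4 the method is `ensureIndex`, renamed `createIndex` in 2.6+):

```javascript
// Hypothetical slow query filtered on a date range:
db.events.find({date: {$gte: ISODate('2015-07-01'), $lt: ISODate('2015-07-04')}}).explain(true)
// In 2.4 explain output, "cursor" : "BasicCursor" means a full
// collection scan. Adding an index on the range field usually fixes it:
db.events.ensureIndex({date: 1})
```
]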
[20:10:30] <iszak> it sounds like it, but is mongo generally good for querying?
[20:22:18] <cheeser> and the question is ... what?
[20:23:39] <joannac> I would like to see where you connect to the database, and also the same query in the mongo shell, and also a find() with no arguments in python
[20:24:07] <joannac> to prove that you do have a connection, the document exists, and you're connecting to the right server/db/collection in python
[20:25:10] <Havalx> akoustik, cheeser: expression d does not return the expected result. d is {"x": 7} expressed as a string. find_one({"x": 7}) does return the document, but find_one(d) does not. A connection is active and the document has already been created.
[20:25:10] <cheeser> barring that, i'm guessing value type mismatches
[20:25:58] <joannac> does python default to string?
[20:26:53] <akoustik> Havalx: well... line 8 is looking for a property "d" of "db" - that property name doesn't have anything to do with the string you've assigned to d.
[20:28:21] <akoustik> am i misunderstanding something here...?
[20:29:32] <Havalx> akoustik: Thanks for noticing that, I was concerned about that. What is the proper way of forming the query? I would not expect there to be a python problem, as variable expansion didn't seem off.
[20:31:40] <akoustik> Havalx: sorry buddy, haven't used the python driver. hopefully one of these folks can help. i gotta go - peace, all
[20:31:46] <Havalx> akoustik: I read a few chapters of documentation and online comments, and formed a client instance
[20:32:13] <Havalx> akoustik: I'll fix it. I'll let you know.
[20:36:11] <topwobble> How do you set the journal path in the mongo config file? I don't see an option for that (using the non-YAML config)
[20:37:54] <joannac> topwobble: ? don't think there's ever been a way to set a different path for the journal
[20:38:55] <topwobble> @joannac hmm, Where does it default to then?
[20:56:00] <topwobble> @joannac ok thanks. thats where I want it anyways ;)
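[editor's note: for the record, the journal always lives in a `journal/` directory under the dbpath; there is no config option to relocate it. When it is wanted on another disk, a common workaround is a symlink set up while mongod is stopped. A sketch using throwaway directories as stand-ins for the real paths:

```shell
# Throwaway dirs stand in for the real dbpath and the other disk.
DBPATH=$(mktemp -d)
FASTDISK=$(mktemp -d)

# With mongod STOPPED: move the journal dir, then symlink it back.
mkdir -p "$DBPATH/journal"
mv "$DBPATH/journal" "$FASTDISK/journal"
ln -s "$FASTDISK/journal" "$DBPATH/journal"

ls -l "$DBPATH"   # journal now points at the other disk
```
]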
[21:14:05] <ToMiles> to follow up on my earlier message about an ever expanding set of preallocated data files (>20x original db size) during database repair -> mongod --repair or db.repairDatabase() same problem, or is that to be expected?
[21:26:27] <daidoji> I don't know about 20x original size though...
[21:34:24] <ToMiles> daidoji: I knew it needs a little more space than double, but my db consists of 7 datafiles taking up about 10GB total, while the _tmp folder has >100 files of 2GB each and it just keeps endlessly increasing (prob until I'll have to kill it again to avoid a full disk)
[21:35:30] <joannac> ToMiles: that's not normal. can you pastebin a ls -laR dbpath ?
[21:42:14] <ToMiles> the pastebin is what is running now, earlier I had to cancel the repair when it was filling up the disk, it had files up to ceilometer.133 and more in /var/lib/mongodb/_tmp_repairDatabase_0
[21:43:03] <ToMiles> if I cant get it working I'll prob just clear the collection and deal with a 2 week loss of metrics for my OpenStack cloud
[21:43:13] <joannac> ToMiles: what do the logs say?
[22:09:42] <ToMiles> yeah just did, 5th try wasn't gonna be the charm :-)
[22:09:53] <joannac> at this point, i think there's a loop in your extents list
[22:10:35] <ToMiles> where does the extents file get generated from?
[22:10:36] <joannac> still thinking... (i'm on vacation, i've paged my mongodb knowledge out :))
[22:11:00] <joannac> it gets written as extents are used
[22:12:51] <ToMiles> thanks for your assistance. good thing is its not super important data (openstack metrics), we can afford to lose it if there's no alternative, but it would still be nice to figure out at least a general idea of what has been going wrong
[22:16:56] <joannac> ToMiles: do you have a support subscription?
[22:19:49] <joannac> 'kay. my prediction is next steps would involve looking at the files and data structures directly
[22:20:17] <joannac> which is something you would need a support subscription for
[22:20:44] <ToMiles> we just run mongo as the Ceilometer DB backend, part of the OpenStack toolset of our private cloud for collecting metrics, so not sure at this juncture it's worth further time and effort if a start from clean is an option
[22:25:52] <ToMiles> thanks for your help tho, and I'll remember the option of getting that support contract if further down the road the data collected in MongoDB becomes a critical part of our production setup
[23:13:20] <joelkelly> Hey guys ... question about populate. If the DB was built without the populate ref ... can I still provide a ref for the schema?