[06:18:34] <CodePulsar> Is there a Debian package for the MongoDB C++ driver?
[06:55:18] <CodePulsar> : error: ‘getScopedDbConnection’ is not a member of ‘mongo::ScopedDbConnection’
[06:55:28] <CodePulsar> Why did they have to change the API in 26compat :(
[06:58:21] <CodePulsar> Where the heck is the definition of ScopedDbConnection https://github.com/mongodb/mongo-cxx-driver/search?q=ScopedDbConnection&ref=cmdform ?
[07:13:38] <jianing> i have a question about sharded clusters. Does a chunk really store its documents in sequential disk space?
[07:39:56] <aydintd> Hello all, I have 2 sharded replica sets, each containing 1 primary and 2 secondaries. One of them holds about 270GB of data, and I'm trying to keep replication up, but my replicas keep breaking. The oplog size on the primary was misconfigured (it's only 700MB), but I can't change it because it's in production. I want to do health checks more efficiently. What do you guys suggest?
[07:40:48] <joannac> why do your replicas keep breaking?
[07:44:40] <rspijker> also… why can’t you change the oplog size on production?
[07:44:53] <rspijker> You have a 3 member RS, you can just do rolling maintenance
[07:45:53] <rspijker> Finally, what does “I want to do health checks more efficiently” mean? Replication isn’t for health checks, nor do I see anything else in your question which refers to efficiency or the lack thereof.
[07:46:29] <aydintd> I'm a newbie with MongoDB, actually. I can't change it because new data is coming in continuously, and I don't have any backups because replication is broken. I can't restart the primary node
[07:46:48] <rspijker> then let your secondaries catch up first and then do it...
[07:47:22] <aydintd> my primary's oplog was set to 700MB before I got here. I changed my replicas' oplog size to 12GB but it didn't work out.
[07:47:41] <aydintd> they can't catch up with the primary, that's the main problem actually
[07:48:23] <aydintd> after I start the replica mongod, it spends 2 or 3 days in the STARTUP2 phase
[07:48:41] <rspijker> and that’s too long for your oplog to cover, sure
[07:49:17] <rspijker> can you simply copy over the data between pri and sec?
[07:49:44] <rspijker> as in, simply copy over the files
[07:50:12] <rspijker> 270GB should take less than an hour on gbit
[07:50:55] <aydintd> yeah you're right, but it's not my company's mongo; it belongs to the company I work for
[07:51:10] <aydintd> and they don't allow me to touch their primary data, unfortunately. :(
[07:51:52] <aydintd> I'm just looking for some clues about what causes this situation.
[07:51:57] <rspijker> well, then you’re out of luck
[07:52:24] <rspijker> what causes the situation is probably that in the 2 days it needs to catch up, more than 700MB worth of oplog traffic is recorded on the primary
[07:52:32] <rspijker> so the secondary goes into a state where it is stale
[07:52:39] <rspijker> it’s farther behind than the oplog covers
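A minimal shell sketch of how to check whether the oplog window really is the problem, and how it can be fixed. The replSetResizeOplog command only exists on MongoDB 3.6 and later; on older servers the oplog can only be resized with the rolling-restart procedure from the docs. The 16000MB figure below is only an illustrative value:

    // On the primary: how much time does the 700MB oplog actually cover?
    db.printReplicationInfo()          // "log length start to end" is the replication window

    // How far behind the primary is each secondary?
    rs.printSlaveReplicationInfo()

    // MongoDB 3.6+ only: resize the oplog live, without a restart (size is in MB)
    db.adminCommand({ replSetResizeOplog: 1, size: 16000 })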
[07:53:08] <aydintd> I also run Nagios on the mongo replicas and I'm getting "service check timed out" notifications on the Mongo Memory Usage check
[08:07:17] <CodePulsar> wtf is going on here: http://paste.kde.org/pe7ajilcx/dpkeer ? at main:1468 I have scoped_ptr<ScopedDbConnection> dbConnection(new ScopedDbConnection("localhost"))
[12:23:13] <remonvv> Yeah, thanks Derick. Couldn't have done this without you.
[12:41:07] <asteve> can you store historically relational models like user in mongo; follow-up question: is that a smart idea?
[12:52:34] <remonvv> asteve: Can you be a bit more specific? You mean moving a relational user database to MongoDB?
[12:53:31] <asteve> well, I'm fairly familiar with other nosql's and I've always historically put a "user" model in an rdbms; I'm working on a new project and I would like to try out mongo
[12:54:35] <asteve> or rephrase it to: something that needs two-way lookup, both on some key like email address and on id
[12:54:50] <kali> asteve: i know SQL databases quite well, having worked with many of them for the best (or the worst) part of 10 years. today there is no task or use case i've encountered for which i would not consider using mongodb first
[13:07:55] <asteve> if I create a document that is {name: "asteve", email: "asteve@asteve.asteve", password: "cleartext"}, can I find the record by any of those fields?
[13:08:12] <asteve> if so, is there a performance cost to some fields over the others?
[13:08:25] <asteve> I'm assuming accessing by the _id is the fastest?
[13:10:59] <kali> asteve: you need indexes on the fields or field combinations you'll use for querying. just like in SQL
[13:11:18] <kali> asteve: there's a "free" index on _id
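A minimal shell sketch of what kali is describing, using the user document from asteve's example (the collection name is made up):

    // _id gets a unique index automatically; every other lookup field needs its own
    db.users.insert({ name: "asteve", email: "asteve@asteve.asteve", password: "cleartext" })

    // index the fields you will query on; unique makes sense for a login key like email
    db.users.ensureIndex({ email: 1 }, { unique: true })
    db.users.ensureIndex({ name: 1 })

    db.users.find({ email: "asteve@asteve.asteve" })   // uses the email index
    db.users.find({ name: "asteve" })                  // uses the name index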
[13:12:41] <remonvv> It's pretty much just like a database
[13:21:52] <asteve> what is the purpose of the "1" in this example: db.people.ensureIndex( { "address.zipcode": 1 } )?
[13:36:23] <remonvv> asteve: It makes it valid JSON in that specific example; for compound indexes (e.g. {a:1, b:-1}), 1 or -1 specifies the sort direction (ascending or descending) of each field in the index.
[13:37:01] <remonvv> You'll find some of that in the query language
[13:37:59] <remonvv> There are a number of operators that take any value without that value having any real significance
[13:38:24] <remonvv> e.g. $unset: {<field>: <doesn't matter what's here, really>}
[13:38:41] <remonvv> There are arguments for and against that approach.
[13:39:08] <remonvv> (in that example it could have been {$unset:[<field>]})
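Putting remonvv's two points together in the shell (field and collection names follow the earlier examples):

    // compound index: ascending on zipcode, descending on street
    db.people.ensureIndex({ "address.zipcode": 1, "address.street": -1 })

    // $unset: only the key matters; the value on the right-hand side is ignored
    db.people.update({ name: "asteve" }, { $unset: { "address.zipcode": "" } })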
[14:59:22] <cozby> hi, I'm inserting a lot of records from a nodejs app into mongo, and after exactly 60 seconds I start getting timeout errors. Is there a config setting for this on the mongo side that I should be aware of? Any help would be appreciated; I'm at my wit's end on this.
[15:10:02] <remonvv> Can you show us the specific error? Inserting a lot of documents should not result in a timeout.
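If it turns out to be a driver-side socket timeout rather than anything on the mongod side, it can be raised through the connection string; a sketch for the node.js driver of that era, where the URI and the timeout values are only illustrative:

    var MongoClient = require('mongodb').MongoClient;

    // socketTimeoutMS / connectTimeoutMS govern how long the driver waits on a socket;
    // raising them helps rule out a client-side timeout while debugging
    var uri = 'mongodb://localhost:27017/test?socketTimeoutMS=300000&connectTimeoutMS=30000';

    MongoClient.connect(uri, function (err, db) {
      if (err) throw err;
      // ... perform the bulk inserts here ...
      db.close();
    });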
[15:13:18] <mikebronner> looking for a way to combine multiple aggregate queries into one result set, so I don't have to make multiple calls to the database (to reduce overhead): http://stackoverflow.com/questions/25040792/return-limited-number-of-records-of-a-certain-type-but-unlimited-number-of-othe
[17:50:54] <doghugger> hey everyone, I'm having some issues with the ruby mongo driver. does anyone know if it supports $exists? I've looked at http://docs.mongodb.org/manual/reference/operator/query/exists/ but haven't been able to implement it in ruby
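The selector itself is the same document the shell uses; in shell form (collection and field names are made up):

    // matches only documents where the "email" field is present at all
    db.users.find({ email: { $exists: true } })

In the ruby driver the same selector is written as a hash, e.g. {'email' => {'$exists' => true}}, and passed to find.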
[18:19:51] <defunctzombie> When using the $match piece of the aggregation pipeline, how would I specify that I want to match on documents with a created_at field less than some value?
[18:20:04] <defunctzombie> I have been trying '$lte' and then the date string
[18:20:10] <defunctzombie> but it doesn't seem to be working
[18:20:20] <defunctzombie> the same query does work in a regular .find()
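One common cause: the aggregation pipeline never casts strings to dates, so if created_at is stored as a BSON date the $match operand has to be a Date object as well. A minimal sketch, with a made-up collection name and cutoff value:

    // compares the stored BSON date against a real Date object
    db.events.aggregate([
      { $match: { created_at: { $lte: new Date("2014-07-30T00:00:00Z") } } }
    ])

    // this form silently matches nothing when created_at is a BSON date,
    // because a string never compares as <= a date:
    // { $match: { created_at: { $lte: "2014-07-30T00:00:00Z" } } }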
[18:50:04] <sellout> Are there any multi-collection sample datasets for Mongo?
[19:23:07] <kali> mozzie: you have to do everything by hand, and the engine won't do anything for you
[19:23:26] <uf6667> but a simple relation, that's just a foreign key, no?
[19:25:07] <kali> mozzie: yeah, but that doesn't exist in mongo. you can materialize a link equivalent to an FK, but mongo won't validate it (by checking that the destination exists) and won't let you run queries spanning both collections at the same time
[19:25:51] <uf6667> ah no, you have to do it manually
[19:25:59] <uf6667> but it's not too difficult, is it? :P
[20:50:48] <Es0teric> saml: ok, let me explain the process I am using right now
[20:51:55] <Es0teric> first I have Moloquent, which I call in the database seeder file I created… along with Guzzle, which is an HTTP package that lets me fetch API data… I then insert that data into mongo using Moloquent
[23:00:18] <klevison> for a geolocation-based application, is MongoDB the best choice? (5 million requests per day, 600GB of data, currently in SQL Server)
[23:48:22] <Kettlecooked> Looking at Trello as an example, their unique IDs might look something like 'q7nkmEqZ' - much tidier than a MongoDB ObjectId. Where would I start if I'd like to incorporate unique IDs that look more Trello-style?
[23:50:29] <Kettlecooked> I suppose I just make the _id type a String, and for every new document I generate an ID and check that it's not already in use? If it's taken, generate a new one and repeat until satisfied?
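A minimal shell sketch of exactly that approach; the alphabet, id length, and collection name are arbitrary, and the retry loop leans on the unique index that _id always has (assuming the 2.6+ shell's WriteResult API):

    // generate a short, Trello-ish id from a base62 alphabet
    function shortId(len) {
      var chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
      var id = "";
      for (var i = 0; i < len; i++) {
        id += chars.charAt(Math.floor(Math.random() * chars.length));
      }
      return id;
    }

    // insert with a generated _id, retrying only on a duplicate-key collision
    function insertWithShortId(doc) {
      while (true) {
        doc._id = shortId(8);
        var res = db.cards.insert(doc);
        if (!res.hasWriteError()) return doc._id;
        if (res.getWriteError().code !== 11000) throw res.getWriteError(); // 11000 = duplicate key
      }
    }

    insertWithShortId({ title: "my first card" })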