PMXBOT Log file Viewer

#mongodb logs for Friday the 1st of August, 2014

[06:18:34] <CodePulsar> Is there a package of MongoDB C++ Driver in Debian ?
[06:55:18] <CodePulsar> : error: ‘getScopedDbConnection’ is not a member of ‘mongo::ScopedDbConnection’
[06:55:28] <CodePulsar> Why did they have to change the API in 26compat :(
[06:58:21] <CodePulsar> Where the heck is the definition of ScopedDbConnection https://github.com/mongodb/mongo-cxx-driver/search?q=ScopedDbConnection&ref=cmdform ?
[07:13:38] <jianing> i have a question about sharded clusters. Does a chunk really store its documents in sequential disk space?
[07:39:56] <aydintd> Hello all guys, i have 2 replica shards contains 1 primary and 2 secondary mongos in it. In one of them i've about 270GB data and trying to keep my replication up but my replicas' always going to break. There was a misconfiguration on primary mongo about opLog size (its only 700MB) but i can't change it because of production. But i want to do a health check more efficient, what u guys suggest?
[07:40:48] <joannac> why is your replica always going to break?
[07:43:50] <joannac> aydintd: ^^
[07:44:40] <rspijker> also… why can’t you change the oplog size on production?
[07:44:53] <rspijker> You have a 3 member RS, you can just do rolling maintenance
[07:45:53] <rspijker> Finally, what does “But I want to do a health check more efficient” mean? Replication isn’t for health checks, nor do I see anything else in your question which refers to efficiency or the lack thereof.
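
For reference, the rolling maintenance rspijker mentions is roughly the oplog-resize procedure from the MongoDB docs of that era: take one member at a time out of the set (stepping the primary down when its turn comes), restart it standalone, and rebuild its oplog. A condensed, illustrative mongo shell sketch, with the 12GB target size as an example value only:

    // 1. shut the member down and restart it standalone (no --replSet), e.g. on another port
    // 2. against that standalone mongod, in the mongo shell:
    var local = db.getSiblingDB("local")
    var oplog = local.getCollection("oplog.rs")
    // keep the newest oplog entry so replication can resume from the right point
    local.temp.save(oplog.find({}, {ts: 1, h: 1}).sort({$natural: -1}).limit(1).next())
    oplog.drop()
    // recreate the oplog as a 12GB capped collection and restore the saved entry
    local.createCollection("oplog.rs", {capped: true, size: 12 * 1024 * 1024 * 1024})
    local.getCollection("oplog.rs").save(local.temp.findOne())
    // 3. restart the member with its usual --replSet options and let it rejoin
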
[07:46:29] <aydintd> i'm a newbie on mongodb actually. I can't change it because new data is coming in continuously and i don't have any backups because of the broken replication. I can't restart the primary node
[07:46:48] <rspijker> then let your secondaries catch up first and then do it...
[07:47:22] <aydintd> my primary's oplog was set to 700mb before my time, i changed my replicas' oplog size to 12gb but it didn't work out.
[07:47:41] <aydintd> they can't catch up with the primary, that's the main problem actually
[07:48:23] <aydintd> after i start the replica's mongo, it takes 2 or 3 days in the startup2 phase
[07:48:41] <rspijker> and that’s too long for your oplog to cover, sure
[07:48:48] <aydintd> then i see vetoed syncs..
[07:48:51] <aydintd> on logs
[07:49:17] <rspijker> can you simply copy over the data between pri and sec?
[07:49:44] <rspijker> as in, simply copy over the files
[07:50:12] <rspijker> 270GB should take less than an hour on gbit
[07:50:55] <aydintd> yeah you're right but it's not my company's mongo, it belongs to the company i work for
[07:51:10] <aydintd> and they dont allow me to touch their primary data unfortunately. :(
[07:51:52] <aydintd> i'm just looking for some clues what causes this situation.
[07:51:57] <rspijker> well, then you’re out of luck
[07:52:24] <rspijker> what causes the situation is probably that in the 2 days it needs to catch up, more than 700MB worth of oplog traffic is recorded on the primary
[07:52:32] <rspijker> so the secondary goes into a state where it is stale
[07:52:39] <rspijker> it’s farther behind than the oplog covers
[07:53:08] <aydintd> i also run nagios on the mongo replicas and i'm getting service check timed out notifications on Mongo Memory Usage
[07:53:37] <aydintd> MongoDB Updates per Second
[07:53:49] <aydintd> and Mongo Lock Percentage
[07:54:32] <aydintd> and when i check sh.status() on the replicas, it really runs into a timeout.
[07:55:39] <rspijker> sh.status() on replicas makes little sense
[07:55:44] <rspijker> since that is a mongos command
[07:56:06] <rspijker> nagios checks timing out could have a bunch of reasons: config issues, host overloaded, network down, etc.
[07:57:31] <aydintd> i also see that while my primary's oplog date shows today, my replicas show 2 days ago, from when they broke.
[07:57:52] <aydintd> is there any specific thing to check that could cause this problem, other than the opLog?
[08:00:31] <joannac> nope, not enough oplog
[08:00:54] <joannac> what does db.printReplicationInfo() on your primary show?
[08:01:05] <joannac> pastebin it pls
[08:06:27] <aydintd> ok just a second joannac
[08:07:17] <CodePulsar> wtf is going on here: http://paste.kde.org/pe7ajilcx/dpkeer ? at main:1468 I have scoped_ptr<ScopedDbConnection> dbConnection(new ScopedDbConnection("localhost"))
[08:08:20] <aydintd> http://pastebin.com/CnA6dbv0
[08:13:03] <aydintd> joannac ^^
[08:21:58] <aydintd> it says 17 hours, that's the window the opLog covers when it's set to 700MB, isn't it?
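
For context on that number: db.printReplicationInfo(), run on the primary, reports the configured oplog size and the time window between the first and last oplog entries; that window is how far behind a secondary can fall before it goes stale and needs a full resync. The figures below are illustrative, not taken from the pastebin:

    db.printReplicationInfo()
    // configured oplog size:   700MB
    // log length start to end: 61200secs (17hrs)
    // => a secondary that falls more than ~17 hours behind can no longer catch up from the oplog
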
[08:22:08] <ruairi> [A[A[A[A
[08:22:49] <ruairi> Oops
[10:37:11] <hyperboreean> Hey, can I use $pull with $set in the same update (on different keys, not the same ones) ?
[10:42:41] <Derick> hyperboreean: sure
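
A minimal shell sketch of that combination; the collection and field names are invented here, the point is only that $set and $pull touch different keys, so they can share one update document:

    db.items.update(
      { _id: 1 },
      { $set: { status: "archived" }, $pull: { tags: "temp" } }
    )
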
[11:21:12] <remonvv> \o
[11:21:18] <Derick> hi hi
[11:21:35] <rspijker> hey man
[11:21:53] <remonvv> Good afternoon people of the internet. It is Friday. I like me some Friday.
[12:15:42] <Industrial> Hi.
[12:15:53] <Industrial> Say I have a recordings: {raw: {available: ]
[12:16:01] <Industrial> Say I have a recordings: {raw: {available: []}}
[12:16:14] <Industrial> How do I add an entry using an update query?
[12:16:19] <Industrial> to the available array
[12:16:23] <kali> $push
[12:18:54] <remonvv> Specifically db.col.update({..}, {$push: {'raw.available' : <your new item>}})
[12:19:15] <remonvv> http://docs.mongodb.org/manual/reference/operator/update/push/
[12:19:30] <kali> and now i feel bad
[12:19:56] <Industrial> ah, right
[12:20:25] <remonvv> kali: As you should, perhaps you should go stand in a corner and reflect on your laziness.
[12:21:55] <hyperboreean> Derick: thanks!
[12:22:15] <Derick> ?
[12:23:13] <remonvv> Yeah, thanks Derick. Couldn't have done this without you.
[12:41:07] <asteve> can you store historically relational models like user in mongo; follow-up question: is that a smart idea?
[12:52:34] <remonvv> asteve: Can you be a bit more specific. You mean moving a relational user database to MongoDB?
[12:53:31] <asteve> well, I'm fairly familiar with other nosql's and I've always historically put a "user" model in an rdbms; I'm working on a new project and I would like to try out mongo
[12:54:35] <asteve> or rephrase it to something that needs two way lookup, both on some key like email address and on id
[12:54:50] <kali> asteve: i know SQL databases quite well, having worked with many of them for the better (or worse) part of 10 years. today there is no task or use case i've encountered for which i would not consider using mongodb first
[12:55:19] <asteve> that's helpful, thanks
[13:01:26] <remonvv> kali is a smart man
[13:05:59] <asteve> are all fields in mongo searchable?
[13:07:00] <kali> asteve: what do you mean ?
[13:07:55] <asteve> if I create a document that is {name: "asteve", email: "asteve@asteve.asteve", password: "cleartext"}, can I find the record by any of those fields?
[13:08:12] <asteve> if so, is there a performance cost to some fields over the other?
[13:08:25] <asteve> I'm assuming accessing by the _id is the fastest?
[13:10:59] <kali> asteve: you need indexes on fields or fields combination you'll use for querying. just like in SQL
[13:11:14] <asteve> ah, ok, thanks
[13:11:18] <kali> asteve: there's a "free" index on _id
[13:12:41] <remonvv> It's pretty much just like a database
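
To make kali's point concrete, a small shell sketch using asteve's example fields (the collection name is an assumption):

    // _id gets a unique index automatically; any other field you query on needs its own
    db.users.ensureIndex({ email: 1 }, { unique: true })
    // this lookup can now use the email index instead of scanning the whole collection
    db.users.find({ email: "asteve@asteve.asteve" })
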
[13:21:52] <asteve> what is the purpose of the "1" in this example: db.people.ensureIndex( { "address.zipcode": 1 } )?
[13:36:23] <remonvv> asteve: It makes it valid JSON in that specific example; for compound indexes (e.g. {a:1, b:-1}) 1 or -1 specifies the sort order relative to the earlier index fields.
[13:37:01] <remonvv> You'll find some of that in the query language
[13:37:59] <remonvv> There are a number of operators that take any value without that value having any real significance
[13:38:24] <remonvv> e.g. $unset:{<field>:<doesn'tmatterwhat'sherereally>}
[13:38:41] <remonvv> There's arguments for and against that approach.
[13:39:08] <remonvv> (in that example it could have been {$unset:[<field>]})
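
A shell sketch of the two points above, reusing remonvv's placeholder names; the field names are illustrative:

    // single-field index: the 1 just means ascending, and on its own the direction hardly matters
    db.people.ensureIndex({ "address.zipcode": 1 })
    // compound index: the signs matter relative to each other (ascending a, descending b)
    db.people.ensureIndex({ a: 1, b: -1 })
    // $unset ignores the value entirely; the empty string is only a convention
    db.people.update({ name: "example" }, { $unset: { obsoleteField: "" } })
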
[14:59:22] <cozby> hi, I'm inserting a lot of records from a nodejs app to mongo and after exactly 60 seconds I start getting timeout errors. Is there a config setting for this on the mongo side that I should be aware of ? Any help would be appreciated, I'm at my wits end on this.
[15:10:02] <remonvv> Can you show us the specific error? Inserting a lot of documents should not result in a timeout.
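
The specific error never got posted, but if the 60-second cutoff turns out to be client-side rather than anything in mongod, one knob worth ruling out is the Node driver's socket timeout. A minimal sketch, assuming the stock mongodb driver; the URI, database name and values are placeholders:

    var MongoClient = require('mongodb').MongoClient;
    // socketTimeoutMS / connectTimeoutMS are connection-string options understood by the driver
    var uri = 'mongodb://localhost:27017/mydb?socketTimeoutMS=120000&connectTimeoutMS=30000';
    MongoClient.connect(uri, function (err, db) {
      if (err) throw err;
      db.collection('records').insert({ a: 1 }, function (err, result) {
        if (err) throw err;
        db.close();
      });
    });
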
[15:13:18] <mikebronner> looking for a way to combine multiple aggregate queries into one resultset, so I don't have to make multiple calls to the database (to reduce overhead): http://stackoverflow.com/questions/25040792/return-limited-number-of-records-of-a-certain-type-but-unlimited-number-of-othe
[17:50:54] <doghugger> hey everyone, I'm having some issues with the ruby mongo driver. does anyone know if it supports exist? I've looked at http://docs.mongodb.org/manual/reference/operator/query/exists/ but not been able to implement it in ruby
[18:19:51] <defunctzombie> When using the $match piece of the aggregation pipeline, how would I specify that i want to match on documents with a created_at field less than some value?
[18:20:04] <defunctzombie> I have been trying '$lte' and then the date string
[18:20:10] <defunctzombie> but it doesn't seem to be working
[18:20:20] <defunctzombie> the same query does work in a regular .find()
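
A common gotcha with $match on date fields is comparing against a plain string rather than a real date value; in the shell that means wrapping the value in ISODate. A sketch with an invented collection name:

    db.events.aggregate([
      { $match: { created_at: { $lte: ISODate("2014-08-01T00:00:00Z") } } }
    ])
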
[18:50:04] <sellout> Are there any multi-collection sample datasets for Mongo?
[19:15:43] <mozzie> Hello world.
[19:16:07] <mozzie> does anyone know of a french channel for mongodb ?
[19:16:33] <mozzie> (my english is not good, and I don't know how to explain my problem in this language)
[19:19:02] <uf6667> mozzie: I speak a little
[19:19:07] <uf6667> kk, what's the problem?
[19:22:16] <uf6667> of course :P
[19:23:07] <kali> mozzie: you have to do everything by hand, the engine won't do anything for you
[19:23:26] <uf6667> but a simple relation is just a foreign key, no?
[19:25:07] <kali> mozzie: yeah, but that doesn't exist in mongo. you can materialize a link equivalent to an FK, but mongo won't validate it (by checking that the destination exists) and won't let you run queries over both collections at the same time
[19:25:51] <uf6667> ah no, you have to do it manually
[19:25:59] <uf6667> but it's not too hard, is it? :P
[19:31:01] <mozzie> hum
[19:32:11] <kali> roughly, yes
[19:33:43] <kali> well, in that case you need to organize the data so that this doesn't happen, for example by embedding one table inside another
[19:35:11] <mozzie> a collection inside a collection ?
[19:38:00] <mozzie> thanks for the info, I'm going to go look at the docs a bit and see if it's really what I need for my project.
[19:38:08] <mozzie> thx for all the fish ;)
[19:59:37] <rickitan> Guys does anyone know if there is an open bug for using the $addToSet operator after doing the query with $in ?
[19:59:59] <rickitan> Ex: Topic.update({_id: {$in: topicIdsToAddOrganization}}, {$addToSet: { organizations: organizationId }})
[20:00:01] <saml> what's the issue?
[20:00:17] <rickitan> it doesn't update all the records
[20:00:51] <saml> do you have example docs?
[20:01:07] <saml> let me try
[20:01:25] <rickitan> Thank you saml. I can create some for you
[20:02:28] <saml> rickitan, you probably need 3rd param: {multi:true}
[20:02:45] <Es0teric> question about splitting chunks of data in a cluster
[20:02:54] <rickitan> let me try that saml
[20:03:20] <Es0teric> i have 13k items from a json array that need to be split to allow for quicker insertion
[20:03:44] <saml> rickitan, https://gist.github.com/saml/c27f3c061a968a89aa3d
[20:04:33] <rickitan> thank you very much saml. I hadn't used that property before.
[20:05:11] <saml> by default, .update() updates one doc.
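
Putting that together with rickitan's snippet, the multi-document form looks like this; the two variables are stand-ins for whatever the application supplies:

    var topicIdsToAddOrganization = [ObjectId(), ObjectId()];  // illustrative ids
    var organizationId = ObjectId();                           // illustrative id
    // without {multi: true}, only the first topic matched by $in would get the $addToSet
    db.topics.update(
      { _id: { $in: topicIdsToAddOrganization } },
      { $addToSet: { organizations: organizationId } },
      { multi: true }
    )
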
[20:07:10] <Es0teric> anyone?
[20:08:40] <saml> Es0teric, how do you insert them?
[20:08:53] <saml> mongoimport?
[20:09:13] <saml> you can't load 13k element array into memory?
[20:10:23] <Es0teric> saml well i am using moloquent on the laravel php framework
[20:10:57] <saml> and something's slow?
[20:11:03] <Es0teric> saml: its very slow
[20:11:11] <saml> and chunking would fix slowness? how?
[20:11:32] <Es0teric> saml: i am just going by what i found on the web
[20:11:43] <Es0teric> if theres a better way im open to suggestions
[20:18:16] <Es0teric> saml: ?
[20:23:30] <saml> Es0teric, upload the json file? i can try to import
[20:23:43] <saml> do you need additional processing before importing to a collection?
[20:23:51] <saml> or just want to import the file as is?
[20:33:22] <Es0teric> saml: i just want to import the data as is
[20:33:37] <Es0teric> i might do additional processing later like adding extra keys/values to the arrays
[20:49:33] <Es0teric> saml: ?
[20:49:53] <saml> try mongoimport Es0teric
[20:50:01] <Es0teric> mongoimport?
[20:50:48] <Es0teric> saml: ok let me explain the process i am using right now
[20:51:55] <Es0teric> first i have moloquent, which i call in the database seeder file i created… along with guzzle, an HTTP package that lets me fetch api data… i then use that data to insert into mongo using moloquent
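
What saml is suggesting would bypass the PHP layer entirely: dump the API response to a file and load it with the mongoimport command-line tool. An illustrative invocation, assuming the 13k items sit in a top-level JSON array (database, collection and file names are made up):

    mongoimport --db mydb --collection items --jsonArray --file items.json
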
[23:00:18] <klevison> for a geolocation-based application, is mongoDB the best choice? (5 million requests per day / 600GB of data, currently in SQL Server)
[23:48:22] <Kettlecooked> Looking at Trello as an example, their unique ID's might look something like 'q7nkmEqZ' - much tidier than a MongoDB ObjectID. Where would I start if I'd like to incorporate unique ID's that look more like Trello style?
[23:50:29] <Kettlecooked> I suppose I just make the _id type a String, and for every new document I generate an ID and check that it's not already in use? If it's taken - generate a new one - repeat until satisfied?
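
A minimal shell sketch of that generate-and-retry idea, assuming a 2.6-era shell where insert() returns a WriteResult; the alphabet, the 8-character length and the collection name are arbitrary choices:

    // build a short random id, Trello-style
    function shortId() {
      var chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
      var id = "";
      for (var i = 0; i < 8; i++) {
        id += chars[Math.floor(Math.random() * chars.length)];
      }
      return id;
    }
    // the unique index on _id does the duplicate check: retry until the insert goes through
    var doc = { title: "my card" };
    do {
      doc._id = shortId();
      var res = db.cards.insert(doc);
    } while (res.nInserted !== 1);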