[00:28:23] <dstorrs> hi all. I have a collection (video_raw_stats) of stats data for 10M different videos. There is an index (vtag, +time_harvested). Will that index be used if I do a query for "most recent 10 stats for (vtag)", or do I need a separate index for (vtag, -time_harvested)?
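[A note on the question above: MongoDB can traverse a single index in reverse, so an index on { vtag: 1, time_harvested: 1 } generally serves a descending sort on time_harvested within one vtag; no second index with the opposite direction is needed. A shell sketch, using the names from the question (the vtag value is made up), with explain() to verify:]

```javascript
// The ascending compound index can be walked backwards, so this query
// should use it without an in-memory sort (check explain() output:
// the index name in "cursor", and no scanAndOrder).
db.video_raw_stats.find({ vtag: "abc123" })
                  .sort({ time_harvested: -1 })
                  .limit(10)
                  .explain()
```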
[07:37:31] <omid8bimo> hello. anyone here use 10gen MMS?
[07:40:47] <omid8bimo> in 10gen MMS, hostnames are displayed in orange on the host page. according to MMS help, it's due to a startup error or a low ulimit! how can i make sure a low ulimit is not the case?
[07:47:20] <omid8bimo> wereHamster: how? first of all, i want to make sure it's not a startup warning. my "last ping" entries in the hosts table are green. does this mean they are ok?
[10:12:09] <Guest55783> Hi all! I have some questions about MongoDB... I've tried an intensive write (counter upsert) workload: at the start, write performance is really high, but after some time it decreases dramatically. Could it be a problem with automatically generated ObjectIds? Or with indexes? (I have 30 different collections, each with 2 indexes)
[10:31:08] <Oddman> Guest55783, could be due to mongo optimizations noticing a pattern in inserts
[10:34:17] <Guest55783> Oddman, I'm thinking: until now I've specified upsert with criteria on the columns of interest (so in the background an equality query is run). Wouldn't it be better to pass the ObjectId directly (manually built)?
[10:35:36] <Oddman> it appears you've gone far more in-depth on performance than I have :)
[10:37:13] <igotux> what could be the reason for a node not showing up in rs.status(), while the primary and arbiter show the same output?
[10:40:51] <igotux> i recently added the arbiter node.. it shows up when i run rs.status() on the primary, but i don't see the arbiter node when i run rs.status() on the secondary
[11:41:05] <Bilge> Is it a good idea to always use binary for IDs because it compresses better?
[11:41:28] <wereHamster> it is a good idea to keep the size of the index small.
[11:44:25] <wereHamster> if you have option A and B, look at their sizes, and pick the smaller one.
[11:55:31] <pnh7> Hello all, I'm new to mongodb and now I'm trying to retrieve 1000 random records from a collection of 1million records. How do I do that?
[11:56:50] <wereHamster> pnh7: generate a random number between 0 and 1mil, then use offset+limit to get one
[11:58:11] <pnh7> Oddman: I couldn't find any solution online. I was only able to get a single random record using db.col.find().limit(-1).skip(randomNumber).next(), but not a random set of records.
[11:58:51] <pnh7> wereHamster: okay. will try that out.
[11:59:14] <wereHamster> pnh7: repeat that command 1000 times. You get 1000 random records
[11:59:45] <pnh7> isn't there a better solution? i'd think that's time-consuming.
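[The skip/limit approach above can be sketched in plain JavaScript; an in-memory array stands in for the collection, and the indexing step stands in for find().skip(offset).limit(-1).next(). Function and variable names are illustrative, not from the channel. Note pnh7's concern is valid: each skip is O(offset) server-side, so 1000 skips over 1M docs is slow; a common alternative is to store a pre-computed random field and query near a random value.]

```javascript
// Sketch of "repeat the random-skip command N times", deduplicating
// offsets so the sample contains distinct records.
function sampleRandom(collection, count) {
  var picked = {};  // offsets already used
  var out = [];
  while (out.length < count && out.length < collection.length) {
    var offset = Math.floor(Math.random() * collection.length);
    if (!picked[offset]) {
      picked[offset] = true;
      out.push(collection[offset]);  // stands in for skip(offset).limit(-1).next()
    }
  }
  return out;
}
```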
[13:01:31] <gyre007> what is the best, proven way to rotate mongo logs... I've seen a bunch of articles but they all differ...
[13:28:44] <schlitzer|freihe> hey there. we are planning on migrating a 2.0 sharded cluster to 2.2. the upgrade itself seems pretty simple. but what if we need to roll back for "some reason"? would the downgrade be as simple as the upgrade, just the other way around?
[13:29:02] <schlitzer|freihe> btw: we are not using authentication
[14:20:25] <dingens> i'm looking for information on whether something like $unwind in mongo's aggregation framework is possible if you do not have an array. what i want to do is split a document into two documents
[14:21:15] <dingens> my original document contains the fields a, b, c, x, y, z, the first resulting document should contain a,b,c, the other document should contain the fields x,y,z.
[14:21:28] <dingens> is something like this possible with the aggregation framework?
[14:22:02] <NodeX> not to my knowledge but I've only recently played with it
[14:22:36] <NodeX> the aggregation framework could do with a $process pipeline stage where one can do these things
[14:22:57] <NodeX> but most people use it serverside so your language should be able to deal with it
[14:26:00] <dingens> NodeX: do you mean $project instead of $process?
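[As far as the aggregation framework of that era goes, there is no stage that fans one document out into two; $unwind only works on arrays. The usual workaround is two separate projections (or splitting client-side), one per half. A shell sketch using the field names from dingens' description (the collection name is made up):]

```javascript
// Two passes, each keeping one half of the fields:
db.coll.aggregate({ $project: { a: 1, b: 1, c: 1 } })
db.coll.aggregate({ $project: { x: 1, y: 1, z: 1 } })
```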
[16:33:30] <saml> can't even get it to work. casbah
[16:34:54] <_m> saml: I'm not certain what you're looking for us to say. Do you require help setting up that driver? Are you asking for general advice in a use-case as to whether MongoDB fits?
[16:38:21] <dbe> Guys, for some reason, my local version of mongo allows me to use $add to concat strings in the $project part of an aggregation, but I'm getting an error saying $add doesn't support strings on the server
[16:42:12] <dbe> I use a string literal, "Unit ", and a variable which is a string
[16:42:40] <LoonaTick> Hi everybody. I have a cluster with 40 servers running mongos, connecting to 3 servers where mongod is running (3 shards, with 2 secondaries for each shard). I run gridfs. Somehow, sometimes the mongos on one of the 40 client servers stops serving a random document. It can still serve other documents, but it fails to serve this one until I restart the mongos. Anyone have experience with this, or know how to debug it?
[16:45:13] <NodeX> LoonaTick: how big is the doc it stops serving - or is it random docs ?
[16:45:24] <LoonaTick> saml: Thanks, there are even more actually. Webservers running nginx with the gridfs module
[16:45:34] <LoonaTick> NodeX: They are quite small images, up to 100kb
[16:45:51] <LoonaTick> but it stops serving images at random, both big and small ones
[16:46:03] <LoonaTick> and none of the images are bigger than 1MB or so
[16:46:54] <NodeX> LoonaTick : is it a random image
[16:47:09] <LoonaTick> Yes, it doesn't always stop serving the same image
[16:48:02] <LoonaTick> NodeX: I've set gridfs to restart every 30 minutes, and the problem still occurs, so I don't think it's memory-leak related
[16:48:19] <LoonaTick> I've set mongos to restart every 30 minutes**
[16:52:05] <devdazed_> say i have an object with a field that is an array of objects. how can I update a specific object within that array?
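[devdazed_'s question above is what the positional $ operator is for: it updates the first array element matched by the query. A sketch; the collection and field names here are made up for illustration:]

```javascript
// Match the document AND the array element in the query, then use
// "$" in the update to refer to the matched element.
db.orders.update(
  { _id: 1, "items.sku": "abc" },
  { $set: { "items.$.qty": 5 } }
)
```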
[16:52:09] <NodeX> sorry, I can't understand what you're telling me. Is it a random image or the same image every time? (please answer "random" or "same")
[16:55:38] <NodeX> just see then we'll know what's going on
[17:38:51] <nickbr> Hi, I'm having problems with the Mongo PHP driver, connecting to a replica set. I'm intermittently experiencing connection exceptions: couldn't determine master. I've seen posts about, like this one: https://groups.google.com/forum/?fromgroups=#!topic/mongodb-user/CPhQmpTW4GA
[17:39:24] <nickbr> However, I have tried closing the connection after performing a query, then performing the query again, and it still fails
[18:01:38] <nickbr> upgrading to 1.3.0beta2 gives the error 'No candidate servers found'
[18:03:51] <nickbr> and that isn't intermittent any more, I get that error all the time.
[18:06:22] <nickbr> I've downgraded back, restarted, and it seems to be working stably now. It was working well before, but then suddenly it just begins throwing exceptions as above, and is only successful about 1/20 times
[18:06:59] <nickbr> If anyone could help, or direct me to somewhere else for help, it would be much appreciated.
[18:09:02] <kali> are you sure the replica set is happy ?
[18:10:16] <nickbr> it says so when i check via the command line
[18:45:36] <Almindor> is there a way to use update with $set and get the value from the document itself? like $set: { x: this.x + this.y } or something similar
[18:50:18] <Almindor> it's forEach on 130 million rows for me then
[18:50:38] <wereHamster> Almindor: or compute x on the fly, when retrieving the document
[18:51:16] <MongoDBIdiot> or use a different database
[18:56:11] <Almindor> wereHamster: not possible I'm afraid, this was an import mistake (location imported as latlong instead of longlat) but non-imported rows are fine
[18:56:31] <Almindor> other than an ugly hack like "if ( doc.imported ) { swaploc }"
[18:56:38] <Almindor> I'd rather wait a few days for it to forEach
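[Almindor's forEach fix can be sketched as a pure coordinate-swap helper plus (in comments) the shell loop that would drive it. The field names ("loc", "imported") and the collection name are assumptions, not from the channel:]

```javascript
// Swap a [lat, lng] pair back to [lng, lat].
function swapLoc(loc) {
  return [loc[1], loc[0]];
}
// In the shell, driven over only the mis-imported rows, roughly:
// db.coll.find({ imported: true }).forEach(function (doc) {
//   db.coll.update({ _id: doc._id }, { $set: { loc: swapLoc(doc.loc) } });
// });
```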
[20:30:49] <kanzie> so, new to mongo and I'm trying to wrap my head around how to go about common operations. I have two collections, one for locations and the other for objects. I want to get all objects that match criteria X, grouped by location. Is this even possible without doing several complex loop-queries through something like php?
[20:32:43] <urbann> I'm new to mongoDB and I just can't get it to work!
[20:32:47] <kanzie> so location "office" has properties x, y, z. And in collection "objects" both stapler and pencil have a reference to the "office" locationID. I want Office { x, y, z, objects: { stapler, pencil } } in return (really mock-up code)
[20:33:11] <urbann> what I try to do is to connect to mongo using mongodb://localhost:27017 (something)
[20:33:24] <urbann> don't know what I am doing wrong
[20:33:59] <urbann> I have mac os x lion installed
[20:35:50] <urbann> Is there a step by step guide available somewhere?
[20:35:53] <kali> kanzie: this is a join. mongodb does not do joins
[20:36:49] <kali> kanzie: so basically you have two options. first is to perform this join application side, or re-design your schema
[20:37:19] <kali> urbann: have you started a server ?
[20:37:20] <kanzie> kali, sure you could think of it as a join. The alternative would be to query the objects-collection with $in and then get the location id and get all locations. then in php construct the right json-object.
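[The application-side join kanzie describes can be sketched as two fetches plus a pure-JS merge. The merge below is runnable; the field name locationId and the collection names in the commented fetches are assumptions for illustration:]

```javascript
// Group each object under its location. Assumes obj.locationId
// references loc._id.
function groupByLocation(locations, objects) {
  var byId = {};
  locations.forEach(function (loc) {
    byId[loc._id] = Object.assign({}, loc, { objects: [] });
  });
  objects.forEach(function (obj) {
    if (byId[obj.locationId]) byId[obj.locationId].objects.push(obj);
  });
  return byId;
}
// The two fetches in the shell / driver would be roughly:
// var objs = db.objects.find({ /* criteria X */ }).toArray();
// var ids  = objs.map(function (o) { return o.locationId; });
// var locs = db.locations.find({ _id: { $in: ids } }).toArray();
```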
[20:46:49] <kanzie> kali: thanks, just needed to hear that
[20:46:51] <urbann> kali: do you mean mongodb://localhost:28017 ??
[20:46:55] <kanzie> before I started banging my head
[20:47:06] <kali> urbann: na http://localhost:28017 but it is useless :)
[20:47:19] <kali> urbann: run mongo in the term and read the tutorial to get started
[20:47:36] <sambomartin> hi, with a compound index, e.g. db.products.ensureIndex( { "item": 1, "stock": 1, "type": 1 } ) if you did a find({$and: [{'item':'xxx'},{'type':123}]}) - would it use the index?
[20:48:10] <kali> sambomartin: maybe for the type bit, but you need to try it
[20:48:22] <urbann> kali: yes http://localhost:28017 give me a page with information about mongo
[20:48:30] <kali> sambomartin: find(...).explain() will tell you
[20:49:26] <sambomartin> ok thanks, i will try, i thought it might be able to predict
[20:49:26] <kali> urbann: i know. it just says that your mongo is alive and well. now go to the term, run mongo, and go through the tutorial. you won't get anything interesting with a browser
[20:49:28] <urbann> kali: I am also able to run commands and so on, but what I want to do is connect from MeteorJS
[20:52:03] <kali> sambomartin: are you trying to confuse the optimizer on purpose ? :)
[20:53:11] <urbann> kali: so I guess I should be able to go to mongodb://user:password@host:port/databasename (something?) in my browser?
[20:53:34] <kali> urbann: nope. only with mongodb drivers
[20:53:58] <sambomartin> what's normal practice then? say i have a user query that sometimes includes "stock" - it's not my schema btw, but in principle. would you have three compound indexes? (item,stock), (item,type), (item,stock,type)?
[20:54:12] <sambomartin> or filter the results after you've fetched based on first index?
[20:54:43] <urbann> kali: you say I can not just add that string in my browser?
[20:54:48] <kali> sambomartin: depends on the frequency of the request and the cardinalities
[20:55:27] <kali> urbann: correct. your browser is an http client. mongodb speaks the mongodb protocol, so your browser is useless for talking to mongo
[20:56:28] <urbann> kali: I see! so how can I see if mongodb://user:password@host:port/databasename respond to something?
[20:57:24] <sambomartin> how do others deal with this scenario?
[20:57:35] <kali> urbann: i think you're way over your head, here
[20:57:45] <girasquid> I'm having trouble doing a query based on a date, and I'm not sure what I'm getting wrong - here's the query: http://pastie.org/4893290 - what do I need to change to query on created_at being $gte than the date I pass in?
[20:57:48] <sambomartin> i guess one on type, stock and compound?
[20:58:33] <kali> sambomartin: it might be that the index on type is useless, if you only have 500 different values there
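[On sambomartin's question: a compound index serves queries on any left prefix of its fields, so a query on item+type can use { item, stock, type } only for the item part (the server then filters on type). explain() shows which case you're in. A shell sketch with the index from the question:]

```javascript
db.products.ensureIndex({ item: 1, stock: 1, type: 1 })
// (item, stock) is a left prefix -- the index serves both predicates:
db.products.find({ item: "xxx", stock: { $gt: 0 } }).explain()
// only "item" is a prefix here -- the index narrows to matching items,
// then documents are filtered on type:
db.products.find({ item: "xxx", type: 123 }).explain()
```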
[21:13:38] <_m> urbann: I can't help you install a node driver/application. You might want to check with the meteor or node people for instructions there.
[21:14:01] <_m> If you can access mongodb from the mongo command, then your instance is running. Try sample queries, etc there.
[21:16:22] <urbann> _m: I understand. At least I feel I got some hints that pointed me in the right direction, thanks
[21:37:48] <shiver> can anyone tell me if using an ubuntu upstart job to shut down mongodb is safe? i see that if the server takes too long to shut down it will be issued a SIGKILL. isn't this a bad idea?
[21:54:18] <shiver> Bilge, isn't that upstart's default response to a process that takes too long to stop?
[21:54:34] <shiver> if it takes longer than "kill seconds" it will be issued a SIGKILL
[21:54:49] <shiver> i thought a SIGKILL was a bad thing to do for mongo?
[21:55:38] <wereHamster> not if you have the journal enabled
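[One way to reduce the SIGKILL risk shiver describes is to raise Upstart's kill timeout so mongod has time to shut down cleanly on SIGTERM before escalation. A config fragment sketch; the path and the 300-second value are arbitrary examples:]

```
# /etc/init/mongodb.conf (fragment)
# Give mongod up to 300s to exit after SIGTERM before
# Upstart escalates to the unblockable SIGKILL.
kill timeout 300
```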
[21:57:16] <shiver> wereHamster, is that guaranteed? i have had a corruption on a mongodb instance in the past (though i dont know if journal was enabled back then)
[21:57:28] <shiver> which is why I am being rather cautious
[22:13:08] <glasser> I'm having trouble using ReplicaSets with the mongo shell. I run: "mongo -u username -p password host1.mongolayer.com,host2.mongolayer.com,host3.mongolayer.com/mydatabase" (which is the same username/password/hosts that I successfully use with my node.js script)
[22:13:39] <glasser> but this gives me errors when i try to do writes ("unauthorized db:admin lock") which I think is because i'm connecting to a secondary
[22:13:50] <glasser> and in fact, db.isMaster() reveals that I am connecting to a secondary and tells me what the primary is
[22:14:04] <glasser> and if i then close the shell and reopen it specifying just the primary, i can do my updates
[22:14:25] <glasser> but surely that's not the Right Way... surely there's a way to specify my complete ReplicaSet and have it automatically do writes to the primary, right?
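[A manual workaround for glasser's situation, assuming the shell of that era won't follow the primary automatically: ask isMaster for the primary and open a second connection to it. A sketch; the database, username, and password are placeholders from glasser's own command line:]

```javascript
// Discover the primary from the current (secondary) connection,
// then reconnect to it so writes succeed.
var primaryHost = db.isMaster().primary;   // e.g. "host1.mongolayer.com:27017"
var conn = new Mongo(primaryHost);
var pdb  = conn.getDB("mydatabase");
pdb.auth("username", "password");
```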
[22:20:39] <_m> glasser: Why not just connect to your primary and do writes? Your replicas will automatically sync.
[22:27:36] <_m> There's an example for your use-case there.
[22:27:54] <_m> Essentially, you'll need to add /?slaveOk=true
[22:28:18] <glasser> _m: So, that's what I do in my node app
[22:28:30] <glasser> but I don't know how to do that... oh, can i do that in the URL on the shell? gotcha.
[22:29:35] <_m> Yeah, unless I read the docs completely wrong. We have CNAME records which are automagically updated internally. Which means: mongo master.mydb.com works every time. =)
[22:29:54] <glasser> _m: Trying to just add "?slaveOk=true" to my URL at the shell has an auth error on connection
[22:36:20] <glasser> it has no access to the "admin" database
[22:37:20] <acidjazz> any1 got any mod_rewrite skills
[22:37:41] <_m> Yeah, but my guess is google can help you more quickly.
[22:38:15] <_m> glasser: I can't find anything specifically stating this is supported. Most people are saying, "Write a shell script to determine the master."
[22:38:31] <_m> glasser: I'll see what our sysadmin says on the topic.
[22:40:13] <_m> glasser: He says, "no, not without using the hackish thing I set up."
[22:40:26] <_m> Sorry if that's not supremely helpful.
[23:32:22] <shiver> "If a daemon does not shut down on receipt of this signal in a timely fashion, Upstart will send it the unblockable SIGKILL signal."
[23:41:27] <shiver> regardless, you're rather arrogant and condescending for your lack of knowledge in the subject you're trying to "help" with. thanks... i guess
[23:53:37] <zacharyp> somewhat sad really. I just joined this channel, after attempting to convince my coworker that IRC wasn't so bad. First thing I see was Bilge's WTF