PMXBOT Log file Viewer


#mongodb logs for Monday the 11th of March, 2013

[03:08:39] <mjuszcza1> Hi all. I have two mongo servers and an arbitrator sitting in front of them. The active mongo server responds fine -- "show dbs" returns instantly. But if I run "show dbs" on the server that isn't the active host, nothing returns. This is a new setup to me (I inherited it), so I'm trying to figure out if this is a normal behavior or not. I assume, somehow, the active server is sending the inactive server updates, and perhaps while that happen
[04:01:44] <Dr{Who}> does a baked in $now var exist? i want to return docs that are less than N seconds old. dont trust dates on php side to match db so would be best to let the db use its own time.
[05:22:00] <timeturner> is there a list of things I should know in preparing for production?
[05:22:11] <timeturner> I'm sure there are many things I need to do
[05:22:21] <timeturner> but I don't know where to find the list
[08:37:53] <[AD]Turbo> hi there
[08:57:31] <mnml_> is mongodb slower on recent GridFS files ?
[08:57:48] <mnml_> When I try to access them i get a lot of latency / IO happening
[09:24:27] <null_route> Hey guys! how do I cancel a query in the mongo shell when, say, I forget a "
[09:24:28] <null_route> ?
[09:28:47] <null_route> http://stackoverflow.com/questions/10953034/how-do-i-abort-a-query-in-mongo-javascript-shell
[09:28:50] <null_route> you can't be serious?
[09:28:59] <null_route> ( and 3 enters?
[09:30:54] <null_route> hmmm... 2 enters (without the left-"(" ) seems to work, too
[09:33:42] <Nodex> lol
[09:43:21] <alex88> is there a way to sort data from a group() operation?
[09:43:47] <kali> alex88: try the aggregation framework
[09:44:04] <alex88> kali: eheh I've looked at that but seems harder to use
[09:44:19] <kali> alex88: not much, and it's way more efficient
[09:44:32] <kali> alex88: it doesn't use javascript
[09:46:49] <alex88> actually I'm using the group function this way http://pastie.org/private/u7xvhtykapb3isxm5gl3fw you think it can be ported to aggregation fw? it basically just groups docs by the name giving a [{ name="somename", metrics=[docs] }, ... ] result
[09:49:22] <kali> alex88: mmm yeah, i think this is quite straightforward
[09:49:30] <kali> mmmmm parseInt
[09:49:35] <kali> alex88: ha, this is bad
[09:49:46] <alex88> kali: oh well, that can be ported away, no worries about that
[09:49:57] <alex88> I'll do in request parsing and not there
[09:50:05] <alex88> forgot to remove :)
[09:50:34] <kali> alex88: ha yeah, right, it's the parameter
[09:50:54] <alex88> yup that's just for filtering
[09:51:24] <alex88> so with $match I can do the filtering, with $sort the sorting
[09:51:37] <kali> with $group the grouping
[09:52:21] <alex88> but in the docs it shows that it groups, but then it just uses sum avg to calculate aggregate values
[09:52:44] <alex88> maybe I should use $addToSet to include the whole document?
[09:53:02] <Nodex> that's called $project in the AF
[09:54:01] <alex88> Nodex: since I want a doc with name="name" I should do $project: { name: 1 } but to add a metrics field with all the documents?
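The $match / $sort / $group pipeline sketched in the exchange above can be written out as follows. This is only a sketch: alex88's paste is no longer available, so the collection name (`metrics`) and the field names (`name`, `a`, `b`) are assumed stand-ins. $addToSet collects each grouped document into a `metrics` array, as discussed.

```javascript
// Sketch of the group()-to-aggregation port discussed above.
// Assumed: a "metrics" collection with documents like {name: ..., a: ..., b: ...}.
var pipeline = [
  { $match: { name: { $exists: true } } },           // filtering, via $match
  { $sort:  { a: 1 } },                              // sorting, via $sort
  { $group: {
      _id: "$name",                                  // group key: the name
      metrics: { $addToSet: { a: "$a", b: "$b" } }   // collect the grouped docs
  } }
];
// In the mongo shell: db.metrics.aggregate(pipeline)
// yields documents shaped like { _id: "somename", metrics: [ {a: ..., b: ...}, ... ] }
```

Grouping into `_id` and gathering the remaining fields with an accumulator is the aggregation-framework equivalent of the `[{ name: ..., metrics: [docs] }]` result alex88 was getting from group().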
[09:57:47] <fjay> why would my secondaries have more inserts going on than the master?
[09:58:02] <fjay> and have much higher lock time
[09:58:04] <fjay> (2.0 cluster)
[10:21:50] <Touillettes> Hi everyone, i have old tables on my sharding, is it possible to remove tables on the sharding ?
[10:47:08] <omid8bimo> hello, we have a php web application (experimental social networking site) running mongodb. one issue we have is that, when we open several pages of the website in different browser tabs, images won't load. when i closed like half of the tabs, the rest will start loading the images. there is no error in the mongo log or apache log. any ideas?
[10:50:49] <Nodex> perhaps your application is causing it?
[10:52:26] <omid8bimo> Nodex: you mean it can be due to mongo php driver?
[10:52:47] <Nodex> no, I mean perhaps it's down to your application code
[10:55:31] <omid8bimo> Nodex: well since it needs a huge debugging effort at the application level, i wanted to make sure if there's any limitation/setting on the mongodb side
[10:55:52] <omid8bimo> or any logs or something i can find a hint on what could be wrong
[10:59:07] <Nodex> I dont see how you arrive at the conclusion that Mongodb would be responsible for images not loading
[10:59:24] <Nodex> would be / possibly be
[11:03:18] <omid8bimo> Nodex: no i just wanted to make sure there is nothing on mongodb part for it. since i still have alot to learn about mongo.
[11:03:40] <omid8bimo> there is nothing on apache side as well that could cause this! so could it be php mongo driver?
[11:03:58] <Nodex> are you serving images through php?
[11:04:10] <Colonel-Rosa> morning people.
[11:04:36] <Colonel-Rosa> does anyone know if it's possible to paste a lot of insert lines into the mongo cli, ie https://gist.github.com/ThePixelDeveloper/2725a251f087dd070bfa
[11:05:05] <Colonel-Rosa> right now they concat in an odd manner
[11:06:06] <omid8bimo> Nodex: yes
[11:07:17] <Nodex> then that's probably your problem
[11:07:33] <Nodex> your PHP/Apache is probably choking under the load
[11:10:25] <omid8bimo> Nodex: what do you mean by load? if you mean Linux server load or even apache's child requests, its almost nothing. since its still in testing and only like 10/20 people are connected and opening pages.
[11:13:38] <Nodex> ok good luck
[12:04:08] <scoutz> hello
[12:05:19] <scoutz> i have a replica setup with 2 replicas and 1 arbiter, one of the replicas got completely wiped the other one took over as primary
[12:05:54] <scoutz> can i just start another replica and will it just take over the data from the primary?
[12:12:02] <fredix> hi
[12:12:31] <fredix> anyone know hown with ScopedDbConnection to connect to a replica set ?
[13:11:47] <nooga> hi
[13:12:39] <nooga> i've got an array of ObjectIds and i'd like to find all documents with ids from that array
[13:13:02] <nooga> will find({_id: [...]}) do what i want?
[13:17:00] <ron> NOoooooOOOOOga!
[13:17:16] <ron> nooga: well, you can easily TRY it and see ;)
[13:17:48] <nooga> trying with mongoose is harder than trying with cli ;>
[13:18:07] <nooga> maybe i'll try {_id: {$in: []}} first
[13:18:33] <ron> try with cli first. if it works there, it should work with whichever driver you use ;)
[13:19:05] <nooga> right
[13:19:10] <nooga> excuse me for being lazy
[13:19:22] <nooga> ;)
[13:20:03] <ron> right, excuse me for stating the obvious ;)
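For reference, the two query shapes nooga is weighing look like this (a minimal sketch; `coll` and the numeric ids are placeholders standing in for ObjectIds):

```javascript
// {_id: [...]} matches documents whose _id IS that exact array value;
// {_id: {$in: [...]}} matches documents whose _id is any element of the array.
var ids = [1, 2, 3];                      // stand-ins for an array of ObjectIds
var exactArrayMatch = { _id: ids };       // almost never what you want here
var inMatch = { _id: { $in: ids } };      // matches _id 1, 2 or 3
// In the mongo shell: db.coll.find(inMatch)
```

So nooga's second guess, `{_id: {$in: [...]}}`, is the right shape.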
[13:39:30] <Nodex> Derick : ping
[13:44:24] <Derick> Nodex: pong
[13:44:57] <Nodex> a quick question about FPM and the php driver...
[13:45:09] <Nodex> I am doing a large import currently and my log is going mad with http://pastebin.com/KEBkv6AK
[13:45:15] <Nodex> going mad -> filling up
[13:45:42] <Nodex> is this normal - to have this many entries in the log when inserting?
[13:46:01] <Derick> no
[13:46:15] <Derick> are you closing the connection yourself for each item or something?
[13:46:21] <Nodex> no
[13:46:36] <Nodex> let me double check that
[13:46:42] <Derick> can you play around with mongolog otherwise?
[13:47:57] <Nodex> my mistake, an old version of my wrapper that still called close() ... sorry
[13:48:19] <Nodex> that should stop my database dying so much now !!!
[13:49:02] <Nodex> and now I actually get a few thousand inserts per second finally haha
[13:52:33] <Derick> :D
[13:52:58] <Killerguy> hi
[13:53:04] <Killerguy> what is the max size of a shard key?
[13:56:01] <Derick> Killerguy: https://jira.mongodb.org/browse/SERVER-4271 is what I can find
[14:01:21] <Killerguy> thx Derick
[14:04:10] <nooga> okay
[14:04:26] <nooga> now I don't know and can't seem to find an answer
[14:05:37] <nooga> I've got a collection of documents as such {_id: ..., subdocs:[{a:1, b:2},{a:6, b:3}, ...]}
[14:06:46] <nooga> db.coll.findOne({_id: ..., "subdocs.a": 1}) and I want to get ONLY this thing in return {a:1, b:2}
[14:08:30] <nooga> is it even possible?
[14:10:40] <Nodex> if you mean all the "a's" in subdoc then no
[14:12:36] <nooga> maybe a more specific example, let's assume I've got such documents: {_id: ..., properties: [{name: ..., value: ...}, ...]}
[14:12:50] <nooga> i know the id of the document and want to get its property by name
[14:13:54] <nooga> so I say: get 749430's 'height' property and I get {name:'height', value:100}
[14:14:45] <nooga> still impossible?
[14:18:58] <Nodex> slice the fields ... "properties.name":1, "properties.value":1
[14:24:05] <nooga> Nodex: but this still gives me the whole array of subdocs
[14:24:10] <nooga> I just want one
[14:36:17] <remonvv> \o
[14:42:47] <Nodex> nooga, look at the positional operator
[14:48:36] <nooga> Nodex: I could find only examples for update
[14:53:42] <Nodex> you can return just one if you're using 2.2 with properties.$
[14:53:54] <Nodex> but you MUST match on the document you want to return
[14:54:01] <Nodex> subdocument*
[14:55:18] <nooga> so findOne({_id: '...', 'properties.name': 'something'}, {'properties.$': 1}) ?
[14:56:10] <Nodex> yeh
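Spelled out, the positional-projection query from this exchange (using nooga's 749430/'height' example; this needs MongoDB 2.2+, and the query must match on the array element you want projected):

```javascript
// Positional projection: "properties.$" returns only the first array element
// matched by the query. The collection name "coll" is a placeholder.
var query      = { _id: 749430, "properties.name": "height" };
var projection = { "properties.$": 1 };
// In the mongo shell:
//   db.coll.findOne(query, projection)
// returns something like { _id: 749430, properties: [ { name: "height", value: 100 } ] }
```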
[15:04:38] <shmoon> i'm stuck at doing something like
[15:05:09] <shmoon> select field1, field2 ... field5 from tbl group by field1
[15:05:38] <shmoon> db.coll.group() wont help, reading aggregation framework but apparently it requires me to have the fields in $group as well as $project sucks
[15:05:42] <shmoon> maybe map..reduce next :/
[15:05:50] <kali> shmoon: look at the aggregation framework
[15:05:52] <kali> Nodex: (2)
[15:06:23] <kali> shmoon: ha ! you already looked at it
[15:06:37] <kali> shmoon: well, you'd better use it than m/r anyway
[15:08:11] <shmoon> no see, i have task_id and message fields in $project but only task_id in $group, and the resultset only contains task_id fields not messages
[15:08:57] <kali> shmoon: use $addToSet on message
[15:13:28] <shmoon> $addToSet is used in $group
[15:20:35] <UForgotten> Is there a way to make a node that is a single-node replicaset back into a standalone and keep the data? I've only had one node in the set and it's ridiculous to keep the replication data around if I'm not going to have redundancy.
[15:21:51] <kali> UForgotten: just discard the --replSet and restart the server
[15:22:14] <UForgotten> kali: that won't get rid of the ridiculously large local collection though right?
[15:22:28] <kali> UForgotten: you can delete it
[15:23:27] <UForgotten> well I'm also moving to a new server, if I just omit copying the local files, it will work?
[15:23:34] <kali> yes
[15:24:30] <shmoon> kali: ok now i seem to understand whatever field i add in $project i need to add it as $first, $last or maybe $addToSet in $group, am i getting it right ?
[15:24:32] <Ephexeve_laptop> Anyone used MongoEngine here? Why is my datetime being saved like -> "date_crawled" : ISODate("2013-03-11T11:23:00.670Z") with a Z?
[15:24:57] <shmoon> but does that mean it does GROUP BY field1, field2, field3 all the ones mentioned in $group ?
[15:25:01] <kali> shmoon: mmm yeah, i think you got it right
[15:25:02] <shmoon> leading to slow query?
[15:25:23] <shmoon> or it just GROUP BY's the fields in _id ?
[15:25:27] <kali> shmoon: it groups on what's in the _id field of $group, and aggregates the rest
[15:25:36] <shmoon> cool awesome :D
[15:25:44] <shmoon> HELL YEAH!
[15:25:58] <shmoon> btw, why did you say I'd be better off using aggregation f/w than m/r ?
[15:26:17] <shmoon> actually I've used m/r last year just forgotten, so i thought it'd be same for me to read either m/r or aggregation f/w
[15:26:19] <kali> shmoon: because m/r has very poor performance, and no concurrency at all
[15:26:21] <shmoon> i went with f/w
[15:26:25] <shmoon> oh i see
[15:26:42] <shmoon> by concurrency you mean multiple m/r processes cannot run at the same time?
[15:26:45] <kali> yes
[15:26:51] <shmoon> ah got it thanks man
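The SQL shmoon started from ("select field1, field2 ... field5 from tbl group by field1") translates roughly as follows, using $first for the non-grouped fields as agreed above. The field and collection names are shmoon's own placeholders; which accumulator fits ($first, $last, $addToSet) depends on which value per group you actually want.

```javascript
// Group on field1; keep one representative value of each other field.
var pipeline = [
  { $group: {
      _id: "$field1",                 // GROUP BY field1
      field2: { $first: "$field2" },  // first value seen in each group
      field3: { $first: "$field3" },
      field4: { $first: "$field4" },
      field5: { $first: "$field5" }
  } }
];
// In the mongo shell: db.tbl.aggregate(pipeline)
```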
[15:43:39] <shmoon> anyone uses mongoose ? how can i use populate with aggregate?
[15:45:46] <jtomasrl> i have this document https://gist.github.com/jtomasrl/5135141, how can i order by "date"?
[15:51:57] <Nodex> "orders.date":1
[15:52:56] <shmoon> hm seems not possible meh
[16:10:28] <evenflowz> hi, does anyone know how in php-mongodb i can ask from the database the value of password where all email=bla?
[16:11:35] <evenflowz> right now im doing this: $filter = array('email' => 'bla@bla.com'); $cursor = $collection->find($filter);
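evenflowz's question went unanswered in the channel. The usual approach is a field projection: a second document naming the fields to return. Sketched here in mongo-shell form (the `users` collection name is an assumption); the legacy PHP driver's MongoCollection::find() accepts the same two shapes as arrays.

```javascript
// Return only the password field for documents matching the email.
var query  = { email: "bla@bla.com" };
var fields = { password: 1, _id: 0 };   // include password, suppress _id
// In the mongo shell: db.users.find(query, fields)
// Legacy PHP driver equivalent:
//   $cursor = $collection->find($filter, array('password' => 1, '_id' => 0));
```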
[18:29:06] <Almindor> hello
[18:29:46] <Almindor> I have a unique index on a collection but it seems inserts go through with same valued docs, isn't it enforced? (also what about updates to the field, are those supposed to be enforced?)
[18:37:40] <Almindor> oops, my fault, typo in the mongoosejs definition
[18:42:50] <gazarsgo> why do the docs for cloudformation+mongodb only make two nodes? dont you need 3 for a cluster?
[19:22:49] <mjuszcza1> What would cause a replica to be in a state of "3"
[19:22:55] <mjuszcza1> and not be responding to something simple like "show dbs"
[20:29:18] <jrdn> are there any disadvantages for making my document _id a hash?
[20:29:37] <jrdn> or an embedded document per se
[20:30:32] <jrdn> originally I was creating a string like, "2012-01-01_foo_bar=baz&haz=cat" as my id.. but doing this makes me have to have other fields just in case i want to do additional filtering...
[20:31:22] <jrdn> so is it bad to do: "_id": { date: "2012-01-01", name: "bar", "metadata": { haz: "cat" } }
[20:33:00] <preaction> jrdn, your document Id should not be dependent on any information inside the document. IDs are supposed to be an ID, not information
[20:33:22] <preaction> so, what you had was bad, what you're proposing is worse
[20:33:51] <jrdn> i'm performing upserts with $inc
[20:33:55] <gazarsgo> huh? how are you supposed to enable intelligent sharding if you don't embed metadata in _id ?
[20:34:10] <falenn> I've created _id that is a composite based on naturally occuring data and then hashed for better shard balancing
[20:34:13] <gazarsgo> am i mixing up behavior from other nosql databases w/ mongos ?
[20:34:37] <gazarsgo> oh, so hashing is ok but using data directly is bad?
[20:34:37] <preaction> is that what has to be done to get sharding? i'm not sure i like that idea
[20:35:06] <jrdn> if you guys are all responding to me
[20:35:10] <jrdn> currently, there is no sharding
[20:35:15] <falenn> ok
[20:35:16] <jrdn> but i do need to perform upserts
[20:35:22] <jrdn> so a hash seems like the quickest way
[20:35:43] <falenn> you do want to make sure your _id is unique = no collisions
[20:36:46] <jrdn> yes.
[20:36:47] <falenn> at MongoDC, it was recommended that _id contain data that allows for direct document reference as it is a primary index, if you can ensure uniqueness
[20:37:48] <preaction> the problem with making your ID related to the data inside the document (by hashing data inside the document) is when the data inside the document changes, so (should/does) the ID. IDs should never change
[20:38:04] <falenn> ah, yes yes! I agree
[20:38:16] <preaction> but i'm going based on ancient database knowledge, not mongodb specific knowledge, so i dunno what mongodb should do
[20:38:23] <falenn> The hash should only be on a few stable elements that define the entity
[20:38:24] <jrdn> preaction, data won't change
[20:38:25] <jrdn> that's the point
[20:38:27] <falenn> that's what I meant
[20:38:40] <gazarsgo> you shouldn't hash mutable fields for use in _id, agreed
[20:38:47] <falenn> right
[20:39:47] <jrdn> currently, I'm doing, db.events.update({ _id: "2012-01-01_foo_bar=baz&haz=cat" }, { $inc: { "d.foo": 1, "d.bar": 2 }})
[20:40:53] <jrdn> unless i'm supposed to do, db.events.update({ metadata: { foo: bar, baz: blah }}, { $inc: { "d.foo": 1, "d.bar": 2 }})
[20:40:57] <jrdn> then upsert that
[20:41:27] <jrdn> oops, i mean: db.events.update({ date: "2012-01-01", name: "foo", metadata: { foo: bar, baz: blah }}, { $inc: { "d.foo": 1, "d.bar": 2 }})
[20:45:41] <jrdn> and i need optimal write performance
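A sketch of the counter upsert jrdn is describing: match on the natural keys instead of packing them into a composite string _id, $inc the counters, and pass upsert so the document is created on first write. Field names come from jrdn's own examples; everything else here is assumed.

```javascript
// Increment per-day counters; upsert creates the document if it's missing.
var query  = { date: "2012-01-01", name: "foo", "metadata.haz": "cat" };
var update = { $inc: { "d.foo": 1, "d.bar": 2 } };
// In the mongo shell (2.x syntax):
//   db.events.update(query, update, { upsert: true })
```

This keeps _id as an opaque identifier (as preaction recommends) while still letting the upsert target the natural keys, at the cost of needing an index on those keys for write performance.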
[20:54:56] <saml> is salat crap?
[20:55:30] <ron> maybe
[20:55:33] <ron> wtf is salat?
[21:06:35] <kali> ron: salat is an odm for scala and mongodb
[21:06:42] <kali> saml: i don't think so
[21:06:45] <ron> who cares about scala?
[21:06:54] <kali> ron: scala rocks :)
[21:07:53] <ron> well, if you mean it has no intellect like a rock, then we agree.
[21:13:56] <mjuszczak> Is it possible to see the optime in MongoDB 1.6? I'm trying to figure out how far behind the secondary is.
[21:14:19] <Derick> rs.status() ?
[21:14:33] <Derick> then I am not sure whether 1.6 tells you...
[21:14:36] <mjuszczak> Not showing it :(
[21:14:51] <mjuszczak> It just says it's in a state of "3"
[21:15:20] <mjuszczak> and ls -lt /datadir shows a last modified of February 22nd :-/ I'm not quite sure if there's another way to see what's up. There are definitely files being modified (and growing) inside the rollback directory.
[21:18:03] <mjuszczak> I'm trying to figure out if the server is too far behind and needs to be rebuilt
[22:00:17] <mnml_> some of my gridfs files don't load (the server does a lot of IO and at some point it expires) does anyone know why this can happen ?
[22:00:29] <mnml_> it happens very randomly :p
[22:18:10] <mnml_> is it possible to setup a timeout ?
[22:35:00] <ExxKA> If anyone has been using mongoose, how did you integrate it with your domain model? Did you build the domain model around the mongoose data model or did you fit the mongoose model to your domain? I am having a hard time doing the later (fitting mongoose as an after thought)
[22:42:43] <cmendes0101> I'm having an issue where cursor->count() always returns 0. Does anyone know a workaround for getting the total number of records in a cursor?
[22:54:28] <ExxKA> isn't it because you have to move it just one tick to get the data?
[22:54:34] <ExxKA> cmendes0101
[22:55:03] <ExxKA> Cursors usually point to a placeholder until they are moved to be initialized
[22:57:07] <cmendes0101> We have 2 systems, 1 dev and 1 prod and when running on dev cursor->count returns correctly but on the other system always returns 0. I don't believe thats the problem since its working on dev. This is in perl also if that changes anything
[23:43:30] <svm_invictvs> What's the best way to do mongodb backups?