PMXBOT Log file Viewer


#mongodb logs for Wednesday the 19th of June, 2013

[01:46:15] <aboosoyeed> hello i need urgent help on replicaset
[01:46:36] <aboosoyeed> i have 3 node setup
[01:46:47] <aboosoyeed> but my third node always crashes
[01:47:11] <aboosoyeed> log message is
[01:47:12] <aboosoyeed> Wed Jun 19 01:27:38.514 [rsSync] Socket recv() timeout 109.123.100.239:27017
[01:47:13] <aboosoyeed> Wed Jun 19 01:27:38.514 [rsSync] SocketException: remote: 109.123.100.239:27017 error: 9001 socket exception [3] server [109.123.100.239:27017]
[01:47:13] <aboosoyeed> Wed Jun 19 01:27:38.514 [rsSync] DBClientCursor::init call() failed
[01:47:13] <aboosoyeed> Wed Jun 19 01:27:38.524 [rsSync] replSet initial sync exception: 10276 DBClientBase::findN: transport error: twitnot.es:27017 ns: local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } 9 attempts remaining
[01:47:53] <aboosoyeed> help!!
[01:50:01] <aboosoyeed> need help with replica set
[03:35:51] <daslicht> hi
[03:36:25] <daslicht> what is the best way to save a date to mongodb?
[03:36:51] <daslicht> currently I am just saving my js Date() which results in:
[03:36:58] <daslicht> "date" : ISODate("2013-06-18T16:32:48.506Z")
[03:37:33] <daslicht> but when i reload it from the db it looks something like :Tue Jun 18 2013 18:32:48 GMT+0200 (CEST)
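What daslicht is seeing above is display, not data loss: MongoDB stores dates as a BSON UTC datetime (milliseconds since the epoch), and the client renders that same instant in the local timezone on read. A minimal Python sketch of the idea, reusing the timestamp from the log (CEST offset assumed to be UTC+2):

```python
from datetime import datetime, timezone, timedelta

# MongoDB stores dates as UTC milliseconds since the epoch (BSON UTC datetime).
# A driver may *display* the value in the local timezone on read, but the
# underlying instant is unchanged -- as in daslicht's example above.
stored = datetime(2013, 6, 18, 16, 32, 48, 506000, tzinfo=timezone.utc)

# Render the same instant in CEST (UTC+2), which is how the JS Date printed it:
cest = timezone(timedelta(hours=2))
local_view = stored.astimezone(cest)

assert stored == local_view    # identical instant
assert local_view.hour == 18   # 16:32 UTC == 18:32 in UTC+2
```

So "2013-06-18T16:32:48.506Z" and "Tue Jun 18 2013 18:32:48 GMT+0200 (CEST)" are the same moment in time.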
[07:31:54] <[AD]Turbo> ciao all
[08:44:53] <idank> i'm migrating my data from a single mongodb to a sharded cluster with 3 physical servers. one collection has 43 million documents and i noticed a peculiar pattern in the write performance of that collection over the last 48 hours
[08:45:44] <idank> this graph shows the iowait of the servers involved: the blue line is an average across the sharded cluster, and the green one is the single mongodb instance that i'm reading from https://dl.dropboxusercontent.com/u/18261201/iowait.jpg
[08:46:53] <idank> it seems like as the sharded collection is being filled, the cluster is waiting more for io
[08:47:53] <idank> this is the code that does the copy: https://gist.github.com/idank/5812727
[08:48:12] <idank> any ideas what can cause this?
[08:48:20] <idank> (i have a guess)
[11:44:17] <remonvv> \o
[11:45:13] <Zelest> o/
[11:46:27] <shmoon> excuse me
[11:46:37] <shmoon> how to update a record with multiple criteria?
[11:46:45] <shmoon> basically where col1 = val1 AND col2 = val2 in sql
[11:47:01] <shmoon> {col1: val1, col2: val2} as criteria in update() doesn't seem to work
[11:47:11] <Zelest> {foo: bar, bleh: moo}
[11:47:12] <Zelest> ?
[11:47:28] <Zelest> maybe I should've used your col1, val1 examples :P
[11:48:02] <shmoon> hm it works actually
[11:48:08] <shmoon> dunno why PHP wont work with it then :(
[11:48:37] <shmoon> would be same in php, $where = ['col1' => 'val1', 'col2' => 'val2'] no?
[11:48:54] <Zelest> mhm
[11:49:03] <Zelest> numerics?
[11:49:07] <Zelest> remember that 1 != '1'
[11:49:22] <Zelest> so you might need to do a cast or what not
[11:49:30] <shmoon> true these are numerics, but i casted them to (int)
[11:49:44] <Zelest> hmms.. should work :o
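Zelest's point that `1 != '1'` is the crux here: MongoDB matches criteria by type as well as value, so a string `'1'` never matches a stored integer `1` (though the numeric types themselves compare loosely, e.g. int vs double). A toy Python AND-matcher over flat documents, purely to illustrate the semantics (not the real driver):

```python
def matches(doc, criteria):
    """Toy AND-matcher over flat documents. Real MongoDB compares numeric
    types loosely (1 matches 1.0) but never matches a number to a string,
    which is the trap Zelest is pointing at."""
    def eq(a, b):
        if isinstance(a, bool) or isinstance(b, bool):
            return a is b
        if isinstance(a, (int, float)) and isinstance(b, (int, float)):
            return a == b                      # numeric types compare loosely
        return type(a) is type(b) and a == b   # otherwise types must match too
    return all(k in doc and eq(doc[k], v) for k, v in criteria.items())

doc = {"col1": 1, "col2": 2}

assert matches(doc, {"col1": 1, "col2": 2})   # both criteria hit
assert matches(doc, {"col1": 1.0})            # int vs float still matches
assert not matches(doc, {"col1": "1"})        # string never matches a number
```

This is why casting the PHP values with `(int)` before building the criteria array is the right instinct.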
[11:50:12] <shmoon> i can show my code
[11:50:13] <shmoon> wait
[11:50:18] <shmoon> i am stuck for an hour now :/
[11:50:37] <Zelest> :/
[11:51:59] <shmoon> see if anyone can help http://pastie.org/private/vm64h8gd9hq79lc1yotk2q
[11:53:24] <shmoon> no idea what the heck is up :|
[11:53:36] <Zelest> how does the data look like in mongo?
[11:53:41] <Zelest> the doc you want to update that is
[11:54:26] <shmoon> after the update http://pastie.org/private/d2nowzpzysbptzssiqujg
[11:54:42] <shmoon> one of the head 3 should have become val10000
[11:55:02] <shmoon> i $set foo to 'socks' but nope
[11:55:05] <shmoon> doesn't work either
[11:55:36] <Zelest> what happens if you run a findOne on $where ?
[11:55:44] <shmoon> sec
[11:56:21] <Zelest> also, are you switching database in connection(); ?
[11:56:29] <Zelest> so you're not fooling around with the wrong db? :)
[11:57:33] <shmoon> not really, findOne is NULL
[11:57:42] <Zelest> hmms
[11:57:46] <Zelest> again, correct db? :)
[11:58:37] <shmoon> from a mongo collection how do i var_dump the DB selected, any idea?
[11:59:00] <Zelest> var_dump?
[11:59:03] <Zelest> like, check what db you're in?
[11:59:09] <shmoon> yeah
[11:59:11] <Zelest> db
[11:59:12] <Zelest> :P
[11:59:24] <shmoon> you mean $coll->db ?
[11:59:37] <Zelest> ooh
[11:59:39] <Zelest> in php
[11:59:41] <Zelest> *slow*
[11:59:52] <Zelest> var_dump on $mongo
[12:00:05] <Zelest> not sure tbh
[12:00:19] <shmoon> ok np
[12:00:21] <Zelest> no idea what LMongo is
[12:00:26] <shmoon> actually maybe this laravel mongo adapter is broken
[12:00:32] <shmoon> i should probably switch to native php driver
[12:00:33] <shmoon> ugh
[12:00:48] <shmoon> laravel is a php framework and lmongo is a mongo library for laravel
[12:02:08] <Zelest> peclah
[12:02:10] <Zelest> oh
[12:02:11] <Zelest> ah*
[12:02:35] <Zelest> but yeah, if you haven't switched db, it's probably using "test"
[13:41:09] <spuz> hello, is it possible to query a collection that looks like this: {"Bob":23, "Bill":24, "Jill":25} to return just elements beginning with "B" or ending in "ill"?
[13:46:03] <spuz> or is the general advice to have known field names for your data?
[13:51:46] <redsand_> spuz: you'd need a map/reduce function
[13:51:53] <redsand_> i dont think aggregate functions can do that
[13:52:12] <spuz> redsand: ok I'll look into it
[13:52:34] <starfly> $regex
[13:52:54] <bartzy> Hey :)
[13:54:49] <bartzy> Derick: My weekly nag - On the PHP driver 1.3+, if I do $m = new MongoClient($server); and I keep the script open for a day, no timeout will occur? The connection will remain open?
[13:54:55] <kali> spuz: it's recommended that the field names are keywords of your app, not actual data
[13:55:11] <bartzy> ah, kali, perhaps you know that piece about the PHP driver? :)
[13:55:15] <Derick> bartzy: the connection will remain open unless something closes it
[13:55:21] <kali> spuz: so you should probably refactor this as [{ name: "Bill", value: 23 }, ...]
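kali's refactor moves the data out of the field names and into values, which is what makes spuz's "starts with B or ends in ill" query expressible (in the shell it would be roughly `db.coll.find({name: {$regex: "^B|ill$"}})`). A Python sketch of the reshaping and the regex match, using Python's `re` to stand in for `$regex`:

```python
import re

# Original shape puts data in the *keys*, which MongoDB can't query directly:
original = {"Bob": 23, "Bill": 24, "Jill": 25}

# kali's refactor: data moves into values, so regex queries on "name" work.
docs = [{"name": k, "value": v} for k, v in original.items()]

pattern = re.compile(r"^B|ill$")   # starts with "B" OR ends in "ill"
hits = sorted(d["name"] for d in docs if pattern.search(d["name"]))

assert hits == ["Bill", "Bob", "Jill"]
```

With known field names like `name` and `value` you can also index them, which the dynamic-key shape never allows.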
[13:55:25] <Derick> either a replicaset failover or network interruptions
[13:55:30] <Derick> bartzy: please do upgrade to 1.4.1 though
[13:55:39] <bartzy> Derick: Something = ? besides the obvious network stuff
[13:55:58] <kali> bartzy: i stopped using php in the previous millennium :)
[13:56:13] <bartzy> And if the connection is closed - The driver will try to reconnect or will it throw a MongoConnection exception ?
[13:56:22] <bartzy> kali: :)
[13:56:24] <Derick> bartzy: the driver won't do it on its own
[13:56:34] <spuz> kali: I think I will do that
[13:56:42] <Derick> bartzy: the driver will throw an exception at least once, but will try to reconnect.
[13:56:54] <kali> spuz: yeah, i strongly recommend it
[13:57:14] <bartzy> Derick: Will or won't? :P
[13:57:26] <Derick> both "will"
[13:57:32] <Derick> sorry for the broken grammar
[13:57:53] <bartzy> So basically it will always try to reconnect?
[13:58:12] <Derick> yes, but you do need to catch the exception, and you will have to redo the operation that caused the exception
[13:58:14] <bartzy> What do you mean by throw an exception at least once? It will throw an exception but will reconnect ?
[13:58:28] <Derick> yes
[13:58:31] <bartzy> so I can get into a try-catch loop ?
[13:58:43] <Derick> bartzy: what do you mean?
[13:59:11] <bartzy> try { return $m->find(..) } catch (MongoConnectionException $e) { return $m->find(...); }
[13:59:45] <bartzy> That can be repeated indefinitely cause the driver will fail, will try to reconnect, throw an exception, then fails again ?
[14:00:22] <Derick> actually, no. If you try to connect to a server that is really down, it will be put on a blacklist for 60 secs.
[14:00:47] <Derick> but yes, you can get into such a loop if things are really bad :)
[14:01:04] <Derick> but that probably means you'll have to manually intervene to get mongod working properly again
[14:21:02] <bartzy> Derick: Thanks. sorry for the delay. So basically I can get into a loop, but with interval of 60 secs ?
[14:21:07] <bartzy> (which is fine by me)
[14:21:27] <Derick> well, you will need to do the sleep(60) yourself
[14:23:28] <bartzy> ah, lol :) so if I don't, I will have a crazy loop but the driver will try to reconnect only every 60 secs ?
[14:23:40] <bartzy> Derick: Also - why upgrade to 1.4.1? Bug fixes or something major ?
[14:24:15] <Derick> yes
[14:24:20] <Derick> much better replicaset handling
[14:24:32] <Derick> bartzy: and yes, the driver will only try every 60 seconds
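The pattern Derick describes is: the driver throws on a dead connection, the caller catches, sleeps, and redoes the operation; a server that is really down stays blacklisted for 60 seconds. A hypothetical Python retry helper sketching that loop (names and the fake `flaky_find` are illustrative; in practice `delay` would be 60, and the exception type would be the driver's connection exception rather than the builtin one used here):

```python
import time

def with_retry(operation, attempts=3, delay=0.01):
    """Re-run `operation` after a connection error, sleeping between tries.
    The real driver blacklists a dead server for 60s, so `delay` would be
    60 in practice; it is tiny here only so the sketch runs fast."""
    last = None
    for _ in range(attempts):
        try:
            return operation()
        except ConnectionError as exc:
            last = exc
            time.sleep(delay)
    raise last

# Fake operation that fails twice, then succeeds -- stands in for $m->find().
calls = {"n": 0}
def flaky_find():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("no reachable server")
    return [{"_id": 1}]

assert with_retry(flaky_find) == [{"_id": 1}]
assert calls["n"] == 3
```

The key point from the exchange: the driver reconnects, but redoing the failed operation (and pacing the retries) is the application's job.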
[14:26:44] <bartzy> Derick: what has changed with replicaset handling? :D
[14:27:08] <bartzy> Derick: BTW, were you at the MongoDB Days conference in Tel Aviv? :)
[14:27:15] <idank> I posted a topic to mongodb-users like 10 hours ago and I still don't see it here https://groups.google.com/forum/?fromgroups#!forum/mongodb-user
[14:27:18] <idank> why is that?
[14:28:04] <Derick> bartzy: changed... quite a few things. Without going into much detail, we handle longer-down nodes better now. And do a lot better with replicaset member failover
[14:28:10] <Derick> I wasn't in Tel Aviv
[14:28:17] <Derick> idank: new users get moderated :-/
[14:28:19] <bartzy> Derick: Cool, thanks !
[14:28:39] <idank> Derick: and a moderator hasn't signed off on it yet?
[14:28:50] <Derick> idank: seems so - I can't help there unfortunately :-/
[14:28:59] <idank> okay
[14:29:58] <idank> what can explain poor write performance over time to a sharded cluster with a hashed shard key?
[14:30:13] <idank> as the collection grows, iowait increases on the shard servers
[14:34:06] <spuz> is there something similar to $elemMatch that will return more than one element from an array?
[14:59:34] <bartzy> Derick: Is MongoCollection::save() just sugar for an upsert ?
[15:00:19] <Derick> yeah
[15:04:31] <bartzy> :)
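Derick confirms that `MongoCollection::save()` is sugar for an upsert: insert when the document has no `_id`, otherwise replace-or-insert by `_id`. A toy Python sketch of those semantics over a dict standing in for a collection (the counter-based id is a stand-in for a real ObjectId):

```python
def save(collection, doc):
    """Toy version of MongoCollection::save(): insert when there's no _id,
    otherwise upsert (replace-or-insert) keyed by _id."""
    if "_id" not in doc:
        doc = dict(doc, _id=len(collection) + 1)   # stand-in for ObjectId
    collection[doc["_id"]] = doc
    return doc["_id"]

coll = {}
new_id = save(coll, {"name": "bob"})           # no _id -> insert, id generated
assert coll[new_id]["name"] == "bob"

save(coll, {"_id": new_id, "name": "bobby"})   # _id present -> replaces
assert coll[new_id]["name"] == "bobby"
assert len(coll) == 1                          # replaced, not duplicated
```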
[15:17:32] <spuz> what is the mongo style for field names?
[15:17:59] <spuz> all lowercase, or all_lower_case_with_underscores, or camelCase?
[15:19:55] <kali> spuz: mongo does not care
[15:20:10] <spuz> kali: is there no conventional style then?
[15:20:37] <Derick> nope
[15:20:42] <kali> spuz: no more than in the relational world. most people will transpose the convention from their app framework / language
[15:20:59] <spuz> kali: ok thanks
[15:21:30] <kali> spuz: the only thing is... when you store lots of small values, keeping the names short can make a difference
[15:21:46] <spuz> kali: yeah I just found some documentation on that
[15:22:14] <kali> spuz: in some parts of my dbs, i use one-letter or two-letter field names
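kali's point about short field names: BSON stores every field name inside every document, so with millions of small documents the names themselves become a measurable share of the data. A rough illustration using JSON length as a crude proxy for BSON size (the exact byte counts differ, but the per-document repetition is the same idea):

```python
import json

# Field names are stored in every BSON document; with millions of tiny
# documents, long names add up. JSON length is only a crude proxy for
# BSON size, but the effect is identical in kind.
long_doc  = {"temperature": 21, "humidity": 40}
short_doc = {"t": 21, "h": 40}

long_size  = len(json.dumps(long_doc))
short_size = len(json.dumps(short_doc))

assert short_size < long_size
saving_per_doc = long_size - short_size
# The saving repeats per document, so over e.g. 10M docs it scales linearly:
total_saving = saving_per_doc * 10_000_000
```

This is the trade kali describes: one- or two-letter names cost readability but save real space on high-volume collections.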
[16:38:04] <jimbishop> is there any way to see where you are when you're running a db.copyDatabase() ?
[16:38:59] <jimbishop> i've got a 25G mongo db that i'm trying to copy, and the actual files finished copying almost 2 hours ago. the indexing seems to be going and going with no way to reference where i am in the process.
[16:42:24] <starfly> jimbishop: patience, Grasshopper
[16:44:22] <jimbishop> sure… but how long?
[16:44:42] <jimbishop> am i talking within the hour or many hours?
[16:45:13] <starfly> jimbishop: ah, if you have to ask… :) I have no idea how heavily indexed your collections are, if heavy, could be hours
[16:46:00] <jimbishop> i've just never run a databaseCopy before. is it basically the same thing as re-running all of the indexes?
[16:47:59] <starfly> jimbishop: I haven't looked at that code, but presumably doesn't involve reindexing, more a copy of the indexes
[16:50:50] <jimbishop> the data files finished copying 2 hours ago. and since then i'm getting "msg" : "index: (1/3) external sort Index: (1/3) External Sort Progress: 2129436/5125186 41%",
[16:50:50] <jimbishop> "progress" : {
[16:50:51] <jimbishop> "done" : 2129437,
[16:50:52] <jimbishop> "total" : 5125186
[16:50:53] <jimbishop> },
[16:50:54] <jimbishop> in a loop
[16:52:24] <starfly> jimbishop: looks like sort operation for reindexing
[16:52:46] <jimbishop> here's the full: http://puu.sh/3jnXR.png
[16:54:02] <jimbishop> so basically there's no way to get an actual progress indicator for where you are in the databaseCopy process?
[16:54:54] <jimbishop> because i'd rather not kill this process if i'm even 80% of the way done
[16:55:10] <jimbishop> but if i'm only 20% of the way done… i need to try something else
[16:55:18] <starfly> jimbishop: other than interpreting what that log is providing (45% complete at your snapshot), not that I know of
[16:56:47] <jimbishop> yeah, that's a rolling 45% on the second of a 3-part process, and i've seen it turn over about 100x now
[19:32:23] <idank> I submitted a topic to mongodb-user earlier today, can anyone here approve it?
[20:20:55] <astropriate> hello friends
[20:21:22] <astropriate> Is the mongodb c++ driver thread safe? as in can I use the same connection from multiple threads?
[20:21:28] <astropriate> or must I create a new one for each
[20:23:09] <leifw> it's not thread safe
[20:23:15] <leifw> you should look at ScopedDbConnection
[20:40:29] <mustmodify> I'm manning the free Rails hotline... someone called asking for help with their server config. I've gotten them past some ruby and apache related issues and now I'm getting a mongo error. Hopefully someone can help me.
[20:40:36] <mustmodify> "Could not connect to a master node."
[20:40:41] <mustmodify> so presumably it isn't started?
[20:41:21] <wieshka> Is it possible to setup normal Replica Set on 2 mongo instances ?
[20:45:35] <mustmodify> so I started mongo and I'm getting this error: "Failed to connect to a master node at 127.0.0.1:27017" ... is that the default port?
[20:49:34] <jp-work> mustmodify: I think that's the default port
[20:49:44] <jp-work> that probably means your server failed to start
[20:49:51] <jp-work> or is still starting
[20:50:01] <jp-work> it does some initialization the first time
[20:50:10] <jp-work> like preallocating files and so
[20:50:35] <jp-work> you need to check the log to see what happened / is happening
[21:01:29] <mustmodify> ok new question... once installed will Mongo automatically start up as a service when the machine is rebooted? Or do I need to do something?
[21:01:57] <mustmodify> Seems like it isn't starting up right now because there isn't enough drive space so they are resizing their machine... just wondering whether they should expect Mongo to work when it restarts.
[21:55:26] <alchimista_> how do i translate this to mongodb: select * from pins where id in (select id from msgs where username='bob')
[21:56:38] <Derick> you don't
[21:56:46] <Derick> joins are not possible
[21:56:50] <Derick> you need to do two queries
[21:57:15] <Derick> or, perhaps better, redesign your schema so that you can do it in one query
[21:57:16] <alchimista_> I have a collection named msgs and if i do like this "db.msgs.find({username:'bob'},{id:true})" I have the result that i want
[21:57:30] <alchimista_> so the first query is this: db.msgs.find({username:'bob'},{id:true})
[21:58:07] <alchimista_> now from this result how i have to do the 2nd query, I mean this part --> select * from pins where id in (...)
[21:59:57] <alchimista_> Derick: I think a redesign of the schema in this case is not a good solution for me, but any idea how to query another collection considering the result of the first query?
[22:00:24] <Derick> alchimista_: you should always consider aligning your data schema with how you query, insert, update and delete things
[22:00:38] <Derick> I know it's a big shift from RDBMSes, but it is the noSQL qay
[22:00:40] <Derick> way*
[22:00:58] <Derick> you can still do an In query:
[22:01:18] <Derick> db.pins.find( { _id: { $in: [ 1, 4, 2, 55 ] } } );
[22:02:18] <alchimista_> so does that query return only one document as a result or all the documents that match the list?!
[22:03:02] <Derick> it will return all the documents
[22:03:17] <Derick> but you really should store the username with the pins instead
[22:03:21] <Derick> so that you can do one query:
[22:03:32] <Derick> db.pins.find( { username: 'bob' } );
[22:03:55] <alchimista_> yeah, I like this idea
[22:03:58] <alchimista_> thank you
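The two-query pattern Derick describes (collect ids with the first query, feed them to `$in` in the second) can be sketched in Python over in-memory lists; the collection contents below are invented purely for illustration:

```python
# Invented sample data standing in for the msgs and pins collections:
msgs = [
    {"id": 1, "username": "bob"},
    {"id": 2, "username": "alice"},
    {"id": 4, "username": "bob"},
]
pins = [
    {"id": 1, "title": "first"},
    {"id": 2, "title": "second"},
    {"id": 4, "title": "third"},
]

# Query 1: db.msgs.find({username: 'bob'}, {id: true})
bob_ids = [m["id"] for m in msgs if m["username"] == "bob"]

# Query 2: db.pins.find({id: {$in: bob_ids}})
bob_pins = [p for p in pins if p["id"] in bob_ids]

assert bob_ids == [1, 4]
assert [p["title"] for p in bob_pins] == ["first", "third"]
```

Derick's preferred alternative, denormalizing the username onto the pins, collapses this to the single query `db.pins.find({username: 'bob'})` at the cost of duplicating the username.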
[22:18:28] <alchimista_> Derick: actually I am using mongoosejs with nodejs this is what i wrote: https://gist.github.com/anonymous/c4e8567a40125ece7d31 , it is null
[22:18:29] <alchimista_> any hint
[22:19:13] <Derick> sorry, can't help you with NodeJS stuff
[22:45:57] <harenson> alchimista_: maybe you could get better support with node stuff at #col.js
[23:54:10] <brycelane> I have a use case question: when storing objects that represent classes in say python or ruby, do people tend to make collections for each type of object, or attach an id tag when storing them and put them in the same collection?
[23:54:43] <Astral303> does mongos create a lock file like mongod when started via --fork?
[23:55:04] <Astral303> seems like the answer is no