PMXBOT Log file Viewer


#mongodb logs for Friday the 25th of October, 2013

[00:49:36] <jnewt> eph3meral, http://pastebin.com/L5uLfzyB
[00:50:55] <eph3meral> jnewt, cool, gimme a minute
[00:51:44] <eph3meral> jnewt, k, have you tested copying and pasting any of that?
[00:51:53] <eph3meral> I can't really just copy paste an obvious portion and start using the data
[00:52:18] <eph3meral> jnewt, remove those "..." parts and surround the relevant portions with db.somecollection.insert(); for example
[00:52:50] <eph3meral> jnewt, include a use statement at the top, make it *super* easy, otherwise I personally at least (and probably many others) don't have or care to take the time to do it ourselves for your benefit :)
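What eph3meral is asking for is a paste that can be run as-is in the mongo shell; a minimal sketch (database, collection, and field names are hypothetical):

```javascript
// Paste-ready repro: switch to a scratch database, insert sample docs, query.
// "use" is a mongo shell helper, so this runs in an interactive shell session.
use repro_test
db.somecollection.insert({ name: "example", value: 1 })
db.somecollection.insert({ name: "example2", value: 2 })
db.somecollection.find()   // confirm the data is there and queryable
```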
[06:50:34] <omnidan> I have a quick question. I'm trying to get a list of objects from a collection by searching for the ID of the reference. The reference is stored in "assigned_to" and I tried {"assigned_to.id": "ID"}, {"assigned_to": {"id": "ID"}} and ObjectId("ID") instead of "ID"; none of these queries worked
[08:26:28] <jihyun> Hi, I'm testing sharding with hashed key,
[08:27:15] <jihyun> I added an existing mongod as a shard and enabled sharding for an existing collection (~1GB)
[08:27:52] <jihyun> but the collection is not distributed across multiple shards, and all queries are directed to a single mongod instance.
[08:29:00] <jihyun> Documentation says that chunks larger than 64MB will be split,
[08:29:15] <jihyun> is there any way to trigger balancing/splitting?
[08:29:52] <kali> jihyun: first, check out getShardDistribution() on the collection
[08:30:54] <jihyun> kali, http://pastebin.com/TVrzBXAt
[08:31:19] <kali> there is only one chunk
[08:31:46] <jihyun> how do I split the collection into multiple chunks?
[08:32:37] <kali> jihyun: show us a "sh.status()" please :)
[08:34:01] <jihyun> kali, http://pastebin.com/SqhwTcNn
[08:34:24] <jihyun> There are more collections, but all of them are in a similar state (1-2 chunks)
[08:35:35] <kali> jihyun: which one are you struggling with ?
[08:36:32] <jihyun> there's a collection named 'user', which has >90% of total data,
[08:36:56] <jihyun> and the collection also remains as a single chunk.
[08:37:25] <jihyun> same state with "development.friend" collection
[08:39:01] <jihyun> Here's stats http://pastebin.com/v1iCQGtN
[08:40:06] <kali> can you check your balancer ? http://docs.mongodb.org/manual/tutorial/manage-sharded-cluster-balancer/#check-the-balancer-lock
[08:41:31] <jihyun> "why" : "doing balance round"
[08:41:59] <jihyun> state => 1, when => 2013-10-25T08:37:12.209Z
[08:42:50] <kali> i think state 1 means it's doing stuff
[08:43:57] <jihyun> Okay, maybe I should wait for a while. Thanks kali!
[08:43:58] <kali> try db.changelog.find({}).sort({ time: -1 })
[08:44:25] <jihyun> "socket exception [SEND_ERROR] for 10.17.14.50:27010
[08:44:43] <kali> irk
[08:45:36] <jihyun> Oh, that's a config log from 'development' database.
[08:47:39] <jihyun> from config database -> { "_id" : "gm-sdb1-2013-10-25T07:53:22-526a2372cf425fdbcd5c1cdc", "server" : "gm-sdb1", "clientAddr" : "10.17.14.52:49558", "time" : ISODate("2013-10-25T07:53:22.393Z"), "what" : "split", "ns" : "development.gamedata", "details" : { "before" : { "min" : { "_id" : { "$minKey" : 1 } }, "max" : { "_id" : { "$maxKey" : 1 } }, "lastmod" :
[08:47:39] <jihyun> Timestamp(1, 0), "lastmodEpoch" : ObjectId("000000000000000000000000") }, "left" : { "min" : { "_id" : { "$minKey" : 1 } }, "max" : { "_id" : NumberLong(0) }, "lastmod" : Timestamp(1, 1), "lastmodEpoch" : ObjectId("526a23713e04684b30d465d3") }, "right" : { "min" : { "_id" : NumberLong(0) }, "max" : { "_id" : { "$maxKey" : 1 } }, "lastmod" : Timestamp(1, 2),
[08:47:39] <jihyun> "lastmodEpoch" : ObjectId("526a23713e04684b30d465d3") } } }
[08:48:04] <jihyun> It seems balancer is running.
[08:49:14] <jihyun> Will the balancer split existing chunks? or does it only move existing chunks to other shards?
[08:49:26] <kali> i'm not sure
[08:50:38] <kali> iirc, when you actually run the initial shard collection command, it blocks while the initial split happens
[08:50:54] <kali> then the balancer starts moving stuff around
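For a collection that was already populated before sharding, the split can also be forced by hand rather than waiting for the balancer; a sketch using the namespace from the discussion (the split points themselves are hypothetical hashed-key values):

```javascript
// Split the chunk containing the matched document at its median point.
sh.splitFind("development.user", { _id: NumberLong(0) })

// Or split at an explicit point in the hashed-key space:
sh.splitAt("development.user", { _id: NumberLong("4611686018427387902") })

// Then re-check how chunks are laid out across shards:
db.getSiblingDB("development").user.getShardDistribution()
sh.status()
```

For collections that are still empty, sharding on a hashed key can pre-split up front via the numInitialChunks option to shardCollection, which avoids this dance entirely.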
[09:19:43] <schovi> Hello around. I am fighting with Mongo on Debian. I installed it via apt-get install mongo-10gen
[09:19:52] <schovi> but when i want to start it. It fails on https://gist.github.com/schovi/7151919
[09:20:51] <schovi> Any idea what it is, why it happens to me, and what to do about it? :)
[09:23:10] <Derick> schovi: odd - perhaps file a bug at jira.mongodb.org instead
[09:24:08] <schovi> Derick: i will, but i hoped somebody could help me quickly :)
[09:25:25] <Derick> google it then? :-)
[09:36:21] <joannac> .What's in your conf file schovi?
[09:36:57] <schovi> joannac: it is default conf which arrived via apt-get
[09:37:09] <schovi> but now i tried to downgrade to 2.2.6 and it works
[09:37:36] <joannac> hrmmm
[09:38:32] <joannac> What were you trying before? 2.4.7?
[09:41:26] <joannac> What do the mongod logs say?
[09:46:12] <_Heisenberg_> Will updates that have been journaled get rolled back in case a primary steps down without replicating that update before?
[09:48:51] <eph3meral> is there any kind of layer similar to elasticsearch where I can provide "boosts" etc to various things so that I get a semi mixed bag of results
[09:49:29] <eph3meral> e.g. I should preferentially see higher paid listings first, but only to a point, after about say 25 miles or so I don't care about paid listings and I would get more use out of listings closer to me
[09:49:46] <eph3meral> layer for mongodb*
[09:49:54] <eph3meral> layer/plugin/extension/built-in-feature, I dunno
[10:14:48] <Nodex> eph3meral : you can boost anything you like
[10:15:07] <Nodex> http://wiki.apache.org/solr/SolrRelevancyFAQ <---
[10:15:21] <eph3meral> Nodex, with what syntax? this is mongodb not solr :)
[10:15:39] <Nodex> oops lol
[10:15:46] <Nodex> wrong chan, my bad
[10:16:01] <Nodex> so to answer your question : No
[10:16:19] <Nodex> you would have to apply a sort
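MongoDB has no relevance-boosting layer like Solr's, but Nodex's "apply a sort" answer can be sketched with the aggregation framework (2.4+): compute a boost flag, then sort on it. Collection, field names, and coordinates below are all hypothetical; it assumes a geospatial index on the location field, since $geoNear must be the first stage:

```javascript
// Paid listings rank first, but only within ~25 miles; beyond that,
// plain distance ordering wins.
db.listings.aggregate([
  { $geoNear: {
      near: [ -122.4, 37.77 ],       // hypothetical query point
      distanceField: "dist",
      spherical: true,
      distanceMultiplier: 3959       // earth radius in miles: radians -> miles
  } },
  { $project: {
      title: 1, paid: 1, dist: 1,
      // boost = paid AND within 25 miles
      boosted: { $cond: [ { $and: [ { $lt: [ "$dist", 25 ] }, "$paid" ] }, 1, 0 ] }
  } },
  { $sort: { boosted: -1, dist: 1 } }
])
```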
[11:14:19] <omnidan> I have a quick question. I'm trying to get a list of objects from a collection by searching for the ID of the reference. The reference is stored in "assigned_to" and I tried {"assigned_to.id": "ID"}, {"assigned_to": {"id": "ID"}} and ObjectId("ID") instead of "ID"; none of these queries worked
[11:16:29] <joannac> erm, show us the "assigned_to" field and the structure of your query? pastebin it pls
[11:20:04] <omnidan> joannac: http://pastebin.com/RGUn3YXC
[11:22:21] <joannac> Try $id?
[11:22:29] <joannac> I'm not really familiar with DBrefs
[11:23:27] <omnidan> joannac: $id gives me an error
[11:23:42] <omnidan> invalid operator
[11:29:30] <joannac> omnidan: http://pastebin.com/evHta0XU
[11:32:42] <joannac> If this isn't a one to many relationship, have you considered using manual references instead?
[11:32:51] <joannac> http://docs.mongodb.org/manual/reference/database-references/
[11:33:28] <omnidan> it is a one to many relationship and I'm using doctrine, joannac
[11:36:52] <joannac> one to many with multiple collections?
[11:36:58] <omnidan> joannac: interesting, I'm trying to get this via doctrine though and used .id in another ref and it worked
[11:38:00] <omnidan> $data->findBy(array("assigned_to.id" => $id)); doesn't work but it works in another collection
[11:40:44] <Nodex> omnidan: can you pastebin a typical document from "feedback" collection?
[11:45:28] <omnidan> Nodex: well I did
[11:45:35] <omnidan> Nodex: http://pastebin.com/RGUn3YXC
[13:51:25] <Neptu> hi, I've been hearing about a max of 100GB per collection on a shard?
[13:51:47] <Neptu> or people having problems
[13:51:51] <Neptu> is this correct?
[14:17:37] <Nodex> wouldn't be much of a shard if it could only take 100GB, now would it
[14:17:41] <Nodex> check your sources
[14:19:04] <briefcaseofstuff> how can i add a query to this aggregate? (ie where itemtype:"something")... http://paste.laravel.com/11Ls
[14:26:34] <Nodex> $match ?
[14:28:20] <briefcaseofstuff> after group right?
[14:28:31] <briefcaseofstuff> i just saw that on a page, ill try it
[14:32:23] <briefcaseofstuff> http://paste.laravel.com/11LG still doesn't work, any ideas?
[14:36:45] <briefcaseofstuff> nvm i got it, i think it's supposed to be an obj in the array passed to aggregate, not part of the first obj
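What briefcaseofstuff landed on: $match is its own stage object in the array passed to aggregate(), conventionally placed before $group so the filter runs early and can use an index. A sketch (collection and field names hypothetical):

```javascript
db.items.aggregate([
  { $match: { itemtype: "something" } },              // filter first
  { $group: { _id: "$itemtype", count: { $sum: 1 } } } // then aggregate
])
```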
[15:14:17] <Nodex> anyone ever used Mac-in-a-cloud type services? - i.e. renting a mac desktop and connecting to it remotely?
[15:18:59] <briefcaseofstuff> how do i get json back not bson with mongojs?
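mongojs hands results back as plain JavaScript objects containing BSON types (ObjectID, Date, and so on), so getting JSON out is a matter of serializing them; a sketch for Node.js, with database and collection names hypothetical:

```javascript
var mongojs = require("mongojs");
var db = mongojs("mydb", ["users"]);   // hypothetical db and collection

db.users.find({}, function (err, docs) {
  if (err) throw err;
  // JSON.stringify serializes the BSON types via their toJSON/toString hooks.
  var json = JSON.stringify(docs);
  console.log(json);
  db.close();
});
```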
[15:42:32] <Xzyx987X> okay, this is bizzarre... why am I getting the error "Can't take a write lock while out of disk space" when my database is located on a disk with over a terrabyte of free space? and also, why is it that when I restart the program that's writing to the database the error goes away, at least for a while, until it starts happening again a few hours later?
[16:13:26] <cym3try> can I have the secondaries on the same server?
[16:14:57] <Nodex> yup
[16:25:36] <cym3try> thanks
[16:35:41] <dharmaturtle> Hi, I'm trying to follow the directions here http://stackoverflow.com/a/3605054/625919
[16:35:47] <dharmaturtle> but I'm getting "exception: 'out' has to be a string or an object"
[16:37:03] <kali> dharmaturtle: Aug 30 '10... that's ancient
[16:37:38] <kali> dharmaturtle: http://docs.mongodb.org/manual/reference/command/mapReduce/#dbcmd.mapReduce
[16:37:38] <dharmaturtle> oh. how should I modify it then? I'm unfamiliar with finalize... very new to all this.
[16:38:16] <kali> dharmaturtle: http://docs.mongodb.org/manual/reference/command/mapReduce/#mapreduce-out-cmd or that
[16:38:24] <dharmaturtle> so is it annoyed that there's no "out"?
[16:38:30] <kali> yeah
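The 2.x command form of mapReduce requires an explicit out, either a target collection name or inline output; a sketch with hypothetical collection and field names:

```javascript
db.runCommand({
  mapReduce: "orders",   // hypothetical source collection
  map: function () { emit(this.cust_id, this.amount); },
  reduce: function (key, values) { return Array.sum(values); },
  out: { inline: 1 }     // or: out: "order_totals" to write to a collection
})
```

Inline output is handy for small result sets; larger ones should go to a collection (optionally with merge or reduce modes).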
[16:47:20] <dharmaturtle> Is it possible to write to a collection from finalize or reduce? When I try it, I get TypeError: Cannot read property 'myCollection' of undefined near { db.myCollection.insert(value)' ",
[16:47:55] <dharmaturtle> and when I try to instantiate Mongo/db, I get: Mongo is not defined near 'conn = new Mongo();
[16:48:20] <dharmaturtle> (I'm running these from scripts, if that makes a difference)
[16:56:06] <kali> this is wrong. you must consider map, reduce and finalize as pure functions, without side effects
[16:56:40] <kali> it was kinda working until 2.2, but it was not intended
[16:57:05] <dharmaturtle> mm, okay thank you.
[16:57:34] <dharmaturtle> one of those few cases where a bug actually is a feature
[16:57:36] <dharmaturtle> :)
[16:58:16] <kali> well, it was well documented that it was not a good idea :)
[17:07:02] <hogepodge> ping Derick
[17:23:53] <t0th_-> hi
[17:23:59] <t0th_-> i am trying to use mongo in the shell
[17:24:15] <t0th_-> i have a "$err" : "unauthorized db:test ns:test.system.namespaces lock type:1 client:127.0.0.1",
[17:24:18] <t0th_-> how can i solve this?
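That $err usually means the server is running with auth enabled and the shell session hasn't authenticated yet; a sketch (credentials hypothetical):

```javascript
// Authenticate against the database you are querying (here, "test"):
use test
db.auth("myuser", "mypassword")

// Or connect pre-authenticated from the command line:
//   mongo -u myuser -p mypassword test
```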
[18:36:46] <orngchkn> Does findAndModify in 2.4.7 still take a global write lock?
[18:37:25] <orngchkn> I'm currently crushing the crap out of our mongodb by using it with delayed::job (which uses findAndModify to atomically lock jobs in the queue)
[18:38:00] <orngchkn> Currently sitting at 100% lock in mongostat (with 277 connections)
[18:38:56] <orngchkn> I'm currently on 2.4.6
[19:45:42] <dweeb_> If I run mongod using the nojournal=true option. Does mongo still store the data in the filesystem or are all data lost if I quit the process?
[19:53:32] <dweeb_> anyone?
[19:53:34] <kali> if you terminate it nicely, data will be permanent
[19:53:40] <dweeb_> but if I don't
[19:54:02] <kali> you'll need a repair, and data may suffer various forms of corruption
[19:56:10] <bjori> dweeb_: data is stored on the filesystem yes
[19:56:33] <bjori> dweeb_: but if mongod or the server crashes before it flushes the new writes to the filesystem you may lose some commits
[19:56:34] <dweeb_> I accidentally removed data from my db. So I restored the server from a snapshot. When the server restarted mongodb couldn't start so I removed the mongodb.lock file and now all my databases are empty. I ran repair but all dbs are still empty.
[19:56:44] <bjori> up to 100ms worth of writes
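The trade-off bjori describes is controlled in mongod.conf (2.4-era INI style); a sketch of the relevant settings:

```ini
# Journaling on: crash recovery at the cost of extra write overhead.
journal = true

# Group-commit interval in milliseconds; on a crash, roughly this much
# of the most recent journaled writes is the exposure window.
journalCommitInterval = 100
```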
[19:57:06] <bjori> what
[19:57:20] <bjori> swaagie: are you sure your snapshot works?
[19:57:22] <bjori> bleh
[19:57:28] <bjori> dweeb_: I mean ^^
[19:57:41] <dweeb_> bjori: no, but I hope so. What can be wrong with the snapshot you think?
[19:58:09] <bjori> roughly a billion and forty things
[19:58:24] <dweeb_> bjori: the snapshot is taken by the server provider so I guess it works
[19:58:31] <bjori> dweeb_: shut down mongod, restore the data, check the filesize, startup mongod again
[19:58:49] <bjori> make sure you use the correct --dbpath
[19:58:59] <bjori> and if you were using dbperdir, that too
[19:59:04] <kali> and that the files belong to the right user
[20:00:33] <dweeb_> bjori: So I restore my server from snapshot. Shut down mongod process. Remove the lock file. Run mongod with --repair option. And then start mongod again with right dbpath option and file permissions. is that all?
[20:01:17] <Derick> you need to shutdown mongod before you restore
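The sequence dweeb_ and Derick converge on, as a sketch; paths, service name, and user are assumptions for a Debian-style install:

```shell
# 1. Stop mongod BEFORE restoring the snapshot (Derick's point).
sudo service mongodb stop

# 2. Restore the snapshot over the dbpath (provider-specific step).

# 3. Remove the stale lock file left by the unclean shutdown.
sudo rm /var/lib/mongodb/mongod.lock

# 4. Repair, running as the mongodb user so file ownership stays correct
#    (kali's point about file ownership).
sudo -u mongodb mongod --dbpath /var/lib/mongodb --repair

# 5. Start normally and verify the databases are populated.
sudo service mongodb start
```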
[20:23:12] <orngchkn> Given this query (findAndModify) https://gist.github.com/contentfree/cdfa6b8c4e0c49e63b49 could anyone help me adjust the indexes so they, ya know, get used? This query is generated by a ruby gem called delayed_job_mongoid and I'd love to contribute a fix back to their project, if possible.
[20:23:30] <orngchkn> (Sorry that GH put the files out of order)
[21:02:59] <ygnk> Would anyone know how to write pig schema to load array of ints frmo mongo for mongo-hadoop?
[21:03:12] <ygnk> i have tried bag{int}
[21:03:16] <ygnk> and bag{chararray}
[22:50:48] <eldub> in a 3 node replica set... does 1 node NEED to be arbiter? Or will failover still be handled in a 3 node replica set
[23:02:24] <jyee> eldub: no, in a 3 node replset, they can all be regular nodes (1 primary, 2 secondaries).
[23:02:45] <jyee> in fact, that's probably best.
[23:04:00] <jyee> assuming you had 2 die, then you'd at least be able to read (just not write, since you wouldn't be able to get a voting majority).
[23:06:57] <eldub> awesome -- thank you
[23:07:33] <eldub> wait a sec... jyee if only 1 goes down... say the master... then there won't be a majority to vote and elect a new primary....?
[23:07:56] <eldub> s/master/primary/g
[23:13:05] <jyee> in a 3 node set, if primary goes down, you have 2 secondaries to vote and one will be elected primary
[23:13:57] <jyee> because the overall set is still 3, so 2/3rds majority if they agree… which they should, unless something odd happens
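The setup jyee describes, a three-member set with no arbiter, can be sketched like this (set name and hostnames are hypothetical):

```javascript
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "db1.example.com:27017" },
    { _id: 1, host: "db2.example.com:27017" },
    { _id: 2, host: "db3.example.com:27017" }
  ]
})
// With one member down, the surviving 2 of 3 are still a majority and
// can elect a primary; with two down, the last member stays read-only.
```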
[23:14:31] <eldub> ahh
[23:14:38] <eldub> I was under the impression that once primary was lost
[23:14:49] <eldub> each of the 2 nodes say "me me" and then you have a split decision.
[23:15:36] <eldub> exit
[23:15:36] <eldub> exit
[23:15:37] <eldub> exit
[23:15:49] <jyee> heh.
[23:16:00] <jyee> third time's the charm?
[23:20:21] <a|3x> hi
[23:22:41] <a|3x> when i use the db.copyDatabase() command on localhost (3 gb database), it seems to execute very slowly; it looks like it copies the data quickly but then spends a long time reindexing
[23:23:20] <a|3x> any idea how i can copy indexes as well?
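db.copyDatabase() does carry the indexes over, but it recreates them on the destination, which is the slow rebuild a|3x is seeing. mongodump/mongorestore behaves similarly (index definitions are dumped and rebuilt after the data load), but runs offline and is easier to monitor; a sketch with hypothetical names and paths:

```shell
# Dump the source database; index definitions are saved alongside the data.
mongodump --db mydb --out /tmp/dump

# Restore under a new database name; indexes are rebuilt per collection
# after its data is loaded.
mongorestore --db mydb_copy /tmp/dump/mydb
```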