PMXBOT Log file Viewer


#mongodb logs for Monday the 8th of July, 2013

[06:21:28] <trietptm> Hello everybody, I'm using elasticsearch with mongodb. After I restart my computer and insert new documents to the collection, elasticsearch doesn't seem to index the new documents.
[06:23:08] <trietptm> I see some people say to remove the elasticsearch river, but I don't know how?
[06:27:33] <SomeoneWeird> say I have an array of objectIds, is there a way to pull all of those objects from a collection in 1 query ?
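SomeoneWeird's question has a standard answer: the `$in` operator matches any value in a supplied array, so a single find() fetches all the documents at once. A minimal mongo-shell sketch (the collection name and ids are illustrative):

```
// Fetch every document whose _id appears in the array, in one query.
var ids = [ObjectId("51da1f7c0000000000000001"),
           ObjectId("51da1f7c0000000000000002")];
db.items.find({ _id: { $in: ids } });
```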
[06:35:12] <trietptm> I follow these 2 links: http://satishgandham.com/2012/09/a-complete-guide-to-integrating-mongodb-with-elastic-search/ and http://www.elasticsearch.org/guide/reference/river/
[06:35:21] <trietptm> I have tried curl -XDELETE 'http://localhost:9200/_river/mongodb/_meta' and curl -XDELETE 'http://localhost:9200/_river/mongodb/'
[07:33:45] <solars> hey, can anyone suggest a monitoring tool for mongodb? (web based) - found mongobird, is it good?
[07:41:24] <[AD]Turbo> hi there
[09:07:24] <sinclair|net> [AD]Turbo: HAI!
[09:07:45] <meson10> I am using mgo. What is the best practice for database pooling? Should I store a Session and return a Copy() every time I need a connection to the database?
[09:14:31] <Nodex> the MongoClient handles pooling for you normally
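For the mgo question specifically: the usual pattern is exactly the one meson10 describes — dial once at startup, keep the root `*mgo.Session`, and hand each unit of work a `Copy()` that it closes when done; mgo pools the underlying sockets behind the root session. A sketch under those assumptions (the connection string, database, and collection names are illustrative):

```go
package main

import "labix.org/v2/mgo" // the mgo import path current in 2013

var rootSession *mgo.Session

func main() {
	var err error
	// Dial once at startup; mgo maintains the socket pool behind this session.
	rootSession, err = mgo.Dial("localhost")
	if err != nil {
		panic(err)
	}
	defer rootSession.Close()
	doWork()
}

// Each unit of work copies the root session and closes the copy,
// which returns its socket to the shared pool.
func doWork() {
	s := rootSession.Copy()
	defer s.Close()
	c := s.DB("test").C("things")
	_ = c.Insert(map[string]interface{}{"ok": true})
}
```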
[09:53:57] <haqe17> sorry if this is a naive question, but how hard is it to communicate with a mongodb over a network?
[10:10:49] <rspijker> haqe17: about as hard as it is to communicate locally
[10:13:50] <haqe17> ok
[12:00:52] <Infin1ty> Anyone know how to upgrade the metadata correctly upgrading from 2.2.5 to 2.4.5 using config metadata since 1.x? there's no clear procedure for that
[12:14:01] <kobigurk> hi
[12:15:15] <kobigurk> we're running with revision 2.4.4 today in production, and I was interested to see what's new in 2.4.5
[12:15:36] <kobigurk> and couldn't find details anywhere
[12:15:45] <kobigurk> could you please help me find out?
[12:15:51] <ron> https://jira.mongodb.org/secure/IssueNavigator.jspa?reset=true&jqlQuery=project+%3D+SERVER+AND+fixVersion+%3D+%222.4.5%22+ORDER+BY+status+DESC%2C+priority+DESC
[12:16:18] <k610> is --collection broken in mongofiles ?
[12:16:23] <ron> if you go to the downloads page, there's a changelog link.
[12:18:54] <k610> how should i split my files in collections ?
[12:19:09] <kobigurk> thanks!
[12:29:58] <Maro> Hi, can anyone explain why you need to have a majority of voters in Site A in a geographically distributed replica set when you can set the priority of a member in Site A to be higher? I have 1 member in Site A, 1 in B, 1 in C and the priority of the members in B and C are < 1...
[12:35:55] <mstastny> Hi, I am testing disaster recovery of one lost config server in a sharded environment.
[12:35:55] <mstastny> 1. in a lab I have 3 config servers, single shard (either single machine or replset, does not make any difference).
[12:35:55] <mstastny> 2. now I turn off the first config server (to simulate the disaster)
[12:35:55] <mstastny> 3. according to the documentation (http://docs.mongodb.org/manual/tutorial/replace-config-server/), now I should:
[12:35:55] <mstastny> - deploy a new server with the same IP/DNS name, do not run mongod
[12:35:56] <mstastny> - disable the balancing, to proceed with further steps
[12:35:57] <mstastny> The problem is that I am not able to disable the balancing on mongos, I get the following error:
[12:35:57] <mstastny> mongos> sh.setBalancerState(false)
[12:35:58] <mstastny> SyncClusterConnection::udpate prepare failed: 9001 socket exception [6] server [config-1:27019] config-1:27019:{}
[12:35:59] <mstastny> config-1:27019 is the config server I've "lost", I am running MongoDB 2.4.5
[12:36:00] <mstastny> What am I doing wrong?
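For context on the error above: sh.setBalancerState(false) is just an upsert into the `settings` collection of the `config` database, and under the 2.4-era mirrored config server scheme (SCCC) metadata writes go to all three config servers in a two-phase commit, so any write fails while one of them is unreachable, even though reads keep working. The manual equivalent, run through mongos, hits the same wall:

```
use config
db.settings.update({ _id: "balancer" }, { $set: { stopped: true } }, true)
// Fails with the same socket exception while config-1 is down, because
// config metadata writes require all three config servers to be reachable.
```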
[12:45:45] <n06> hello all, can anyone help me debug an error with my config servers
[12:46:03] <n06> they seem to be out of sync and I'm not sure how to rectify that
[12:46:46] <n06> ERROR: config servers not in sync! config servers [redacted] differ#012chunks: "f1d513b6e60138d7f281d8fa5d635539"#011chunks: "86c49e48fed9d6bd8eaac8cc19b288f2"#012databases: "3cdb90eafc32b5fe9473d6db4500f9e6"#011databases: "3cdb90eafc32b5fe9473d6db4500f9e6"
[13:14:10] <n06> is there a good way to find out which of two config servers has the correct / up-to-date data?
[13:14:31] <n06> the out of sync issue happened over the weekend so i think they've both moved on
[13:17:35] <Zitter> hi, any tutorial on how to use mongodb to store curriculum vitae/resume? tia
[13:25:04] <rspijker> n06: have a look at the changelog collection on both and see which has the latest timestamp
[13:25:21] <n06> rspijker, thanks i'll check it out now
[13:25:29] <rspijker> n06: that one should be the one with the latest info. Then I think your best bet is to restore that data to the other nodes
[13:25:57] <n06> gotcha, so find the latest data, and rsync to the stale config server
[13:27:22] <rspijker> probably want to make sure nothing is happening in terms of chunk migration or anything when you are doing this… Otherwise it could very well destroy your routing
[13:28:03] <n06> yeah i took a look at the config server migration docs so I feel ok with that, i just need to find the right data
[13:28:14] <n06> it looks like both of these servers have the latest stuff
[13:28:24] <rspijker> then those two should be in sync
[13:28:45] <rspijker> either way, back up both just in case I am wrong ;)
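The check rspijker suggests, run directly against each config server in turn (hostnames and the port are illustrative):

```
// mongo config-1:27019/config   (repeat for each config server)
db.changelog.find().sort({ time: -1 }).limit(1);
// The server with the newest `time` has the freshest metadata.
// Running db.runCommand({ dbHash: 1 }) against the config database on
// each server is another way to see which of the three agree.
```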
[13:31:09] <n06> rspijker, its so weird though because i keep getting these sync errors in my log
[13:31:58] <rspijker> how many cfg servers do you have?
[13:32:02] <n06> 3
[13:33:09] <rspijker> ok, so why can't two be in sync and the 3rd one cause the errors?
[13:33:35] <n06> it can, but since this is production i would rather have no errors :p
[13:33:41] <rspijker> although cfg servers going out of sync should be a fairly rare thing
[13:34:00] <rspijker> as in, should only happen if a node goes down during chunk migration or something like that (afaik)
[13:34:09] <n06> yeah, and i also think its affecting performance
[13:34:22] <n06> because everything slowed way down over the weekend
[13:35:28] <rspijker> I'm not sure why that would be the case… Maybe something during routing, but I simply don't know enough about the routing routine to be sure
[13:35:50] <n06> thats ok, thanks very much for the help anyways, if i figure it out ill let you know
[13:36:51] <rspijker> cool
[14:39:59] <DestinyAwaits> Need Help?
[14:40:46] <DestinyAwaits> I have to start learning MongoDB from a Java & Scala perspective. Where do I start? Is there any book available that can help me get started?
[14:49:29] <DestinyAwaits> anyone around?
[15:09:54] <harenson> DestinyAwaits: education.10gen.com
[15:10:12] <harenson> there is a course: "MongoDB for Java developers"
[15:10:25] <DestinyAwaits> for Scala?
[15:10:30] <harenson> DestinyAwaits: https://education.10gen.com/courses/10gen/M101J/2013_July/about
[15:10:49] <DestinyAwaits> and will that course teach from the basics? as I don't know how mongodb works
[15:10:50] <DestinyAwaits> :)
[15:11:19] <harenson> DestinyAwaits: yeah, from nothing to intermediate (I guess)
[15:11:45] <DestinyAwaits> ok that will do.. Thanks for that.. and is there something for Scala as well?
[15:12:18] <harenson> DestinyAwaits: there is the catalog => https://education.10gen.com/
[15:13:12] <DestinyAwaits> ah it has scala too my bad
[15:13:15] <DestinyAwaits> thanks harenson
[15:13:16] <DestinyAwaits> :)
[15:13:34] <harenson> DestinyAwaits: anytime
[15:14:29] <DestinyAwaits> starts july 29 harenson ?
[15:15:08] <harenson> DestinyAwaits: yes
[15:15:39] <DestinyAwaits> :( ok count days.. :(
[15:20:16] <harenson> DestinyAwaits: you should start here http://try.mongodb.org/ (imho)
[15:22:33] <harenson> DestinyAwaits: then, continue with this http://lmgtfy.com/?q=scala+%2B+mongodb , if you wish, of course ;)
[15:23:03] <kali> harenson: frankly, installing mongodb is so easy, i'm not sure try.m.o is still really relevant
[15:23:33] <harenson> kali: it could heps
[15:23:37] <harenson> it teaches the basics
[15:24:19] <harenson> helps*
[15:25:01] <kali> harenson: i've stopped referring people to it, because people were asking questions about discrepancies between its old syntax and the documentation
[15:25:07] <kali> harenson: but maybe it helps.
[15:25:26] <harenson> kali: oh, about old syntax, you're right
[15:25:38] <harenson> kali: but, maybe it helps xD
[15:26:01] <DestinyAwaits> harenson: thanks.. I have never installed it and that too I'm on windows first let me see if I can install and make it run
[15:26:33] <harenson> DestinyAwaits: you're welcome
[16:07:59] <andrefs> hey everyone
[16:08:34] <andrefs> i'm trying to do a mongorestore with a --filter
[16:08:45] <andrefs> trying to exclude all documents older than a given date
[16:08:53] <andrefs> i have a field DateTime
[16:10:10] <andrefs> i tried mongorestore -d db /path/to/dump --filter '{ "DateTime" : { $gt : new ISODate("2012-06-01") } }'
[16:10:26] <andrefs> but I got assertion: 16619 code FailedToParse: FailedToParse: Expecting '}' or ',': offset:32
[16:10:31] <andrefs> can anyone help me?
[16:10:37] <kali> yeah.
[16:10:41] <kali> stay put
[16:10:48] <andrefs> kali: tnks
[16:11:56] <kali> andrefs: http://docs.mongodb.org/manual/reference/mongodb-extended-json/
[16:12:02] <kali> one of these variants should work
[16:12:29] <kali> andrefs: (i think jsonp, but it might be strict)
[16:13:19] <andrefs> kali: thanks, i'll try those
[16:24:11] <andrefs> kali: for future reference: mongorestore -d db /path/to/dump --filter '{ "DateTime" : { "$gt" : "2012-06-01" } }' # ;)
[16:26:20] <kali> andrefs: mmm it means you're storing it as a string
[16:27:05] <kali> andrefs: or it means i don't know what --filter does which is possible...
[16:27:22] <andrefs> kali: nop, you're right, this is still not working
[16:27:50] <andrefs> it stopped complaining but didn't restore anything
[16:27:55] <kali> :)
[16:28:06] <kali> have you tried the jsonp syntax variant ?
[16:29:19] <andrefs> kali: not yet, will try now
[16:32:33] <andrefs> kali: hmm, no --jsonp option in mongorestore
[16:32:52] <kali> you don't need the option, just look at the way to specify the date
[16:33:34] <kali> new Date(1373301166000)
[16:33:40] <kali> something like that, i guess
[16:33:58] <kali> new Date(1338508800000) for 2012-06-01
[16:36:30] <andrefs> kali: yup, it's working!
[16:36:39] <andrefs> kali: thanks! :)
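Putting the thread's conclusion in one place: the --filter argument rejects the `new ISODate(...)` form, but per the exchange above the millisecond-epoch `Date` form parses and matches; 1338508800000 is 2012-06-01T00:00:00 UTC. The command that worked (db name and dump path as in the thread):

```
mongorestore -d db /path/to/dump \
  --filter '{ "DateTime" : { "$gt" : new Date(1338508800000) } }'
```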
[17:19:46] <awestwell> hey all
[17:20:10] <awestwell> Is it possible to do an insert in a eval and return the result as a dbobject from the eval?
[17:20:14] <awestwell> like so
[17:20:16] <awestwell> function() {
[17:20:16] <awestwell> var value = { 'value' : 'value 1', 'counter': 1};
[17:20:16] <awestwell> db.sample.insert(value);
[17:20:16] <awestwell> return value;
[17:20:16] <awestwell> }
[17:21:17] <awestwell> basically I want to have a method that runs on the server in an eval and be able to return the DBObject as a result from the eval
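What awestwell describes does work: db.eval() ships the function to the server and hands back whatever the function returns, so the inserted object comes back as the eval result. Worth noting that in this era eval takes a global write lock, so it is best kept off hot paths. A shell sketch (collection name as in the snippet above):

```
var result = db.eval(function() {
    var value = { value: "value 1", counter: 1 };
    db.sample.insert(value);
    return value;   // returned to the client as the eval result
});
```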
[18:16:11] <baconmania> Hey guys, quick question about denormalized schemas. If I have a collection where each document is structured like
[18:16:43] <baconmania> { a: { something, something, something}, b: .., c:...}
[18:16:56] <baconmania> and I wanted to get a list of all the "a" objects in the collection
[18:17:13] <baconmania> would I just do collection.find() and manually collect all the "a" objects into a single variable? Or is there a better way?
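The manual approach baconmania describes is workable for small collections; adding a projection at least limits the transfer to the "a" field (the collection name is illustrative):

```
// Ask the server for only "a", then collect client-side:
var as = db.things.find({}, { a: 1, _id: 0 }).toArray()
                  .map(function(d) { return d.a; });
```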
[18:24:50] <kali> baconmania: distinct, group, the aggregation framework, or map/reduce
[18:25:15] <kali> baconmania: OR denormalisation
[18:26:43] <baconmania> kali: when you say denormalization, you mean have a separate collection just for the "a" objects, and then the original collection would just reference the new "a" collection?
[18:26:53] <kali> yes
[18:26:58] <baconmania> wouldn't that be normalization
[18:27:42] <baconmania> either way I see what you mean
[18:29:03] <perplexa> aggregation framework, check $unwind
[18:33:15] <kali> baconmania: what i was thinking was: A is a scalar, having a collection with all A values and not touching your current collection is a denormalisation... now if A is a big thing and you're going to reference it, then yes, we'd better talk about re-normalisation :)
[18:33:41] <baconmania> right
[18:34:04] <baconmania> with regards to unwind, I'm a bit unsure whether it applies here
[18:34:06] <baconmania> to clarify,
[18:34:19] <baconmania> if I have a collection where the documents are
[18:34:20] <baconmania> { a:[1,2,3], ...}, { a: [4,5,6], ...}, ...
[18:34:30] <baconmania> I want to get the result
[18:34:37] <kali> $unwind: $a, then $group: { _id: "$a" } will give you unique values of a
[18:34:38] <baconmania> a: [1,2,3,4,5,6]
[18:35:16] <baconmania> ahh okay
[18:35:27] <kali> within aggregation framework limits of course (it will break if there are too many A values)
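kali's pipeline, spelled out in full (2.4-era shell; `things` is an illustrative collection name):

```
// Unique values of a:
db.things.aggregate([
    { $unwind: "$a" },
    { $group: { _id: "$a" } }
]);
// Or, to gather everything into one array in a single result document:
db.things.aggregate([
    { $unwind: "$a" },
    { $group: { _id: null, all: { $addToSet: "$a" } } }
]);
// 2.4 returns aggregation results inline in one document, hence the
// size limit kali mentions.
```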
[18:35:51] <baconmania> kali: in that case, what is the "best" way to do it for large numbers of A values?
[18:35:58] <baconmania> separate collection for A?
[18:36:13] <pngl> What is the database command underlying the shell method find()? findAndModify seems to only find/affect a single record.
[18:37:26] <kali> baconmania: map/reduce would still work... but be an order of magnitude slower than AF (itself obviously slower than the two-collection form)
[18:38:09] <kali> baconmania: it all comes down to the read/write ratio, the query frequencies and the acceptable response time
[18:38:34] <baconmania> indeed
[18:38:43] <baconmania> thanks kali & perplexa! :)
[18:40:46] <kali> baconmania: also... if the collection is unsharded, "distinct" with an index on a might be the easiest way
[18:42:49] <baconmania> kali: oh wow yeah, distinct definitely looks to be the easiest
[18:42:59] <baconmania> kali: but I assume that it'll only return the values from a single shard?
[18:43:19] <kali> baconmania: it will rollover and die on a sharded collection :)
[18:43:30] <baconmania> kali: fun!
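The distinct form kali suggests for the unsharded case, with the supporting index (collection name illustrative):

```
db.things.ensureIndex({ a: 1 });   // lets distinct read from the index
db.things.distinct("a");           // one array of unique values
// Note: distinct returns its result as a single document, so the full
// set of unique values has to fit in the 16MB document limit.
```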
[18:53:32] <jcalvinowens> Hello all, quick question: If I do a regex search like "^.+words", will Mongo use an index on the field? It's a bit hard to tell if it is or not...
[18:54:09] <jcalvinowens> But once many users are hitting it, not having the index could be a big drag on performance
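On jcalvinowens's question: only a case-sensitive regex anchored to a literal prefix (e.g. /^words/) can seek into an index range; "^.+words" starts with a wildcard, so there is no literal prefix and every key still has to be examined. explain() makes the difference visible (field and collection names are illustrative; the cursor names match 2.4-era output):

```
db.things.ensureIndex({ field: 1 });
db.things.find({ field: /^words/ }).explain();
// cursor: "BtreeCursor field_1" with a tight key range
db.things.find({ field: /^.+words/ }).explain();
// no literal prefix: all keys (or all documents) get scanned
```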
[19:52:42] <dchau> Hi, I work on a project using mongodb. I experience a strange behavior. A document containing a sub-document is inserted. Just after the insert, finding by Id works but finding with the exact match of the sub-document returns no result. This problem only occurs a few moment just after the insert.
[19:53:21] <dchau> when I execute the same request later, it works
[19:53:39] <dchau> so at runtime it fails and when debugging, it works...
[19:54:09] <dchau> any idea?
[20:23:38] <insel> what is the main-reason for switch from a sql-like db to mongodb.. my friend google gave me nothing (useful)
[20:33:08] <kali> insel: it's webscale
[21:47:55] <SpNg_> I'm trying to convert a standalone install to a replica set. when I do rs.status() in my mongo shell I get "errmsg" : "can't currently get local.system.replset config from self or any seed (EMPTYUNREACHABLE)". tailing the mongod.log I see getaddrinfo("ip-10-0-0-145") failed: Name or service not known. this is on AWS. any ideas what could be going on?
[21:50:34] <joshua> Can you connect from one server to the other?
[21:50:52] <joshua> Bring up a mongo shell on that server and connect to the other member of the replica set
[21:51:15] <joshua> Also, I think it is suggested to use hostnames and not IP addresses. It makes reconfiguration easier
[21:52:40] <SpNg_> joshua: i can connect to the mongo instance from another one of the servers on my network
[21:53:10] <SpNg_> I'm using a mongod.conf file to control everything.
[21:54:48] <joshua> Whats your config look like?
[21:54:58] <joshua> Can you resolve that hostname from the server?
[21:56:00] <SpNg_> I just found a fix
[21:56:01] <SpNg_> http://www.devthought.com/2012/09/18/fixing-mongodb-all-members-and-seeds-must-be-reachable-to-initiate-set/
[21:56:19] <SpNg_> looks like a loopback that needs to be put in place for AWS servers
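For the record, the fix in that link amounts to letting the instance resolve its own EC2-internal hostname, e.g. by mapping it to the loopback address in /etc/hosts (hostname taken from the log line quoted above):

```
# /etc/hosts on the instance
127.0.0.1   localhost ip-10-0-0-145
```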