[00:18:54] <bjori> it should be fine with replica sets, just start with the secondaries and execute stepDown on the primary before doing apt-get upgrade on the primary
[00:19:08] <bjori> but in a large sharded environment things become a little bit more tricky :)
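A minimal sketch of the rolling-upgrade order bjori describes, run from a mongo shell on the primary once all secondaries have been upgraded and rejoined; the 60-second argument is just an example:

    rs.stepDown(60)   // primary steps down and will not seek re-election for 60 seconds
    rs.status()       // confirm a new primary has been elected before upgrading this node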
[07:58:01] <Kosch> Hm. I've got a replica set with a master and a slave. I stopped the master, and rs.status() on the secondary node shows that it recognizes the master is gone, but the secondary does not become primary. Inside the log I see "replSet can't see a majority, will not try to elect self". So the secondary never becomes primary?
[07:58:35] <platonic> mmmnope. think it was a bit of a hack
[07:59:00] <platonic> doesn't directly relate to mongo (i.e. doesn't actually need a mongodb to install). but actually, i'll try to dig deep in those to see if i can't figure out what's going on
[07:59:33] <double_p> Kosch: you need an odd number of replSet members (at least 3) to hold a vote. you can use "empty" mongod instances to fulfill that
[08:00:46] <Kosch> double_p: the arbiter stuff, isn't it?
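Yes - an arbiter is exactly such an "empty", vote-only member. A minimal sketch, with host names and paths as placeholders:

    // on a spare machine: mongod --replSet rs0 --port 27017 --dbpath /srv/arbiter
    // then, from a shell on the primary:
    rs.addArb("arbiter-host:27017")
    rs.status()   // the two data-bearing nodes plus the arbiter now form a 3-member majority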
[08:01:20] <platonic> think mongostat can at least help out a bit
[08:01:48] <platonic> anyone care to see if they can point me in the right direction with my issue? it's a really simple one, one that i'm sure you've never heard before.. 'my mongo is actually running insanely slow and i don't know why'
[08:04:21] <platonic> essentially, right now i'm hammering it as much as i can with updates (upserts).
[08:04:32] <platonic> and it's going at between 8 and 13.
[08:04:47] <platonic> refresh rate is per second on that default mongostat so i presume that's actually the performance i'm getting
[08:05:00] <double_p> Kosch: i think it depends on the driver. but usually the replset members will be known to the client. if you leave some out in the configuration, you must ensure that the ones in the configuration are alive at client startup
[08:05:23] <Nodex> platonic : does it start off slow or gradually get slower?
[08:05:37] <double_p> like, if you add members to the replset while the client is running, at some point (on refresh or whatever) they'll be contacted by the clients
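A hedged sketch of what that means for a node.js client of that era; host names, db name and set name are placeholders. Listing several seed members in the connection string lets the driver discover the set even if the first host (e.g. a stopped primary) is unreachable:

    var MongoClient = require('mongodb').MongoClient;
    MongoClient.connect(
      'mongodb://host1:27017,host2:27017,host3:27017/mydb?replicaSet=rs0',
      function (err, db) {
        // the driver keeps tracking the set; operations resume once a new primary is elected
      });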
[08:05:41] <platonic> i can wipe it and see what happens
[08:05:48] <platonic> right now it has 96k documents in it
[08:05:59] <platonic> i don't seem to remember it being /this/ slow in the beginning
[08:06:08] <platonic> one thing that concerns me though is that i don't see mongo actually taking any memory
[08:07:12] <Kosch> double_p: ic. That's what I thought. I use the NodeJS MongoDB driver... but after shutdown of the primary, the application fails. Could this be caused by there not being a primary at the moment?
[08:07:13] <double_p> free memory w/ mongo? HOW did that happen? :D
[08:07:28] <platonic> is what the process is using..
[08:08:31] <platonic> ulimits for the pid if anyone's interested: http://pastie.org/private/mnrjmn8tqo9rtso3zpcniw
[08:09:27] <platonic> nodex: if i wipe it now and restart my insertion, any tips on how to look at it? can't use mms unfortunately (firewall issues for now.. :/)
[08:09:33] <Nodex> is there an index on the collection / query you're inserting?
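For reference, a quick way to check for and add one from the shell; the collection and field names are guesses based on the upsert platonic describes later (fullAddress):

    db.addresses.getIndexes()                        // list the indexes that exist
    db.addresses.ensureIndex({ fullAddress: 1 })     // index the field the upsert query matches on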
[09:07:38] <ggherdov> Hi again. Getting a bunch of errors while compiling the tutorial for the C++ driver. Here's the code, the compilation command, and the errors: http://bpaste.net/show/vXN0IXjthUUC64FUVDog/ what am I doing wrong ?
[09:08:08] <ggherdov> BTW the ubuntu package for the C++ driver is mongodb-dev
[09:09:57] <remonvv> Does anyone know where I can find a detailed spec about what happens and what guarantees are given during removeShard/addShard?
[09:49:56] <dsds__> Hi all! I'm using nodejs + mongodb (and the native driver). Should I create a new MongoClient for every request I receive in the server? Right now I'm using a single MongoClient instance shared by every request to communicate with mongo, and it works quite well actually, but I'm used to MySQL where you must end connections as soon as possible, and something seems off...
[10:03:25] <remonvv> dsds__, I have no experience with node.js but the answer is almost certainly no. Never do heavy lifting on a per-request basis.
[10:07:37] <dsds__> remonvv, have you got experience with any mongo driver? I suspect the behavior must be similar in other drivers
[10:19:20] <Nodex> in the PHP driver the driver maintains the connection pool
[10:19:40] <Nodex> i.e. you call connect and it either reuses an idle connection or it makes a new one
[10:42:32] <dsds__> well, it seems that actually you only open the client once, and then reuse the db in each request (according to the nodejs driver author), so I guess my approach is OK, thanks for the answers
[10:43:59] <remonvv> dsds__, yes, and in most other drivers it's a singleton, heavy-weight object
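A minimal sketch of that reuse pattern for the node.js native driver of that era; all names are placeholders:

    var MongoClient = require('mongodb').MongoClient;
    var db;                                          // one shared handle for the whole process

    MongoClient.connect('mongodb://localhost:27017/mydb', function (err, database) {
      if (err) throw err;
      db = database;
      startServer();                                 // hypothetical: accept requests only once connected
    });

    function handleRequest(req, res) {
      // reuse the same db object; the driver's internal connection pool handles concurrency
      db.collection('users').findOne({ _id: req.userId }, function (err, doc) {
        // ...
      });
    }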
[10:56:31] <khushildep> Can any one help with this (http://pastebin.com/h3FaTDcP) error please?
[11:08:19] <remonvv> khushildep, what kind of help are you expecting? There's zero information in that paste other than that something's crashing.
[11:14:37] <Nodex> something looks like it's not working in that paste to me :P
[11:15:45] <Kosch> have you tried turning it off and on again? *scnr*
[11:37:20] <dirk_> Hi, anyone here that could possibly help me out with a mapreduce issue on a sharded collection?
[11:41:52] <Nodex> better to just ask the question
[11:42:03] <khushildep> yeah - I'm basically trying to start it and it's just dumping out - it's a really minimal config - just a database path and a log path.
[11:57:08] <dirk_> Ok here goes. I am doing a map reduce on a collection with out: reduce set to a large sharded collection. But judging by postProcessCounts, the data never seems to get to the shards.
[11:57:41] <dirk_> If I do the same against a small collection, with the same indexes and also sharded, postProcessCounts shows that it has written the new data.
[11:58:36] <dirk_> The only difference I can come up with is that one of the collections is small and the other is large.
[12:07:08] <dirk_> It somehow just doesn't get to the reduce stage when the out: reduce option points to a large collection. I am at a loss as to how to fix this.
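For reference, the shape of the call dirk_ seems to be describing, as a hedged sketch; collection names and the map/reduce functions are placeholders, and the existing output collection has to be sharded for the sharded option to apply:

    db.sourceColl.mapReduce(
      mapFn,
      reduceFn,
      { out: { reduce: "largeTargetColl", sharded: true } }
    )
    // the command result (including postProcessCounts) reports how much was
    // re-reduced into the existing output collection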
[12:33:06] <ggherdov__> Hi all, problems installing pymongo on Ubuntu. I can "import pymongo", but not "from pymongo import MongoClient" - see http://bpaste.net/show/6Y9mHc9z2gdaRQDlcV40/ . Is the client class in another package? What is my mistake?
[13:01:56] <harenson> ggherdov__: hi, how are you installing pymongo?
[13:09:02] <ggherdov__> harenson: via apt-get. I am on ubuntu 13.04, which ships pymongo 2.2-4. The class should be in the file mongo_client.py https://github.com/mongodb/mongo-python-driver/blob/master/pymongo/mongo_client.py
[13:12:03] <ggherdov__> harenson: I just finished this report that I hoped to hand to 10gen somehow to get some hint http://bpaste.net/show/wMp96gy8I66Ix1tX9OEp/
[13:12:03] <ggherdov__> but maybe it's as simple as that: pymongo 2.2-4 *does not have* MongoClient
[13:13:26] <harenson> ggherdov__: you have to exit from the current ipython, bpython or whatever you're using
[13:13:38] <harenson> then try again with: from pymongo import MongoClient
[13:15:23] <ggherdov__> harenson: done that, but no luck. But this is NOT surprising: the file mongo_client.py is *not* on my system. The MongoClient class is in that file. No file, no class.
[13:23:10] <d1b1> morning. Is this the correct place to pose an index question?
[13:37:17] <harenson> ggherdov__: try opening a new shell
[13:37:45] <harenson> ggherdov__: or create a new .py file
[13:41:17] <harenson> ggherdov__: create a new .py file with this content http://pastebin.com/gj2vZYYG
[13:42:14] <harenson> then execute it, e.g.: python /path/to/file.py
[13:53:32] <pinvok3> Hey there, I'm pretty new to mongodb and I come from a long time of SQL development, so it's hard to get the whole syntax. I have this layout: http://pastebin.com/U7a10gcu I want to count the errors.key.message in this array. I have the same message twice. What should the aggregation look like? I just don't get it. Thanks in advance
[13:57:53] <harenson> pinvok3: could you paste a mongodb output?
[13:58:16] <harenson> pinvok3: I mean, the same query, but with the results in json
[14:01:20] <pinvok3> harenson: http://pastebin.com/M3zhbXzP the 2048 and the 8 keys are the IDs of PHP errors. I think it was E_WARNING or so.. It's a simple error log and I want to have some statistics about the most frequently occurring errors and warnings
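A hedged sketch of what the pipeline could look like, assuming the errors are stored as an array of subdocuments such as { errors: [ { key: 2048, message: "..." }, ... ] } - which is also closer to the shape Nodex suggests next instead of numeric keys; the collection name is a placeholder:

    db.errorlog.aggregate([
      { $unwind: "$errors" },
      { $group: { _id: "$errors.message", count: { $sum: 1 } } },
      { $sort: { count: -1 } }
    ])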
[14:02:59] <Nodex> you should avoid naming your keys as numbers
[14:05:04] <pinvok3> Nodex: Is it a "you should" or an "it's important that you don't use numbers"? It's easier to work with those numbers in php when I've fetched the data
[14:07:35] <Nodex> they will work but when it comes to adjusting objects/arrays you will run into problems
[14:15:32] <ggherdov__> harenson: thank you for your support; I ended up re-installing pymongo, with pip this time, got a newer version (2.5.something) and it all went fine. Ubuntu just packages some obsolete crap
[14:15:33] <pinvok3> Nodex: What other approach would you suggest? Save every message in a separate collection and link by an id?
[14:23:03] <Nodex> in a separate collection? No, a separate document maybe
[14:33:58] <platonic> basically i was upserting based on a fullAddress object
[14:34:16] <platonic> and noticed that it's going daaaamn
[14:34:17] <Nodex> pinvok3 : how I learned Mongo quickly was to forget everything I knew about SQL and treat documents like PHP arrays and/or JSON objects
[14:50:54] <Nodex> make sure you drop your old index else it will take up space
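That is, something along these lines, with the index name a placeholder for whatever getIndexes() reports as the unused one:

    db.addresses.getIndexes()                 // find the name of the index the upsert no longer uses
    db.addresses.dropIndex("oldField_1")      // drop it to reclaim the space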
[14:51:35] <platonic> yup, i'll rebuild before we start heavy development anyway - probably wise to trim the keys regardless
[14:53:48] <Nodex> I wrote a map for mine, I can feed into it what I want and it gives me back the corresponding mongo key / solr key / redis key w/e
[14:56:38] <hectron> Asking once more since I got a lag spike earlier: I am going to be participating in a hackathon tomorrow and wanted to leverage MongoDB. I wanted to add functionality to an existing website, and include a Javascript file that collects user information and saves it to my db. Do I need Node.js?
[14:57:56] <Nodex> you want to take data from a website and add it to mongodb?
[15:25:27] <hectron> My registration icon for the event is the feels guy. So I think that it's definitely relevant.
[15:27:38] <kurtis> Hey guys, I am looking for a good "caching scheme". We're doing 'real-time' analysis and I'd like to use existing analysis results (cached) combined with the later results. The primary place I'm running into trouble is determining whether or not it's safe to save a "date" (or objectid) and simply grab all of the objects that came afterwards. Any suggestions?
[15:28:38] <Nodex> chuck the results in redis/memcache and check that first?
[15:31:40] <kurtis> Nodex, Well -- I'll definitely be using redis. But, I'm trying to identify a sane methodology for separating "what's already been processed" from what's "new and needs to be processed". As a simple example -- we have millions of MongoDB documents. I need to take a single field, count the number of occurrences, sort it, and return it as a list. The more data we have, the slower this operation becomes
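One hedged sketch of the checkpoint idea kurtis is asking about: a default _id ObjectId embeds its creation time, so remembering the last _id processed lets you fetch only newer documents and fold them into the cached counts. The caveat is that ObjectIds generated by different clients can be slightly out of order if their clocks drift, so it is only approximately safe. Collection and helper names below are hypothetical:

    var lastId = loadCheckpoint();                  // hypothetical: read last processed _id from redis

    db.events.find({ _id: { $gt: lastId } }).sort({ _id: 1 }).forEach(function (doc) {
      addToCachedTally(doc.someField);              // hypothetical: merge this document into the cached counts
      lastId = doc._id;
    });

    saveCheckpoint(lastId);                         // hypothetical: persist the new checkpoint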