PMXBOT Log file Viewer


#mongodb logs for Tuesday the 7th of May, 2013

[03:33:20] <heloyou> so i have essentially trees of data, all nodes are associated by reference. what would be the best way to delete an entire tree?
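One common answer to heloyou's question, assuming the nodes live in a single collection and reference each other through a `parent` field (a hypothetical name — the log doesn't say), is to walk the tree client-side collecting `_id`s, then delete them in one `$in` remove. A minimal Python sketch, with plain dicts standing in for documents:

```python
from collections import deque

def collect_subtree_ids(docs, root_id):
    """Breadth-first walk over documents linked by a 'parent' reference,
    returning the _id of root_id and all of its descendants."""
    ids = [root_id]
    queue = deque([root_id])
    while queue:
        current = queue.popleft()
        for doc in docs:
            if doc.get("parent") == current:
                ids.append(doc["_id"])
                queue.append(doc["_id"])
    return ids

# Against a real server the final step would be a single remove, e.g.:
#   db.nodes.remove({"_id": {"$in": ids}})
docs = [
    {"_id": 1, "parent": None},
    {"_id": 2, "parent": 1},
    {"_id": 3, "parent": 2},
    {"_id": 4, "parent": None},  # a separate tree, untouched
]
print(collect_subtree_ids(docs, 1))  # [1, 2, 3]
```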
[03:40:07] <hahuang65> is it normal for a namespace query to take 9 hours?
[03:49:32] <daole> Hi, if I have a sharded collection with only 1 shard
[03:49:46] <daole> Can I remove the config servers and connect directly to that mongod
[03:49:58] <daole> Since we don't want to use shard anymore
[03:50:50] <daole> Hi, anyone know this?
[04:58:10] <krz> hi, anyone use mongodb with rails/ruby? wondering what the difference is between mongoid and moped
[06:23:20] <newbsduser> hello, sometimes mongodb instance is not answering... it says: connecting to: localhost:27017/test
[06:23:42] <newbsduser> but no answer.. actually 27017 is up.. and there is no error in logs
[06:24:29] <newbsduser> db version v2.2.0, pdfile version 4.5 - what do you suggest...?
[06:31:18] <newbsduser> when i tried: mongostat --discover -h localhost:27017... it says : "localhost:27017 no data"
[06:33:10] <newbsduser> truss output: http://codepad.org/Ek5xmvTj
[07:25:36] <svm_invictvs> Using the Java drivers for MongoDB, I was curious if it will properly treat the Collection types as lists when I set a List (or other collection) to a MongoDB type.
[07:26:22] <svm_invictvs> Oh hell. Reading is Fundamental: Note: MongoDB will also create arrays from java.util.Lists.
[07:40:07] <soheilpro> is there any way to know when mongo has finished initializing the journal for the first time?
[07:40:28] <Zelest> doesn't it write about it in the logs?
[07:41:02] <soheilpro> it does but i need to know that in my script so that I can init the replica set
[07:41:25] <Zelest> ah
[07:41:26] <Zelest> no idea :/
[07:41:49] <soheilpro> no problem, thanks
[07:41:52] <[AD]Turbo> ciao all
[07:48:11] <Jaymin> DBObject object = new BasicDBObject();
[07:48:11] <Jaymin> object.put("date", new Date());
[07:48:11] <Jaymin> String serialize = JSON.serialize(object);
[07:48:11] <Jaymin> transactionCollection.insert(object);
[07:48:11] <Jaymin> Inserts date as ISO date instead of { "$date" : "2013-02-07T09:09:09.212Z"} any idea what could be the issue ?
[07:49:01] <Jaymin> if I print serialize then I do see required format i.e. { "$date" : "2013-02-07T09:09:09.212Z"}
[07:49:45] <Jaymin> but DB insert insert date with ISO i.e. ISODate("2013-05-07T07:36:02.463Z")
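What Jaymin sees is expected behaviour: `JSON.serialize` renders a BSON date in MongoDB *extended JSON* as `{"$date": ...}`, while the driver stores the native BSON date type, which the shell displays as `ISODate(...)`. The two are just different representations of the same value. A hand-rolled Python sketch of the extended-JSON rendering (not the driver's own serializer):

```python
import json
from datetime import datetime

def to_extended_json(doc):
    """Render datetimes the way MongoDB extended JSON does: {"$date": ...}.
    The database itself still stores a native BSON date, not this wrapper."""
    def encode(value):
        if isinstance(value, datetime):
            # millisecond precision, as in the shell's output
            return {"$date": value.strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"}
        if isinstance(value, dict):
            return {k: encode(v) for k, v in value.items()}
        return value
    return json.dumps(encode(doc))

doc = {"date": datetime(2013, 2, 7, 9, 9, 9, 212000)}
print(to_extended_json(doc))  # {"date": {"$date": "2013-02-07T09:09:09.212Z"}}
```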
[08:28:14] <jbd> hi, i'm planning to migrate an old replicaset from 1.6 to 2.4.x, what would be the best way ?
[08:28:54] <jbd> naturally, I'd like to avoid downtimes
[08:44:37] <soheilpro> I'm trying to setup a replica set on ec2
[08:45:11] <soheilpro> my primary can see and connect the secondary
[08:45:25] <soheilpro> but when I add it to the repl
[08:45:58] <soheilpro> it stays in the "still initializing" mode
[08:46:18] <soheilpro> it doesn't receive any heartbeat
[08:46:27] <krz> Error: journal files are present in journal directory, yet starting without journaling enabled.
[08:46:27] <krz> It is recommended that you start with journaling enabled so that recovery may occur.
[08:46:35] <krz> how the heck do i start with journaling enabled??
[08:46:45] <krz> I'm trying to do mongod --repair
[08:46:59] <soheilpro> any suggestions?
[08:47:45] <soheilpro> try --journal
[08:48:01] <krz> mongod --journal ?
[08:48:13] <soheilpro> yup
[08:48:44] <krz> i get https://gist.github.com/krzkrzkrz/f1c507c23bd9278ed816
[08:48:57] <krz> should i delete my journal file?
[08:51:20] <soheilpro> no imo
[08:52:33] <krz> how do i remove mongod.lock?
[08:53:05] <`3rdEden> rm -rf / mongod.lock ;o?
[08:53:35] <Berge> `3rdEden: Generally, it's not very nice to say stuff like that on IRC. Less experienced people might be prone to run it.
[08:55:19] <`3rdEden> my apologies if you formatted your drive.
[08:55:42] <Berge> I didn't. (-:
[08:55:47] <`3rdEden> ;D but removing is generally a solution ;)
[08:56:14] <Berge> I don't know mongodb (I'm here to ask questions), but blindly removing lock files isn't generally a solution, no (-:
[09:01:30] <krz> `3rdEden: you know that middle finger on your hand? you can shove it up your arse
[09:01:41] <krz> :_D
[09:02:21] <`3rdEden> oh, boohoo
[09:02:22] <krz> seriously you should though. not nice giving out a command like that as Berge pointed out
[09:37:58] <krz> what the heck does this mean: https://gist.github.com/krzkrzkrz/f1c507c23bd9278ed816
[09:38:07] <krz> can't these errors be anymore useful?
[09:38:38] <ron> errors are not meant to be useful. errors are mistakes. you should fix errors.
[09:39:05] <krz> no kidding
[09:39:12] <ron> nope. dead serious.
[09:39:32] <krz> where does one start with an error like that?!
[09:39:40] <krz> dazzle me gandolf
[09:43:25] <ron> krz: https://jira.mongodb.org/browse/SERVER-5380
[09:43:55] <krz> read that. aint that helpful
[09:44:24] <krz> doesn't do jack ron
[09:44:54] <ron> is it a production server?
[09:44:58] <krz> local
[09:45:31] <krz> well deleting journaling and mongod.lock works!
[09:46:05] <ron> well, duh
[09:46:41] <krz> no not really duh. i had to look in /usr/local/var/mongodb for those files. lucky guess
[09:46:52] <ron> oh, boo hoo.
[09:47:19] <krz> :_D
[10:57:35] <krz> I've got an embedded doc. { songs => { 1 => {..}, 2 => {…} } } how can i return all objects of song.1
[10:57:46] <krz> objects = fields + values
[11:07:29] <krz> collection.find('songs.1' => {'$exists' => true}).first returns the whole doc
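`find()` returns whole documents unless a second argument (a projection) limits the fields; in the shell that would be roughly `collection.find({'songs.1': {$exists: true}}, {'songs.1': 1})`. A Python sketch of what such a dotted-path projection does (simplified, ignoring `_id` handling):

```python
def project(doc, path):
    """Return a doc containing only the dotted-path field, mimicking a
    MongoDB projection such as {'songs.1': 1} (minus _id handling)."""
    keys = path.split(".")
    value = doc
    for key in keys:
        if not isinstance(value, dict) or key not in value:
            return {}
        value = value[key]
    # rebuild the nested shape from the inside out
    for key in reversed(keys):
        value = {key: value}
    return value

doc = {"songs": {"1": {"title": "a"}, "2": {"title": "b"}}}
print(project(doc, "songs.1"))  # {'songs': {'1': {'title': 'a'}}}
```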
[13:33:11] <rybnik> Hi there fellas, anyone got a spare minute? I've a few questions…. is there a way to perform something like a $where using the aggregation framework ?
[13:34:24] <ron> I love it when people confuse irc with a frontal chat.
[13:34:27] <harenson> rybnik: explain yourself
[13:34:35] <harenson> ron: lol
[13:35:56] <Derick> rybnik: no
[13:36:10] <Derick> $where is done through javascript, and the A/F is meant to avoid that
[13:36:59] <rybnik> Thank you Derick, and thank you harenson for your time. Please allow me to elaborate further….
[13:41:23] <rybnik> I've a collection filled with documents with a structure similar to the one represented here http://pastie.org/7813118
[13:43:13] <rybnik> my goal is to be able to properly find "rawEvents.wt" : "1367494483266-X1.XXX.XX.97-75119", retrieve that object and use the timestamp to "project" the searchHistory objects which "timestamp" matches the ts field
[13:43:43] <Derick> you mean you want the value of "ts" to become a key?
[13:44:07] <rybnik> I have some progress using $elemMatch, and something like the positional operator $ would be helpful, but I'm running out of ideas, so it would be nice to be able to unwind and then $where
[13:44:18] <rybnik> @Derick that sums it up
[13:45:48] <Derick> You shouldn't have undescriptive (value) keys... it's bad practice
[13:46:32] <rybnik> I understand that, I also understand that $where is evil, but I was given some lemons and I must prepare some nice apple juice
[13:47:01] <rybnik> Derick, do you have any hint that would point me into the right direction ?
[13:47:24] <Derick> No, as I can't think of a way...
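Since `$where` isn't available in the aggregation framework and Derick can't see a server-side way, one fallback is to do the second step in application code: fetch the document, find the matching `rawEvents` entry, and filter `searchHistory` by its `ts`. A hedged Python sketch (field names taken from rybnik's description; the exact document shape and values are assumed):

```python
def match_history(doc, wt):
    """Find the rawEvents entry with the given 'wt', then return the
    searchHistory entries whose 'timestamp' equals that entry's 'ts'."""
    ts = None
    for event in doc.get("rawEvents", []):
        if event.get("wt") == wt:
            ts = event.get("ts")
            break
    if ts is None:
        return []
    return [h for h in doc.get("searchHistory", []) if h.get("timestamp") == ts]

doc = {
    "rawEvents": [{"wt": "abc", "ts": 100}, {"wt": "def", "ts": 200}],
    "searchHistory": [{"timestamp": 100, "q": "x"}, {"timestamp": 200, "q": "y"}],
}
print(match_history(doc, "abc"))  # [{'timestamp': 100, 'q': 'x'}]
```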
[13:47:47] <Berge> Does the write concern "replica acknowledged" imply that the data has been written to (the journal of) the master node in addition to being sent to a number of replicas?
[13:47:56] <Derick> Berge: yes
[13:47:56] <Berge> Or does it just mean "it's been seen by a number of nodes, including master, and will be written soon"? (In which case the data is lost upon a total power failure, for instance.)
[13:48:02] <Berge> Derick: ah, right
[13:48:04] <rybnik> Well thank you anyway, you were most helpful, have a nice day! :)
[13:48:06] <Berge> The docs are a bit unclear on that issue.
[13:48:22] <Derick> Berge: to be safe, also set j=true though!
[13:48:28] <Berge> j as in journaling?
[13:48:36] <Derick> hmm
[13:48:37] <cloudgeek> I am designing a schema for mongodb. How can I view the json, or open a bson file? Any guide to how to make a schema
[13:48:40] <cloudgeek> in it?
[13:48:43] <Derick> actually, let me retract my answer Berge
[13:49:30] <Berge> You may. (-:
[13:49:33] <whaley> cloudgeek: just type out some json in a text editor?
[13:49:44] <Derick> Berge: for replicaset acknowledged, it's written to primary (memory and oplog) and secondaries (memory)
[13:50:01] <cloudgeek> whaley: i am using vim but it's showing some kind of encoding ?
[13:50:10] <Berge> What does it mean for crash recovery that data has been written to the oplog?
[13:50:16] <whaley> cloudgeek: unless I've completely missed the boat, this is how I've been doing "schema" documentation
[13:50:18] <Berge> Has it been fsync-ed to persistent storage?
[13:50:18] <Derick> Berge: it's also written to the journal, but not flushed until the normal journal flush happens
[13:50:44] <Derick> Berge: for crash recover, it matters nothing that it's written to the oplog or not
[13:50:57] <Derick> however, the journal does store writes to the oplog
[13:50:59] <Berge> As in: Is there a window where a total power outage will cause data to be lost?
[13:51:16] <cloudgeek> whaley: okay. i need help in schema design, how to approach it
[13:51:21] <Derick> Berge: yup, up to the journal sync time
[13:51:44] <Derick> but of course, the client knows about this
[13:51:51] <Berge> Is there then a way of telling MongoDB "this data has to be on at least one disk and in memory on a different node"?
[13:52:12] <Derick> Berge: yes, you can use fsync=true + w=2 for that
[13:52:13] <whaley> cloudgeek: for some general reading, I'd suggest http://shop.oreilly.com/product/0636920027041.do to get some ideas
[13:52:14] <Derick> but that's slow
[13:52:26] <Berge> I'm coming from a RDBMS background, so this lack of commitment to data and the client's total control of the commitment feels very new and odd (-:
[13:52:34] <Berge> Derick: ah, I can. Good, thanks.
[13:52:47] <Berge> Sure it's slow. It's a hard problem to get fast. (-:
[13:52:50] <Derick> Berge: you do however not want to turn fsync on if you care about any sort of performance
[13:53:02] <Derick> w=majority is going to be good enough
[13:53:10] <Berge> So if I do care about both performance and data durability, I do what?
[13:53:23] <Berge> w=majority protects against random node failure, but not total cluster loss, aiui?
[13:53:24] <cloudgeek> whaley: thanks
[13:53:37] <Derick> Berge: right.
[13:53:53] <Derick> Berge: but the client of course will realise that, as the GLE command has timedout
[13:54:05] <Derick> unless it's on the same cluster...
[13:54:13] <Derick> but then you're basically screwed anyway
[13:54:27] <Berge> Yep, but waiting for timeouts isn't very latency- and performance friendly either.
[13:54:41] <Derick> nope, but w=majority is pretty fast
[13:55:04] <Berge> Sure, but then there's this window of data loss if the cluster goes down all at once.
[13:55:24] <Berge> It's a tradeoff you'll have to consider you're willing to take, of course.
[13:56:44] <Derick> Berge: yes, but an RDBMS handling a total outage on a cluster has the same issue...
[13:56:57] <Derick> and try scaling that as easily
[13:57:29] <Berge> Derick: Actually, no, as you can configure (most) RDBMSes to let a returned COMMIT mean for instance "commited to disk on master and sent to slave" or "commited to disk on master and slave".
[13:57:42] <Berge> The latter is of course not a performance winner.
[13:58:19] <Derick> right, and that's the same with mongo where you can do fsync and w=majority
[13:58:43] <Berge> yep
[13:58:47] <Derick> but there is no "secondaries also need a disk commit" right now
[13:59:17] <Berge> Good to know.
[13:59:31] <Berge> It might not be a requirement, but it's always good to know what options there are.
[14:01:17] <Berge> Derick: Thanks a lot, this cleared things nicely up for me!
[14:01:47] <Derick> np!
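For reference, the durability levels Berge and Derick walk through map onto getLastError write-concern options (the pre-2.6 style, matching the 2013 context). A Python sketch that only builds the command documents — no server involved:

```python
def write_concern(w=1, j=False, fsync=False):
    """Build a getLastError command document for the given durability level."""
    cmd = {"getlasterror": 1, "w": w}
    if j:
        cmd["j"] = True       # wait for the primary's journal flush
    if fsync:
        cmd["fsync"] = True   # wait for an fsync to disk (slow)
    return cmd

replica_ack = write_concern(w=2)                           # primary + one secondary (in memory)
majority_journaled = write_concern(w="majority", j=True)   # the balance Derick suggests
paranoid = write_concern(w=2, fsync=True)                  # "on at least one disk + another node"
```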
[14:04:04] <wjb> I'm trying to turn on a unique index on a path within an array, but Mongo tells that there are duplicate keys with with value null. I'm having trouble querying to find them.
[14:04:22] <wjb> How do you query for null values within docs in an array?
[14:04:29] <wjb> I tried the following:
[14:04:31] <wjb> db.words.find( { "images": { "path": null } } );
[14:04:35] <wjb> db.words.find( { "images": { "path": { $type: 10 } } });
[14:04:37] <wjb> db.words.find( { "images": { $not: { $exists: "path" } } } )
[14:04:40] <wjb> db.words.find( { "images": { $elemMatch: { "path": null } } })
[14:04:44] <wjb> db.words.find( { "images": { $elemMatch: { "path": { $type: 10 } } } });
[14:04:55] <wjb> All return no results.
[14:05:18] <wjb> schema is doc: images: [ path: "" ]
[14:07:36] <Derick> "" is not null, it's an empty string
[14:07:54] <Derick> also, null is used for docs where the field is *not* present
[14:08:23] <Derick> db.words.find( { "images": { $exists: false } } )
[14:08:30] <Derick> db.words.find( { "images.path": { $exists: false } } )
[14:08:31] <Derick> sorry
[14:08:39] <Derick> that should do
[14:11:07] <wjb> Yup, works. Cheers!
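A rough Python simulation of why Derick's dotted-path form works: `{"images.path": {$exists: false}}` matches documents where *no* array element carries the field, which is exactly what a unique index treats as a duplicate `null` key. (Simplified — the real server's matching rules have more cases.)

```python
def path_exists(doc, path):
    """Rough simulation of MongoDB's {path: {$exists: true}} with array
    traversal: a dotted path exists if any array element has the field."""
    head, _, rest = path.partition(".")
    if not rest:
        return head in doc
    value = doc.get(head)
    if isinstance(value, list):
        return any(isinstance(e, dict) and path_exists(e, rest) for e in value)
    if isinstance(value, dict):
        return path_exists(value, rest)
    return False

words = [
    {"_id": 1, "images": [{"path": ""}]},      # empty string, but the field exists
    {"_id": 2, "images": [{"caption": "x"}]},  # no path field -> indexed as null
]
missing = [w["_id"] for w in words if not path_exists(w, "images.path")]
print(missing)  # [2]
```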
[14:13:52] <Garo_> Is it normal that the oplog is almost completely in res memory (ie. it's "hot") in a master of a replica set? I used mongomem to determine that currently around 95% of the oplog collection is mapped into physical memory.
[14:30:05] <theRoUS> i'm getting unexpected segfaults and stacktraces with no databases: http://pastie.org/7813310
[15:21:02] <Almindor> is there a way to improve mongodb caching?
[15:21:45] <Almindor> I am comparing a 200 million document collection query on postgreSQL vs M$SQL vs mongodb and mongo seems to be best on the initial (non-cached, first time executed) query, but it never improves its time on subsequent runs
[15:22:14] <Almindor> I get ~14s time on mongo constantly, while postgresql for example is 120s non-cached but 400ms cached
[15:22:55] <Almindor> the query only gets ~3000 documents out of the 200 million and uses geoindexing on postgres and mongodb, normal numeric indexing on MSSQL
[15:23:18] <x1a0> Hi, I am using Mongoose. For most find* query I want the result to be an object of which key is the ObjectId. Is there a switch for that? or what's the best place it make it?
[15:23:55] <Berge> Almindor: Using postgis?
[15:24:00] <Almindor> Berge: yes
[15:24:15] <Berge> Almindor: 120 seconds seems very slow for indexed queries returning 3k rows.
[15:24:42] <Almindor> let me paste the analyze explain
[15:24:51] <Berge> oh, 400ms with data in cahce.
[15:24:53] <Berge> cache, even
[15:26:05] <Berge> Almindor: I don't mean to do postgres support here, but fwiw is #postgresql a great community.
[15:26:25] <Almindor> Berge: http://pastebin.com/Xqb2fEip - this is after talking to the guys at #postgresql
[15:26:31] <Almindor> the 400 ms is great tho
[15:26:35] <Almindor> but mongo is a disappointment
[15:26:39] <Almindor> it doesn't seem to cache at all
[15:26:45] <Almindor> 1st find is the same as any after
[15:26:57] <Almindor> and I'm not even out of ram
[15:27:17] <Berge> Almindor: 400ms seems good indeed.
[15:27:25] <Almindor> oddly enough MSSQL is winning without a geoindex
[15:27:25] <Berge> At least if you can keep indexes in RAm.
[15:27:39] <Almindor> it's using plain int indexes on lat/long (made big to be ints) and it's fastest
[15:28:26] <Berge> But I'll stop getting in the way of mongodb performance questions, sorry (-:
[15:28:45] <Almindor> heh no worries
[15:28:49] <Berge> fwiw; mongodb uses almost exclusively mmaped files.
[15:29:06] <Almindor> well so does postgres on linux AFAIK
[15:29:12] <Berge> So it takes advantage of OS caching.
[15:29:15] <Berge> So does postgres, true.
[15:29:22] <Almindor> I increased shmax and all that
[15:29:25] <Almindor> it did help a bit
[15:29:26] <Berge> Are you sure the mongodb query is IO bound?
[15:29:32] <Almindor> but for mongo it improved the write-side of things only
[15:29:48] <Almindor> let me get the mongodb explain
[15:31:31] <Almindor> http://pastebin.com/nHUYwby1
[15:32:13] <Almindor> if it was this time on 1st query it'd be a very nice result, but having it on subsequents is unusable
[15:34:03] <Almindor> MongoDB does not implement a query cache: MongoDB serves all queries directly from the indexes and/or data files
[15:34:07] <Almindor> I guess that explains it
[15:34:56] <Berge> Almindor: Postgres doesn't have a query cache either, fwiw.
[15:36:44] <Almindor> well, but it handles the subsequent queries much better ;)
[15:36:54] <Almindor> note that it's running on the same machine, different drive tho
[15:37:02] <Almindor> I'm not doing them concurrently obviously
[15:37:11] <Almindor> both use SSDs
[15:55:19] <rybnik> Hi! Anyone know if there's any difference at all between require('mongodb').ObjectId vs db.bson_serializer.ObjectID.createFromHexString
[15:55:27] <rybnik> ?
[16:15:53] <abstrusenick> getting this error when doing mongodump, "locale::facet::_S_create_c_locale name not valid"
[17:47:48] <adamlynch> Hey, is possible to have MongoDB running during tests on hosted continuous integration services like CloudBees, Travis-CI, etc?
[17:47:56] <adamlynch> P.S. I'd be using PHP to interact
[18:52:43] <davlaps> hi folks!
[18:53:13] <davlaps> i'm new to mongo.. i am currently converting my RDBMS over to mongo. is it worth using Ming for schema management/migration?
[19:07:04] <jgspratt> Hello. How does one configure MongoDB security?
[19:07:12] <jgspratt> Can you restrict access by IP range or something?
[19:09:41] <apetresc> jgspratt, you can do that at the networking layer if you want, just use iptables or something
[19:10:04] <jgspratt> I am trying to see how it is configured currently
[19:10:17] <jgspratt> Does mongo not restrict access?
[19:10:25] <jgspratt> This is mongo 1.8
[19:10:47] <jgspratt> I don't see anything in server.cfg or any "users" in the system
[19:11:12] <leifw> can there be multiple chunk migrations occurring simultaneously within a cluster, if they are all to and from completely distinct shards? that is, can I have a migration from shard A to shard B and another migration from shard C to shard D?
[19:11:38] <jgspratt> "Firewall is stopped."
[19:13:26] <apetresc> Mongo itself has username/password auth, and the very latest version has some sort of Kerberos/LDAP integration or something like that
[19:13:48] <apetresc> You set up the users and passwords in the admin table
[19:14:53] <apetresc> jgspratt: look into http://docs.mongodb.org/manual/reference/method/db.addUser/ and associated docs
[19:17:15] <jgspratt> apetresc: http://hastebin.com/viyibihupa.hs shows my admin table is empty?
[19:17:54] <jgspratt> Does Mongo use "default allow" security technology? Since there are no users, everyone can use it?
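For a 1.8-era install, that's exactly right: with no users defined in the admin database, mongod accepts any connection — auth is opt-in. A hypothetical minimal lockdown in the classic key=value config format (path and values are illustrative):

```ini
# mongod.conf -- hypothetical example
bind_ip = 127.0.0.1   ; only listen on loopback
auth = true           ; require users, added via db.addUser(...) in the shell
```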
[19:49:16] <unr3al011> hey i am looking for an alternative to redis. i have a main db in mysql and currently copy tables into ram via redis and work with a redis db then. i have to do a lot of queries on large tables, and i would like to know if mongodb is also ram driven and an alternative for me? can somebody give me any advice please?
[19:57:47] <JanxSpirit> where does mongo hide replica set config?
[19:58:24] <Zelest> in the hidden secret mongo replica set config unit!
[19:58:28] <JanxSpirit> I had a replica set up and running to test, but needed to move the data
[19:58:31] <Zelest> where is nowhere to be found!
[19:58:43] <Zelest> sorry.. I'll stop :(
[19:59:01] <JanxSpirit> i brought all nodes down, blew away my old db directory and started with a new directory
[19:59:38] <JanxSpirit> all the nodes are still trying to contact one another based on old settings and they get wedged such that I cannot Ctrl-C out and have to kill -9 them
[19:59:58] <JanxSpirit> again, I blew away my dbpath
[20:00:14] <JanxSpirit> so it must be somewhere else
[20:01:13] <JanxSpirit> it's unusual to me that I can't do the kind of thing I'm trying to do more easily, but at this point I'd like to just go back to clean slate as the replica set was easy to set up initially
[20:03:23] <kali> JanxSpirit: it's in the "local" database
[20:08:18] <JanxSpirit> kali - where is that on my filesystem?
[20:09:15] <kali> in mongodb dbpath
[20:09:28] <JanxSpirit> but I blew that away and restarted
[20:09:54] <kali> well, that's all there is :)
[20:10:15] <JanxSpirit> that's what I expected - that's the only place mongo ever has anything
[20:10:29] <JanxSpirit> so how in the world is it remembering old replica set config
[20:10:34] <JanxSpirit> and also why is it wedging
[20:10:41] <kali> i'm guessing... the other nodes
[20:12:25] <JanxSpirit> ok - I'll go on a node manhunt again - just wanted to make sure replica sets didn't introduce something new - I haven't messed with them before
[20:12:28] <JanxSpirit> thanks
[20:13:10] <kali> stop them all, blast all the dbpath... you should be in a clean state :)
[20:18:11] <JanxSpirit> aha! one of the nodes was not dead - that oversight has been solved...thanks again kali
[20:19:14] <JanxSpirit> but it still makes me a bit sad that it wedged - I thought this sort of durability was the whole idea
[20:19:25] <JanxSpirit> but I'll read up more and see what's going on
[20:50:29] <jgspratt> which mongo should I install if I want to connect to a remote mongo?
[20:50:31] <jgspratt> http://hastebin.com/tuvoxaxitu.avrasm
[21:04:04] <leifw> In a sharded cluster, what happens if one of the config servers goes down and has different data than the rest of them? Will the first mongos to connect to it bring it back up to date?
[21:21:37] <leifw> looks like the mongos will fail to start and you have to recover the config servers manually
[21:28:03] <jgspratt> which mongo should I install if I want to connect to a remote mongo? http://hastebin.com/tuvoxaxitu.avrasm
[21:30:32] <leifw> jgspratt: I think you want 'mongodb', you don't need 'mongodb-server'. It looks like the python, perl, and php drivers are there as well as mongoose which is a driver for node.js, if you want to use it through any of those languages instead
[21:30:47] <jgspratt> Oh, ok.
[21:31:09] <jgspratt> I am having this problem: http://hastebin.com/hotideyejo.hs
[21:31:16] <jgspratt> After that it just quits
[21:31:29] <jgspratt> This is with mongo 1.8
[21:33:10] <leifw> looks like a linking error
[21:33:56] <leifw> not sure what you should do
[21:34:21] <jgspratt> what kind of a link?
[21:34:30] <jgspratt> internal datastructure link?
[21:34:34] <jgspratt> network link?
[21:34:38] <jgspratt> chain link?
[21:34:40] <leifw> like a "compiling and linking your executable" error
[21:34:54] <jgspratt> Oh, really, hm.
[21:35:09] <leifw> the mongo shell looks like it wants to call a function in libpcre (a regex library) but it can't find that function
[21:35:11] <jgspratt> is mongo 2 beta?
[21:35:23] <kali> leifw: you should consider using 10gen rpm packages
[21:35:28] <leifw> maybe you just have to install libpcre, but I'd think the package manager would have done that for you
[21:35:41] <leifw> kali: talk to jgspratt
[21:35:52] <leifw> kali: I'm not the one with the problem
[21:35:54] <jgspratt> can't seem to find a 1.8
[21:36:07] <jgspratt> 1.8 must be old or something
[21:36:15] <kali> jgspratt: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-red-hat-centos-or-fedora-linux/
[21:36:24] <kali> leifw: sorry :)
[21:36:31] <kali> jgspratt: 1.8 is ancient.
[21:36:31] <leifw> kali: :)
[21:37:19] <jgspratt> Could it be incompatibility between 1.8 and 2.2.3?
[21:37:40] <leifw> jgspratt: that error can't be
[21:37:59] <kali> jgspratt: nope this is a poor package
[21:38:30] <kali> jgspratt: seriously, use 10gen repositories...
[21:38:39] <jgspratt> switching right now
[21:38:51] <kali> jgspratt: struggling so hard to get 1.8 working is not worth it :)
[21:39:37] <jgspratt> oh, it was incompatibility. 1.8 by 10gen works with 1.8 server
[21:39:43] <jgspratt> no error
[21:39:49] <leifw> kali: it's getting an undefined reference in the 2.2.3 mongo shell though, I don't think that's a problem with 1.8
[21:40:02] <jgspratt> but maybe 2.2 works with 1.8 if it is from 10gen
[21:40:57] <jgspratt> I'm going to go hang out with 10gen in a bit$
[21:41:12] <kali> leifw: nope, rotten package with badly defined dependencies
[21:42:09] <kali> leifw: wrong version of pcre, presumably. 10gen packages have a statically linked pcre
[21:42:32] <jgspratt> yep, the 10gen 2.0 one doesn't have that problem!
[21:42:40] <jgspratt> this is really great
[21:54:58] <JanxSpirit> ok all members of my replica set are down except 1 and it is SECONDARY - should this be possible?
[22:19:20] <giulivo> hi gents, while working on a map/reduce job
[22:19:58] <giulivo> we're able to refer to the bson object keys by using "this.key" but not by using "this['key']"
[22:20:15] <giulivo> is that some sort of bug or is it the expected behaviour?
[22:21:28] <giulivo> flaper87, ^^ please! :)
[22:21:46] <flaper87> giulivo: hey :)