PMXBOT Log file Viewer


#mongodb logs for Tuesday the 30th of October, 2012

[00:48:50] <bakadesu> anyone around to answer some sharding questions?
[00:50:07] <Goopyo> watchu got
[01:04:40] <bakadesu> say like in a SaaS model, would client_id be a bad shard key?
[01:35:21] <Goopyo> bakadesu: SaaS models vary…. what exactly are you trying to do?
[01:36:03] <Goopyo> The crucial thing is key cardinality, so if your client_id gives you good chunks of data according to which you want to shard, it could be a great key
[01:36:09] <Goopyo> http://docs.mongodb.org/manual/core/sharding-internals/#sharding-internals-shard-keys
[01:37:06] <bakadesu> say a client gets really large, say 25-50% of database
[01:38:29] <Goopyo> then yeah I can see it being a good shard key. Thing is 'good shard keys' are relative to how good other keys would be which you have to investigate relative to your dataset
[01:39:15] <bakadesu> still trying to wrap my head around it, been reading some conflicting information
[01:39:50] <bakadesu> like I would think a listing of people by first name would be a decent shard key
[01:40:44] <bakadesu> but I don't know if/how things change if that listing is already sharded by client_id
[01:42:10] <bakadesu> I'm guessing that mongo allows shard keys of: client_id + first_name
[01:44:34] <bakadesu> Would taking that large client's data out of that sharded cluster and putting it into their own cluster be practical/feasible?
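For the record, the compound key bakadesu guesses at is real: a collection can be sharded on client_id plus first_name. A minimal mongo shell sketch, with database and collection names made up for illustration (these are admin commands and need a running mongos):

```javascript
// Illustrative names; run against a mongos.
sh.enableSharding("saasdb")
// Compound shard key: chunks are ranges of (client_id, first_name),
// so one huge client's data can still be split into chunks by first_name.
sh.shardCollection("saasdb.people", { client_id: 1, first_name: 1 })
```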
[03:22:34] <tystr> hi
[03:24:44] <tystr> so, I had something wonky happen to my primary node in a replica set, and now it has the "ROLLBACK" status
[03:25:03] <tystr> I've checked the documentation on what exactly that means, but I'm not quite sure how to fix it
[03:25:29] <tystr> does this mean it's in the process of rolling back and re-syncing with the new primary?
[04:02:45] <kizzx2> hey guys
[04:02:58] <kizzx2> is there an inverse operation to enableSharding?
[04:21:57] <tystr> hello, I've got a mongo node stuck in the ROLLBACK state
[04:22:13] <tystr> I need to somehow force this node to become the primary in the replica set
[04:33:39] <tystr> oh this is fun:
[04:33:40] <tystr> Tue Oct 30 04:28:43 DBClientCursor::init call() failed
[04:33:40] <tystr> Tue Oct 30 04:28:43 Error: Error during mongo startup. :: caused by :: 10276 DBClientBase::findN: transport error: 127.0.0.1:27017 ns: admin.$cmd query: { whatsmyuri: 1 } src/mongo/shell/mongo.js:91
[04:33:40] <tystr> exception: connect failed
[04:33:54] <tystr> does anyone know what could cause this error?
[04:38:12] <ron> umm... mongo is down?
[04:41:47] <mrpro> sir?
[04:42:37] <tystr> ...
[04:45:52] <kizzx2> hey guys
[04:46:00] <kizzx2> is there an inverse operation to enableSharding?
[05:02:11] <tystr> so, I've got a node stuck in ROLLBACK status
[05:02:14] <tystr> how can I force it out of that?
[07:49:05] <tystr> zzzzzz
[08:40:10] <[AD]Turbo> hola
[08:41:17] <Gargoyle_> morn'
[08:51:42] <lerian> hi guys, a friend is looking for a System Engineer (the middleman will get 200euros if the person is hired): http://pastebin.com/zpCTvK64
[08:59:32] <manveru> so you're the middleman? :)
[09:41:13] <NodeX> sounds like a perfect job for my skillset except the "good communication skills"
[09:41:14] <NodeX> lol
[10:40:54] <lerian> manveru no, but you can be :)
[10:42:23] <manveru> well, writing the company name and roughly what salary might help
[10:42:57] <manveru> not interested myself, just fyi :)
[10:47:59] <stdranwl> how to configure shards
[10:48:45] <Gargoyle> stdranwl: Is that trying to be a question?
[10:49:55] <stdranwl> Gargoyle: Yes, missed a bit... wanted to know how to pick the best shard key?
[11:01:04] <jawr> with aggregation, how would i $match to a particular field: http://pastie.org/5136905 i.e. to match only those with week: 45?
[11:05:28] <Gargoyle> jawr: Use a $match at the start of your pipeline!
[11:09:19] <lerian> manveru salary TBD, it's a startup not a big company
[11:14:07] <jawr> hmm, i thought i tried that, must have had the syntax wrong
[11:14:46] <jawr> ah, yeah i did
[11:14:51] <jawr> thanks Gargoyle
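Gargoyle's fix in shell form is db.coll.aggregate([{ $match: { week: 45 } }, ...]) (collection name illustrative, since the pastie is gone). What the stage does can be modeled in plain JavaScript, an equality-only sketch rather than the server's implementation:

```javascript
// Simplified model of an aggregation $match stage: keep only documents
// whose fields equal the values in the match spec (equality matches only).
function matchStage(spec, docs) {
  return docs.filter(function (doc) {
    return Object.keys(spec).every(function (k) { return doc[k] === spec[k]; });
  });
}

var docs = [{ week: 44, hits: 10 }, { week: 45, hits: 7 }, { week: 45, hits: 3 }];
var matched = matchStage({ week: 45 }, docs); // keeps the two week-45 documents
```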
[12:09:06] <stdranwl> #mysql
[12:10:05] <NodeX> lol
[12:50:58] <aganov> Hi all, can I search for specific string on entire database, I have huge db (10GB) and Java application that use mongodb, I need to find where it's storing specific data/record
[12:52:55] <lucian> aganov: if you set an index for that query, it'll be reasonably fast
[12:55:34] <aganov> lucian, it does not need to be fast, I just need to find where in the db a specific "string" is stored, for ex db.find("awesome") -> { objects that contain the string "awesome" }
[12:56:41] <NodeX> can you define "storing"
[12:59:15] <aganov> NodeX, i have big db which is used from closed sourced Java application, that application returns some data from mongodb, I need to find in which collection is stored specific information. For example string "awesome"
[12:59:54] <NodeX> you have to loop each collection and look for it
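NodeX's loop can be scripted. The per-document check below is plain JavaScript, so it also runs in the mongo shell; the commented wrapper shows roughly how a whole-database scan would look (the helper name is made up):

```javascript
// Returns true if any value anywhere in the document contains the given
// substring. Crude (it also matches keys), but fine for a one-off hunt.
function docContainsString(doc, needle) {
  return JSON.stringify(doc).indexOf(needle) !== -1;
}

// In the mongo shell the full scan would look roughly like:
// db.getCollectionNames().forEach(function (name) {
//   db[name].find().forEach(function (doc) {
//     if (docContainsString(doc, "awesome")) printjson({ collection: name, doc: doc });
//   });
// });
```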
[13:23:44] <whitehat752> hi!
[13:24:21] <whitehat752> hi! if i have a json document in other json document and I assign that document to a variable - how can i change the inner document through the variable?
[13:24:42] <whitehat752> a = {title:test,body:{par1:blabla,par2:tata}}
[13:24:56] <NodeX> a.title='foo'
[13:25:11] <NodeX> body.par1='bleh'
[13:25:28] <whitehat752> I can do a.title = test2, but can't a.body.par1 = lalala :(
[13:25:43] <NodeX> a.foo.bar.baz.bleh...
[13:25:45] <NodeX> deep as you like
[13:26:02] <whitehat752> ok, i'll try once more
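For reference, whitehat752's paste needs quoted string values to be valid JavaScript; with that fixed, dot notation updates nested fields at any depth, as NodeX says:

```javascript
// Values quoted so this is valid JavaScript (the original paste used bare words).
var a = { title: "test", body: { par1: "blabla", par2: "tata" } };

a.title = "test2";       // top-level field
a.body.par1 = "lalala";  // nested field, one level down
```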
[13:27:54] <whitehat752> one more question: a = ...find{...} ... how can I see a in a mongoshell? a.pretty() doesn't work
[13:28:08] <ron> a
[13:28:11] <NodeX> foreach and printJson
[13:28:48] <NodeX> a.forEach(printjson)
[13:31:21] <whitehat752> returns blank line
[13:32:28] <whitehat752> when I assign query to a - it return one document that was found, but a.forEach(printjson) return blank line
[13:34:47] <NodeX> works fine for me
[13:34:52] <NodeX> var a=db.users.find();
[13:34:59] <NodeX> a.forEach(printjson);
[13:37:03] <whitehat752> oh
[13:37:14] <whitehat752> forgot semicolon :)
[13:37:31] <whitehat752> thanks a lot )
[13:49:43] <MatheusOl> just a line with "a" would work too
[13:49:48] <MatheusOl> at least on newer versions
[13:56:38] <whitehat752> b = 3
[13:56:40] <whitehat752> b
[13:56:43] <NodeX> it's very annoying that MongoDB doesn't have a direct !=
[13:56:44] <whitehat752> returns 3
[13:57:02] <whitehat752> a = db.test.find ....
[13:57:03] <NodeX> latest version is what I'm on
[13:57:04] <whitehat752> a
[13:57:10] <NodeX> and it requires "var"
[13:57:11] <whitehat752> returns blank line
[13:57:20] <NodeX> var a =.....
[13:57:27] <whitehat752> OK
[13:58:02] <whitehat752> it worked one time ))
[13:58:14] <whitehat752> c = db.test.find...
[13:58:14] <whitehat752> c
[13:58:24] <whitehat752> return some objects
[13:58:24] <NodeX> what's the problem?
[13:58:26] <whitehat752> c
[13:58:31] <whitehat752> returns blank line )
[13:58:38] <NodeX> so put "var" infront of it
[13:58:42] <NodeX> end of problem
[14:00:16] <whitehat752> db.freenode.insert({title:"blablabla"});
[14:00:20] <whitehat752> var a = db.freenode.find();
[14:00:25] <whitehat752> a; { "_id" : ObjectId("508fdcb5552cf6e81aae813b"), "title" : "blablabla" }
[14:00:36] <whitehat752> a;
[14:00:55] <whitehat752> why doesn't it print output for the second time?
[14:01:33] <Gargoyle> whitehat752: Have you tried typing in your shell and not the IRC window!? ;)
[14:01:55] <whitehat752> it's copypaste from shell
[14:02:02] <whitehat752> it works only once
[14:02:33] <NodeX> it's probably a memory saving thing from Mongo
[14:02:55] <NodeX> storing lots of documents in an object for an unspecified time is not efficient
[14:03:22] <whitehat752> a.forEach(printjson); { "_id" : ObjectId("508fdcb5552cf6e81aae813b"), "title" : "blablabla" }
[14:03:22] <Gargoyle> whitehat752: a = the cursor, not the object.
[14:03:34] <whitehat752> it's still there...
[14:03:51] <Gargoyle> do "var x = n[0];"
[14:04:01] <Gargoyle> then you can print x as many times as you like
[14:04:19] <whitehat752> cursor? so it just "goes" to the next element, which doesn't exist?
[14:04:20] <IAD1> whitehat752: it will work with findOne oly
[14:04:22] <Gargoyle> my n = your a
[14:04:24] <IAD1> *only
[14:04:45] <Derick> Gargoyle: we're having a look at your issue btw - as long as the guy isn't in a swimming pool that is
[14:04:50] <whitehat752> oh, ok
[14:04:59] <NodeX> "storing lots of documents in an object for an unspecified time is not efficient"
[14:05:13] <Gargoyle> Derick: good… and eeek!
[14:05:22] <Derick> office in NYC seems without power
[14:05:29] <NodeX> :/
[14:05:33] <Derick> at least JIRA works :-)
[14:05:48] <Gargoyle> whitehat752: After you have done var a = db.coll.find(), then you can type a.help() to find the methods on the cursor that you can use.
[14:06:36] <Gargoyle> I'm only getting 34KB/s from downloads-distro.mongodb.org
[14:07:33] <NodeX> ok one more time for posterity
[14:07:35] <NodeX> "storing lots of documents in an object for an unspecified time is not efficient"
[14:07:41] <whitehat752> tried dot notation....
[14:07:45] <whitehat752> > db.freenode.insert({_id: 1, title:"blablabla",body:{par1:"test1",par2:"test2"}})
[14:07:45] <whitehat752> > var a = db.freenode.find({_id:1})
[14:07:45] <whitehat752> > a.title = "newtitle"; newtitle
[14:07:45] <whitehat752> > a.body.par1 = "newpar1"; Tue Oct 30 16:04:48 TypeError: a.body has no properties (shell):1
[14:07:52] <NodeX> the cursor is dumped after your initial request
[14:08:19] <NodeX> a is a cursor not a document
[14:08:25] <NodeX> (in that scenario)
[14:08:56] <Gargoyle> whitehat752: …find() returns a cursor, findOne() returns a document.
[14:09:16] <NodeX> a cursor -must- be looped / iterated
[14:09:42] <whitehat752> with findOne - it works :) thanks
[14:11:22] <whitehat752> for find I should use for/forEach... for findOne I can use the variable like a document, I've got it :)
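The one-shot behavior whitehat752 ran into can be mimicked with a toy JavaScript cursor (a sketch, not the real shell cursor): iterating consumes it, so a second pass sees nothing.

```javascript
// Toy one-shot cursor over an in-memory array. Like a real find() cursor,
// forEach drains it; a second forEach finds it already exhausted.
function makeCursor(docs) {
  var i = 0;
  return {
    hasNext: function () { return i < docs.length; },
    next: function () { return docs[i++]; },
    forEach: function (fn) { while (this.hasNext()) fn(this.next()); }
  };
}

var cursor = makeCursor([{ title: "blablabla" }]);
var first = [], second = [];
cursor.forEach(function (d) { first.push(d); });   // consumes the cursor
cursor.forEach(function (d) { second.push(d); });  // nothing left to visit
```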
[14:13:45] <Gargoyle> Derick: I am just rebuilding the box I got the info for those segfaults from. But I think I can get my mac to do it as well. Just let me know if there's any more info I can dig up for you.
[14:21:04] <Derick> Gargoyle: yup, will do
[14:31:37] <NodeX> email just received ... "Hi, PLEASE OPEN ATTACH FILE FOR MORE DETAIL"
[14:31:54] <ron> OMG
[14:32:09] <Gargoyle> Oooh! send me a copy! not had a good virus for ages!
[14:32:13] <Gargoyle> g@rgoyle.com
[14:32:14] <NodeX> does these things even work anymore?
[14:32:21] <NodeX> do*
[14:33:04] <ron> Gargoyle: if that's your real email address, you rock beyond imagination ;)
[14:33:08] <NodeX> the best ones I get recently are Indian firms spamming me to do web design work and SEO
[14:33:22] <Gargoyle> in that case...
[14:33:34] <ron> damn, man, kudos.
[14:33:42] <NodeX> each email gets the same response, "Please give me a quote to redo my website"
[14:33:44] <Gargoyle> ga.rgoyle.com <-- my blog. (same domain!)
[14:33:49] <NodeX> nic
[14:33:51] <NodeX> nice
[14:34:20] <ron> one of my emails is being@inter.net.il ;)
[14:34:36] <Gargoyle> Not written anything for ages, been rebuilding it in symfony2.
[14:35:02] <NodeX> *shuddder*
[14:37:40] <evan_> has anyone tried to replay oplog of production in development environment to simulate load?
[14:53:22] <Gargoyle> NodeX: Is keeping the viruses all to himself! (or google blocked it)
[14:54:17] <NodeX> LOL
[14:54:22] <NodeX> oh you were serious?
[14:54:27] <Gargoyle> yeah
[14:54:51] <NodeX> sent
[14:55:22] <Gargoyle> I like poking round to see what it's trying to do! So many these days are just "come and stick your username and password in my phishing site".
[14:55:30] <NodeX> it's some word doc
[14:58:44] <NodeX> the doc is even called "Urgent & important.doc" LOL
[15:01:23] <Gargoyle> meh, just one of those nigeria fund transfer scams.
[15:01:46] <NodeX> :/
[15:01:51] <NodeX> boring
[15:04:37] <WoodsDog> we upgraded from mongo 2.0 to 2.2. we are running a replica set. in the 2.0 version, we were using a read only user to backup with mongodump.
[15:04:40] <WoodsDog> that isn't working anymore
[15:04:44] <WoodsDog> anyone know why?
[15:11:08] <timg> hi
[15:18:31] <doxavore> If background flushing is averaging 15 seconds, is there something that should be tweaked? I'm seeing periodic locking of the DB (reads and writes). I've heard background flushing is truly in the background and shouldn't affect anything, but I'm at a loss as to what's causing it.
[15:19:04] <doxavore> I've tried using a single disk, various levels of RAID, they all see disk busy% spike and the DB freeze.
[15:39:27] <jrdn> geese, replica has been in recovery and doing something for 6 hours
[16:02:58] <addisonj> hrm, so I have a large collection of log data that I just dropped, some 35 gb, will mongodb reuse those storage files, or do I still need to do a compact?
[16:03:33] <NodeX> it will eventually come back and page out/in
[16:05:09] <jrdn> NodeX, so our primary crashed last night and our secondaries promoted themselves… but, the secondaries somehow were stale.. so we recovered our initial primary and are trying to create new replicas for it… doing a full resync has been taking about 6 hours and there's only 12G of data
[16:05:27] <jrdn> we're going to attempt just copying the data from the primary (since we have that data snapshotted anyway)
[16:05:30] <jrdn> anything to look for
[16:06:47] <NodeX> It's not something I know a lot about, jrdn, sorry
[16:06:49] <addisonj> make sure your oplog isn't empty on the secondary, otherwise it will just trash the data
[16:08:03] <addisonj> 6 hours though, for only 12gb... you should be done or pretty close, unless your write lock % has been consistently high
[16:08:27] <addisonj> or your network blows (but it sounds like you are on ec2)
[16:10:17] <_m> As addisonj said, you should be caught up or really close by now. I would spin up a new primary with the snapshot's data. Make sure the secondary's oplog isn't empty though.
[16:10:45] <jrdn> addisonj, yeah on ec2
[16:11:15] <jrdn> hmm i bet it is empty
[16:11:19] <jrdn> trying to see
[16:11:50] <doxavore> we sync about 500GB of data in 6 hours when turning up a new replica member - and we usually have positively horrible MongoDB performance :-/
[16:12:07] <jrdn> something seems wrong
[16:30:28] <codemagician> Is there an advantage to using Doctrine MongoDb versus writing my own customer Data Mapper patterns?
[16:30:45] <Derick> I am of the opinion you don't need an ODM with MongoDB
[16:31:13] <Derick> don't add extra layers, you can use Mongo from your models
[16:31:24] <jrdn> we were using odm, but got rid of it and started just using mongo on top of typical domain modeling
[16:31:27] <codemagician> Derick: yes, I had been thinking this
[16:33:03] <codemagician> is there a simple way to convert model classes to arrays… perhaps just overriding the __toArray() magic method and then casting the model objects before passing them into save() ?
[16:33:27] <codemagician> or using a SPL interface?
[16:33:39] <codemagician> within PHP
[16:36:09] <bhosie> i noticed here http://www.mongodb.org/display/DOCS/Excessive+Disk+Space that running repairDatabase() is blocking action. Does it block at the database level, or will it block the whole mongod process?
[16:39:50] <jrdn> codemagician_, we have repositories, services, and domain models.. repositories only do persistence to mongo, services use the repository, finds hydrate the domain models, and saves "dehydrate" them… you can use toArray() or explicitly get what you need in each save method.
[16:40:25] <jrdn> addisonj, _m, so if the oplog count is 0 on the replica that's trying to resync.. that means it's F'd right?
[16:42:02] <codemagician_> jrdn: is the MongoDb PHP API enough that I don't need to write an abstraction layer (such as a data mapper pattern) between my controllers and it?
[16:42:37] <jrdn> yeah
[16:43:00] <jrdn> as Derick said, you can just put the document data into a $data property in your domain model
[16:43:42] <codemagician_> jrdn: if for example I have a class User { private $_id; private $data1; .. } could I just do a $db->save($user). If the value of _id is null, will it insert and if it's set to a MongoDb object will it update?
[16:46:12] <_m> codemagician_: http://php.net/manual/en/mongocollection.save.php
[16:46:38] <Derick> codemagician_: $_id will automatically be added
[16:48:07] <_m> Also see: https://github.com/mongodb/mongo-php-driver/blob/master/collection.c#L1090-1126
[16:48:42] <codemagician_> What about if my app has a model hierarchical and a child and parent model changes, would I then gain from using a ODM like Doctrine?
[16:48:44] <Derick> dunno if it actually will update the $_id property of the object
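The insert-or-update rule being discussed can be modeled in a few lines of plain JavaScript (a toy in-memory sketch of save() semantics, not the PHP driver): a document without _id is inserted and gets an id; one with _id replaces whatever is stored under that id.

```javascript
// Toy in-memory model of collection.save(): insert when the document has
// no _id, otherwise replace the stored document with that _id.
var seq = 0;
function save(store, doc) {
  if (!("_id" in doc)) doc._id = "oid" + (++seq); // real driver generates an ObjectId
  store[doc._id] = doc;
  return doc._id;
}

var store = {};
var id = save(store, { name: "codemagician" });   // insert: _id assigned
save(store, { _id: id, name: "codemagician2" });  // replace by _id
```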
[16:49:15] <NodeX> all these fancy words
[16:50:13] <codemagician_> or will the standard MongoDb know which fields have changed and only update those
[16:50:27] <Derick> no
[16:50:32] <codemagician_> *MongoDb PHP implementation
[16:50:38] <Derick> still not :-)
[16:51:22] <NodeX> upserting is done with a query, if the query is matched then it updates else inserts
[16:51:29] <NodeX> if that explains it any better for you
[16:51:32] <codemagician_> say I have a tree of objects like A which contains B, C and D. If I delete D and update C. Then I save A, will MongoDb PHP just take care of all this?
[16:51:41] <Derick> no
[16:51:45] <codemagician_> i.e. will it chain down
[16:51:46] <Derick> there is no magic in the driver
[16:52:03] <codemagician_> so then, I will end up with a fat controller scanning for changes?
[16:52:16] <_m> codemagician_: I find it easier to use an ORM for larger applications as generally some amount of an ORM's functionality will need to be reimplemented within my stack.
[16:52:21] <codemagician_> which I why I had leaned towards having an abstraction layer
[16:52:37] <Derick> remember though that in mongodb, you don't have relations between collections
[16:53:01] <codemagician_> _m, do you mean ODM?
[16:53:42] <codemagician_> I thought ORM's were for impedance mismatch when dealing with RDMS?
[16:55:10] <_m> Yes, ODM.
[16:57:35] <jrdn> We're running doctrine's ODM in production right now… it was a nightmare at first but seems "okay" now
[16:57:49] <MatheusOl> Is it possible to do a backward iteration on a cursor? Like: my_cursor.previous()?
[16:57:55] <NodeX> dont you find it bloated?
[16:57:58] <jrdn> yep
[16:57:59] <jrdn> super
[16:58:54] <jrdn> we still use domain objects and custom mapping to ease validation and provide something to the view (so we show 'name' instead of 'n' / 'createdAt' instead of 'ca', etc)
[16:59:10] <jrdn> but it's just one file now instead of 1000000012398489072398074
[16:59:35] <jrdn> and then a custom cursor to do object mapping… but where our app needs speed, we just use raw mongo.
[16:59:43] <jrdn> so far it's been less "magical"
[17:25:11] <jrdn> what causes the "replSet not trying to sync from *, it is vetoed for N more seconds"
[17:34:37] <TecnoBrat> do we have a realistic timeline for 2.2.1? every day I look its pushed back another day.
[17:35:01] <tomreyn> hi, i'm trying to back up all mongo databases (ideally including any relevant meta data such as permissions) on a server using mongodump --username $DBUSER --password $DBPASSWORD --out $BAKDIR/$TIMESTAMP
[17:35:12] <ron> TecnoBrat: so it's consistent.
[17:35:16] <tomreyn> does this sound reasonable or am i missing something important there?
[17:35:40] <diegok> is friedo around? (mantainer of perl driver)
[17:35:45] <tomreyn> i'm asking because /var/lib/mongodb is 11 GB while my backup is only 1.1 GB
[17:36:30] <diegok> tomreyn: it is. Mine is ~30gb -> 7gb
[17:36:49] <tomreyn> great, thank you diegok
[17:37:17] <diegok> tomreyn: still, I can't confirm you're doing it well tho :-/
[17:37:18] <diegok> :)
[17:37:59] <tomreyn> diegok: of course. does the command i'm using look reasonable, though?
[17:38:15] <tomreyn> what do/would you use?
[17:39:04] <tomreyn> and finally, should i expect these .bson files to be well compressable, so i can reduce the filesize even further?
[17:40:00] <jrdn> is the oplog supposed to be empty when doing an initial sync?
[17:41:10] <diegok> tomreyn: I'm using a variant of https://github.com/gwaldo/mongoBackup (I can't find the one I've used as my base)
[17:42:01] <diegok> tomreyn: but yes. the command looks fine -> https://github.com/gwaldo/mongoBackup/blob/master/mongoBackup#L30
[17:42:48] <TecnoBrat> ron: consistent, sure :P
[17:42:57] <ron> TecnoBrat: ;)
[17:43:53] <NodeX> consistently late :P
[17:44:45] <tomreyn> muchas gracias, diego!
[17:54:15] <ron> mine was more painful.
[17:54:16] <ron> :p
[17:55:12] <Gargoyle> And you'll stag there until you behave! ;)
[17:55:26] <ron> Okay, that's just sick.
[17:55:43] <NodeX> stag : fail
[17:55:59] <NodeX> Gargoyle : you from liverpool right?
[17:56:05] <Gargoyle> yup
[17:56:11] <NodeX> How come you got a job then :P
[17:56:14] <NodeX> lmao
[17:56:22] <NodeX> (joking)
[17:56:46] <Gargoyle> ha ha. 'cos your mum stopped paying me! ;-)
[17:57:04] <NodeX> man, do you want me to phone her and tell her to pay up?
[17:57:12] <NodeX> I'm so sorry about that
[17:57:20] <Gargoyle> :D
[17:57:36] <NodeX> I'm coming to liverpool next weekend
[17:57:45] <NodeX> Xmas shopping with Mrs Nodex :(
[17:57:49] <Gargoyle> Ohh! what for?
[17:58:02] <Gargoyle> ahhh. never mind.
[17:58:10] <NodeX> and on the 18th, Flying on holiday
[17:58:20] <NodeX> although Speke is not Liverpool
[17:59:05] <ron> omg omg omg you can meet and be bffs omg omg omg
[17:59:11] <Gargoyle> NodeX: Where you from again?
[17:59:13] <NodeX> omg like totaly
[17:59:20] <NodeX> I live in Northwich
[17:59:23] <jrdn> Y NO REPLICATION RESYNC WORK
[17:59:24] <NodeX> or just outside
[17:59:30] <jrdn> could my data be corrupt? if so how can i check?
[17:59:39] <NodeX> about 21 miles from Liverpool iirc
[19:03:54] <LouisT> I'm trying to allow users to search multiple fields in a database using a string or regex.. what would be the safest way to do user supplied regex?
[19:11:13] <crudson> LouisT: sanitizing input doesn't get you anything. you can strip out whatever for peace of mind, but there is no 'bson injection' or such. If you want to validate that a regex is valid or conforms to some specific rules that you want, you'll have to do that in your application; the options available to you will depend on the language being used.
[19:13:02] <LouisT> Well it'll be PHP, but my issue is that I planned on using $where with a function to check multiple fields, I'm just not sure if it'd be easy/possible for them to exploit it.. So I figured someone else would know more than I do.
[19:15:56] <NodeX> http://www.youtube.com/watch?v=IOtOVe8aykk
[19:16:33] <_m> LouisT: I would recommend using ElasticSearch for indexing/searching things.
[19:16:47] <NodeX> that is funny LOL
[19:16:59] <_m> NodeX: Using ElasticSearch?
[19:17:17] <NodeX> no, my youtube post
[19:17:23] <_m> Ahh.
[19:17:57] <NodeX> but personally, out of ES and Solr I chose Solr simply because it was less buggy and a more mature project
[19:17:58] <_m> http://i.imgur.com/GrI0m.gif
[19:18:46] <LouisT> _m: I've never heard of that
[19:19:35] <_m> LouisT: http://www.elasticsearch.org/
[19:20:36] <LouisT> _m: yyeeaaa, that's far more work than needed
[19:21:29] <_m> Depends on how much locking/scanning you can afford on your Mongo cluster.
[19:22:37] <LouisT> a lot heh
[19:22:57] <LouisT> it's not under constant use so it wont be an issue
[19:23:17] <LouisT> also there aren't many records so that'll be fine as well
[20:24:26] <therealkoopa> I'm using a shortId for a _id field. Should I use something specific for the type? Or is string okay
[20:37:32] <Gargoyle> Ooh! Has mongo 2.2.1 just been released?
[20:37:54] <JoeC> yes
[21:34:58] <tystr> hello
[21:35:16] <tystr> what are some ways you guys monitor the replication status of a replica set
[21:35:41] <tystr> more specifically, how would you trigger/receive alerts if a members becomes out of sync, or primary steps down, etc
[21:36:25] <LouisT> is this incorrect? {"$or":[{"User":{"$regex":"\/louis\/i"}},{"Desc":{"$regex":"\/louis\/i"}}]}
[21:36:42] <LouisT> for some reason it returns nothing
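The likely culprit is the slashes and flag embedded in the string: as a pattern, "\/louis\/i" matches the literal characters /louis/i rather than a case-insensitive louis. The shape that works in a query document is { User: { $regex: "louis", $options: "i" } }. Demonstrated in plain JavaScript:

```javascript
// The string "\/louis\/i" becomes a pattern containing literal slashes,
// so it never matches a plain name.
var bad = new RegExp("\\/louis\\/i");
var good = /louis/i; // query-document form: { $regex: "louis", $options: "i" }

var hit1 = bad.test("Louis");   // false: pattern demands literal slashes
var hit2 = good.test("Louis");  // true: case-insensitive match
```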
[21:36:43] <Gargoyle> tystr: mms.10gen.com
[21:38:04] <tystr> yes we've set up mms monitoring
[21:38:21] <tystr> but it doesn't seem too flexible as far as notifications go
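For alerting beyond MMS, a cron'd script can inspect rs.status() directly. The checking logic below is plain JavaScript; the status document is a fabricated sample, and in a real script you would pass rs.status() instead:

```javascript
// Flags replica-set members that are in an unhealthy state.
// Healthy members report PRIMARY, SECONDARY, or ARBITER.
var OK_STATES = { PRIMARY: true, SECONDARY: true, ARBITER: true };

function unhealthyMembers(status) {
  return status.members.filter(function (m) { return !OK_STATES[m.stateStr]; });
}

// Fabricated sample of the shape rs.status() returns.
var sample = { members: [
  { name: "db1:27017", stateStr: "PRIMARY" },
  { name: "db2:27017", stateStr: "SECONDARY" },
  { name: "db3:27017", stateStr: "ROLLBACK" }
] };
var alerts = unhealthyMembers(sample); // flags the ROLLBACK member
```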
[21:55:17] <meghan> friendly reminder for the west coast people here MongoSV (silicon valley mongodb conference) early bird is ending this week http://www.10gen.com/events/mongosv
[21:57:56] <bakadesu> thank you meghan
[22:02:45] <Zelest> NodeX, looks like my hatred for debian/ubuntu remains.. ugh. :P
[22:02:45] <alx___> is it possible to store a matrix in mongodb
[23:10:17] <aboudreault> do you sometime manage your app users inside mongodb, and not just the app itself (big data, etc.) ?
[23:12:11] <drunkgoat> i need some help with node-mongodb. i'm doing collection.find({_id:{'$in':userList}}).toArray(function(err, users) {} ). this should return all users with _id in userList[]. is that right?
[23:13:51] <wereHamster> yes
[23:15:02] <drunkgoat> ok so i must be doing something else wrong :P thanks
[23:15:21] <wereHamster> drunkgoat: lemme guess, userList is an array of strings
[23:15:32] <drunkgoat> no no it objectids
[23:15:45] <wereHamster> does that command work on the mongo shell?
[23:17:13] <drunkgoat> ummm never tried. i'll check
[23:19:18] <drunkgoat> yes it does
[23:19:35] <drunkgoat> so it must be unrelated to mongo
[23:55:23] <jin> I have a question about replica sets with 4 nodes. I am looking to configure a secondary node to be a backup by setting a slaveDelay. However, I can't find any documentation to set this backup node to read only from the other secondary, so that it won't bog down on the primary. I would appreciate if you have some pointers.