#mongodb logs for Thursday the 2nd of August, 2012

[00:04:29] <acidjazz> how do i check if a key exists in a keypair in a collection
[00:04:47] <acidjazz> like collection: { bobs: { this: that, far: side } }
[00:04:51] <acidjazz> how do i check for this or far
[00:06:14] <TkTech> acidjazz: $or and $exists.
[00:26:10] <acidjazz> TkTech: notice i specified the key not the value
[00:26:31] <TkTech> acidjazz: Notice my answer does not change.
[00:28:15] <acidjazz> what if i had a value in there the same name as a key
[00:28:31] <acidjazz> like if name: was a key
[00:28:34] <acidjazz> but some guys name was name
[00:28:39] <acidjazz> ..
[00:28:45] <acidjazz> could never search for him
[00:28:45] <TkTech> Go read the g'damn documentation.
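
For reference, a minimal mongo shell sketch of TkTech's $or/$exists suggestion (the collection name "coll" is hypothetical):

    // match documents where either bobs.this or bobs.far is present
    db.coll.find({ $or: [
        { "bobs.this": { $exists: true } },
        { "bobs.far": { $exists: true } }
    ] })
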
[01:16:11] <Almindor> hello again
[01:17:14] <Almindor> our replica set restore is taking too much time and resources, is it possible to just copy files from the master (the whole directory contents) when mongod is turned off and sync this way or would the metadata prevent this from working? (I noticed this method is mentioned on the web)
[01:17:28] <Almindor> I wasn't sure what "all node data" means: all files in the node dir, or just some
[02:51:00] <Almindor> can someone confirm if copying all node data (that is all contents of the node's data dir) to a replica node is the same as doing a full resync? (when mongod is off of course)
[04:51:43] <adiabatic> I have a document that looks like {'quests': {'kill_ten_rats': {'status': 'unstarted', 'progress_max': 10}}}. How can I change the status of the kill-ten-rats quest to 'started' without accidentally erasing the progress_max key/value pair?
[04:58:03] <crudson> adiabatic: update with something like {$set:{'quests.kill_ten_rats.status':'completed'}}
[04:59:14] <adiabatic> ok, so “status” needs to be after a ., not in a subdict (I'm using Python)
[05:00:08] <crudson> adiabatic: the key points are 1) using $set so the rest of the document is unaffected 2) using dot notation to reference an embedded attribute
[05:00:30] <adiabatic> ah, thanks!
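
Spelled out, crudson's answer looks like this in the mongo shell (using 'started' per the original question; the collection name "players" and the query filter are hypothetical):

    // $set leaves the rest of the document untouched;
    // dot notation reaches the embedded field without replacing quests
    db.players.update(
        { "quests.kill_ten_rats": { $exists: true } },
        { $set: { "quests.kill_ten_rats.status": "started" } }
    )
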
[07:37:55] <[AD]Turbo> hi there
[08:49:59] <stefancrs> uhm, by accident I created a collection called "Vatten & Avlopp"
[08:50:06] <stefancrs> how do I reference it so I can drop it?
[08:50:21] <stefancrs> (in mongo shell)
[08:52:17] <NodeX> db.foo.drop
[08:52:20] <NodeX> db.foo.drop();
[08:53:00] <NodeX> but as it has spaces you probably want .. var foo = 'Vatten & Avlopp'; db.foo.drop()
[08:53:12] <stefancrs> ah ok, thanks
[08:53:47] <NodeX> (untested)
[08:55:10] <stefancrs> hm, no luck
[08:55:14] <stefancrs> > var mycoll = "Vatten & Avlopp"; db.mycoll.drop(); false > db.getCollectionNames(); [ "Vatten & Avlopp", "accounts", "issues", "system.indexes" ]
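
The attempt above fails because db.mycoll refers to a collection literally named "mycoll"; the variable is never consulted. A sketch of what does work in the mongo shell, looking the collection up by its string name:

    db.getCollection("Vatten & Avlopp").drop()
    // equivalent bracket form:
    db["Vatten & Avlopp"].drop()
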
[08:55:20] <Bartzy> Hello
[08:55:26] <Bartzy> How does pooling work in the drivers?
[08:55:41] <Bartzy> I mean, if it's not per request - where are the connections saved, where are they initiated?
[08:55:52] <Bartzy> And if it's per request - we still create a new connection per page load - so what's the point?
[08:56:51] <NodeX> you can persist your connections
[08:57:07] <Bartzy> How does it work ?
[08:57:24] <Bartzy> where are they persisted exactly? Which process allocates the memory for them ?
[08:57:56] <NodeX> read the docs
[08:58:15] <NodeX> and they work by assigning an identifier - i.e. a session id or something
[08:59:45] <Bartzy> so the connection is saved on the mongodb server - and then the clients only need to open a TCP socket and ask for a connection ?
[09:00:04] <Bartzy> NodeX: Can you provide a link to the docs? I didn't find anything that explains how it works there.
[09:01:16] <NodeX> pass, I really don't care for the internals as they don't affect my apps, ergo I don't know where the docs are on how mongodb handles memory with connections
[09:02:35] <Bartzy> NodeX: But where is the connection stored? :\
[09:02:58] <NodeX> ^^
[09:03:06] <NodeX> what does it matter?
[09:04:20] <Bartzy> NodeX: Ah. The PHP-FPM processes don't die after each request
[09:04:22] <Bartzy> they serve many requests
[09:04:26] <Bartzy> so the connection persists
[09:04:27] <Bartzy> cool.
[09:04:31] <stefancrs> :)
[09:14:50] <stefancrs> NodeX: ended up doing it via a web ui :)
[09:15:56] <NodeX> oki doki
[10:55:46] <PDani> hi
[11:47:56] <atul> hi, can anyone help. i am trying to use casbah with scala 2.9.1, what should be the sbt dependency...
[11:47:58] <atul> thank you
[11:48:24] <atul> i am trying this "com.mongodb.casbah" %% "casbah" % "2.1.5.0"
[11:48:36] <atul> but not working
[12:18:24] <zakg> is there no support for mongo in SUSE?
[12:23:07] <Tankado> Is mongoDB supported for power pc linux ?
[12:25:23] <Tankado> anyone? :)
[12:26:02] <cenuij> zakg: http://software.opensuse.org/package/mongodb
[12:26:57] <cenuij> zakg: if there's something missing, patch the package yourself via your open build service account and submitrequest it back to the development project ;)
[12:40:19] <algernon> ppc is big-endian; mongodb - as far as I remember - runs only on little-endian architectures.
[12:40:34] <algernon> Tankado: ^
[12:40:46] <Tankado> thanks :(
[12:41:09] <algernon> though, there are clients that run on ppc, but not the server
[12:47:12] <zakg> how do i start the mongo server in suse?
[13:18:39] <zakg> NodeX: how do i start the mongo server in SUSE?
[13:19:07] <ron> zakg: rtfm. seriously.
[13:21:22] <NodeX> pass, I don't use SUSE
[13:36:15] <Datapata> Hiya. I have massive problems with my replica set. I have one node on AWS and two in the office, one of the machines in the office is trying to sync and the other machine in the office keeps dying. Since sync uses the closest machine to sync it uses the dying machine.
[13:36:19] <Datapata> Argh.
[13:37:09] <Datapata> I have a mongodump from yesterday which I took with --oplog, can I use that on the node that needs to be resynced and then let it join the replica-set?
[13:37:34] <Datapata> This mongodump was taken on the machine on AWS.
[13:55:57] <stefancrs> morning
[13:57:56] <stefancrs> when implementing a "full text search" in mongodb I usually split any stored text up into its separate words and store those as search keywords. but if one also wants to be able to search with quotation marks, like "word1 word2", and only get hits with word2 following word1 in the stored text, how would one go about doing that?
[13:58:10] <stefancrs> for any given "quoted sentence" that is :)
[13:59:30] <ron> by using an external index engine and not using mongodb for FTS.
[13:59:49] <kali> you can store both the split words and the actual text. on a query, you filter the results given by the words by looking for the right sequence in the text
[14:00:26] <kali> but... if you want something serious, external engine is the way to go
[14:00:58] <stefancrs> ok :)
[14:01:04] <stefancrs> in this specific case
[14:01:11] <stefancrs> I will probably just settle for keyword search instead
[14:01:32] <stefancrs> it's not a deal-breaker
[14:01:42] <stefancrs> basically it's a search in an issue tracker
[14:06:04] <NodeX> stefancrs : you can also use $in to search your separate words if you store them as an array
[14:06:58] <NodeX> field : { $in : ['word1','word2'] }
[14:08:41] <stefancrs> NodeX: yeah
[14:08:57] <stefancrs> NodeX: or $all if I want to do a logical AND
[14:09:37] <stefancrs> which probably is the more common case
[14:09:48] <stefancrs> (in this scenario)
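
A quick sketch of the two operators under discussion, run against a keywords array (the collection name "issues" matches the issue-tracker use case mentioned above but is hypothetical):

    // logical OR: a document matches if it has any of the words
    db.issues.find({ keywords: { $in: ["word1", "word2"] } })
    // logical AND: a document matches only if it has all of the words
    db.issues.find({ keywords: { $all: ["word1", "word2"] } })
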
[14:10:23] <NodeX> dont expect the performance to be good
[14:10:30] <NodeX> especially if using regex
[14:12:41] <stefancrs> no regex will be used
[14:13:15] <stefancrs> or maybe just ^keyword
[14:13:17] <stefancrs> so to speak
[14:13:35] <stefancrs> but I will make sure that the query will be indexed
[14:13:46] <NodeX> always use a prefix
[14:13:46] <stefancrs> uhm, will be using an index
[14:13:52] <stefancrs> always? why?
[14:13:57] <NodeX> else it won't use an index
[14:14:04] <stefancrs> huh
[14:14:14] <kali> ^blah is indexable, not blah
[14:14:21] <stefancrs> w00t
[14:14:24] <stefancrs> new to me
[14:14:29] <stefancrs> sounds totally... weird.
[14:14:39] <stefancrs> why would searching for an exact string match not be indexed?
[14:14:54] <NodeX> you misunderstand
[14:14:55] <kali> ha. sorry. /^blah/ is indexable, not /blah/
[14:15:01] <NodeX> using a prefix will HIT an index
[14:15:06] <NodeX> not using one wont
[14:15:23] <kali> and "blah" is also indexable
[14:15:26] <stefancrs> you're talking exclusively about when using a regex, right?
[14:15:48] <stefancrs> kali: phew
[14:16:22] <NodeX> they all index ... with regex using a prefix it will use the index, else it won't
[14:17:10] <stefancrs> NodeX: yeah, which is why "always use a prefix" caught me off guard, since it was after I said "no regex will be used, or maybe just ^keyword" :)
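
To summarize the indexing point: only an anchored regex can be turned into an index range scan. A sketch, assuming an index on keywords:

    db.issues.ensureIndex({ keywords: 1 })
    db.issues.find({ keywords: "word1" })   // exact match: uses the index
    db.issues.find({ keywords: /^word/ })   // prefix regex: uses the index
    db.issues.find({ keywords: /word/ })    // unanchored regex: full scan
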
[14:18:03] <akaIDIOT> the docs have a nice page on sharding limits
[14:18:30] <akaIDIOT> query speed aside, is there a practical limit to the amount of sharded data ?
[14:19:07] <akaIDIOT> i remember one of the 10gen guys mentioning something about the mongos process needing all keys in memory, creating a limit on the total size of things
[14:19:12] <akaIDIOT> but cant find that anymore
[14:19:27] <akaIDIOT> so was wondering if that goes per db or per collection or per cluster
[14:22:22] <akaIDIOT> last two comments on the wiki page indicate the limit is per collection, though they don't point to any resource stating why and the like
[14:32:30] <paultag> Hey mongo'ers. I'm interested to know if I can search by finding a subset of a given array. Let's say I have a "capabilities" array for nodes, and in a document, it has a "needed_capabilities" (I'm just making this up as I go on, work with me here) -- is there any way to search for all documents whose required_capabilities are a subset of capabilities (array)?
[14:32:56] <paultag> $in is wrong for this, as is $and
[14:33:02] <paultag> I'm wondering if I'm missing something
[14:35:20] <NodeX> can you explain it differently?
[14:35:57] <paultag> It's a tough thing to explain. Let's see here.
[14:37:24] <paultag> If I had a build-farm and I wanted to test based on constraints - let's say my builders have a capabilities array -- [ 'gcc', 'debian', 'sid' ] -- and a build job -- needed_capabilities: [ 'gcc' ] & another with [ 'clang'] -- how can (given the first array, of the builder) I query for all jobs that have a subset of my array (e.g. return gcc, but not clang)
[14:37:29] <paultag> NodeX: ^
[14:37:37] <paultag> let me write some python to explain this better. Sorry, this isn't my strong suit.
[14:37:49] <NodeX> I don't know python, sorry
[14:38:38] <NodeX> I think you need $elemMatch if I am understanding you correctly
[14:38:38] <NodeX> http://www.mongodb.org/display/DOCS/Advanced+Queries#AdvancedQueries-%24elemMatch
[14:40:07] <paultag> I'd just like to make sure [ 'joe', 'bob' ] matches ['joe'], ['bob'], ['joe', 'bob'], ['bob', 'joe'], but not ['joe', 'jose'] or ['billy', 'rex']
[14:40:10] <paultag> NodeX: ^
[14:40:24] <paultag> elemMatch looks to be close-ish, but a bit off
[14:40:37] <paultag> it's most similar to $and, but backwards
[14:40:58] <paultag> wait, sorry, I misread that. It's not like $and
[14:41:11] <NodeX> it requires all are matched iirc
[14:41:29] <NodeX> "Note that a single array element must match all the criteria specified"
[14:42:39] <paultag> NodeX: how would you construct the find spec? I'm still a bit lost, I'm new to Mongo.
[14:42:50] <paultag> I'm not seeing it yet
[14:53:34] <paultag> NodeX: Still around? I could use some help with this, if you could spare some moments for me.
[14:54:22] <NodeX> 1 sec
[14:54:26] <paultag> sure.
[14:55:31] <NodeX> a couple of ways, the most effective would be to have a key value
[14:56:51] <NodeX> bleh: [{name:"baz"},{name:"foo"},{name:"bar"}] .. then you can do foo.find({"bleh.name":"baz","bleh.name":"foo"});
[14:56:56] <paultag> Well, the thing is that the array will be dynamic, I don't know this before the fact
[14:58:08] <paultag> and also, that's not exactly the right logic - I want to define the superset and find all records with a subset or matching set
[14:58:15] <NodeX> if you can guarantee the order then you can use .... bleh: ['foo','bar','baz'] ... {'bleh.0':'foo', 'bleh.1':'bar'}
[14:58:17] <paultag> not define a subset
[14:58:36] <paultag> since that'd allow ['foo', 'bar'] to match ['foo', 'bar', 'baz']
[14:59:02] <NodeX> err, 1 sec I answered this on stackoverflow a few months back
[14:59:09] <paultag> great :)
[15:00:19] <NodeX> can't remember my login though, hold on!
[15:00:32] <paultag> np! I'm here all day :)
[15:02:48] <NodeX> http://stackoverflow.com/questions/6165121/mongodb-query-an-array-for-an-exact-element-match-but-may-be-out-of-order/7938453#comment9706014_7938453
[15:02:58] <NodeX> thank firefox history for that one!
[15:04:03] <NodeX> looks like that's exactly what you want
[15:05:03] <paultag> NodeX: thanks! Let me take a look here :)
[15:05:36] <NodeX> basically you query with the array you want
[15:06:43] <paultag> Ah, yeah, I see. I'd like to have defined users: [ '2' ] & gotten back all things with users: '2' in it, as with [ '2', '3' ] getting back all with both 2 and 3 in there
[15:06:52] <paultag> NodeX: it's very close, but slightly off :(
[15:06:59] <NodeX> how?
[15:07:12] <paultag> I'd like to have defined users: [ '2' ] & gotten back all things with users: '2' in it, as with [ '2', '3' ] getting back all with both 2 and 3 in there
[15:07:15] <paultag> :)
[15:07:36] <paultag> I get the feeling Mongo can't handle this sort of expression, and I should do this outside mongo
[15:07:37] <NodeX> that's a contradiction to what you've said already
[15:07:48] <NodeX> [15:58:24] <paultag> since that'd allow ['foo', 'bar'] to match ['foo', 'bar', 'baz']
[15:07:56] <NodeX> but now you want it to match?
[15:08:04] <paultag> erm, crap, now I'm confusing myself.
[15:08:10] <NodeX> lol
[15:08:23] <paultag> ok, past me was right :)
[15:08:27] <paultag> ohhh, I see.
[15:08:34] <paultag> NodeX: I think this is it. Thanks!
[15:08:47] <NodeX> if you want where 2 exists in any array use foo:2
[15:08:57] <NodeX> if you want an array to match use foo : [1,2]
[15:09:08] <NodeX> that will only match foo : [1,2]
[15:09:17] <NodeX> and not foo:[1,2,3]
[15:09:18] <paultag> oh, but not [1] or [2]
[15:09:25] <paultag> or has to be [1,2]
[15:09:35] <NodeX> for 1 or 2 use $in
[15:09:43] <paultag> but N length
[15:09:47] <NodeX> foo : { $in : [1,2] }
[15:09:52] <paultag> Oh.
[15:09:52] <jarrod> i love mongo
[15:09:58] <NodeX> that's (SQL) where foo = 1 or foo = 2
[15:10:13] <NodeX> or SQL foo IN(1,2)
[15:10:22] <NodeX> (more appropriate)!!
[15:10:25] <paultag> NodeX: OK, I need to sit down with this. I think you've given me enough to work with :)
[15:10:28] <paultag> NodeX: thanks a lot! :)
[15:10:31] <NodeX> no probs
[15:10:36] <NodeX> come back if you get stuck
[15:10:41] <paultag> sure enough!
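
Collected from the exchange above, plus a hedged sketch of the subset query paultag originally asked about - the $not/$elemMatch/$nin combination is an assumption, not something stated in the thread:

    db.jobs.find({ needed_capabilities: "gcc" })    // "gcc" appears anywhere in the array
    db.jobs.find({ needed_capabilities: ["gcc"] })  // the array is exactly ["gcc"]
    db.jobs.find({ needed_capabilities: { $in: ["gcc", "clang"] } })  // has "gcc" OR "clang"
    // subset test (assumed): matches jobs where no needed capability falls
    // outside the builder's capabilities ["gcc", "debian", "sid"]
    db.jobs.find({ needed_capabilities: { $not: { $elemMatch: { $nin: ["gcc", "debian", "sid"] } } } })
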
[15:26:50] <Almindor> could someone please confirm if it's possible to restore a replica node by deleting all its data and then copying all the data from the working master node (including journal etc. all in the folder)?
[15:27:11] <Almindor> I tried a full resync by deleting the node but it's taking ages and I think it actually stalled out of memory o.O
[15:30:21] <kali> Almindor: http://www.mongodb.org/display/DOCS/Resyncing+a+Very+Stale+Replica+Set+Member, last chapter, second option
[15:31:27] <Almindor> kali yes but "data files"
[15:31:37] <Almindor> kali: does it mean all the files in the node dir or not?
[15:33:44] <kali> Almindor: ha. i don't know for sure
[15:37:09] <Almindor> see, if I knew that I could do it yesterday ;)
[15:37:26] <Almindor> I just don't want to kill this stalled resync when I don't know if method #2 will work
[15:38:33] <Almindor> is it normal that a resync of ~300gb can eat 8gb of RAM and 16gb of swap and just stall?
[15:40:12] <kali> what does the resyncing slave say in its logs?
[15:41:26] <Almindor> kali: it's doing this % thing but it's stuck on 94% and I see only about half the files in the slave node dir compared to master
[15:41:33] <Almindor> otherwise it's just getting connections etc.
[15:42:19] <kali> it is probably building an index
[15:42:20] <Almindor> at first I saw nice progress when it was saving chunks
[15:42:36] <Almindor> hmm so index building probably ran out of space
[15:42:42] <kali> might be
[15:42:52] <Almindor> well we got a huge number of documents in 2 collections
[15:42:57] <Almindor> and they have some indexing
[15:42:58] <kali> i'm not sure it is a very good idea to have swap on a server with mongodb
[15:43:29] <Almindor> I think there's no point in letting it hang the server now tho it had the whole night and didn't move anywhere
[15:43:50] <jgornick> Hey guys, I'm trying to use map/reduce to produce a count of documents that will be uncategorized if I remove a category and all its child categories. I have a sample dataset of my category collection found at https://gist.github.com/c1dc2da8fdfb89a9ee69 Any help would be greatly appreciated!
[15:59:25] <niram> hi, i have a question about MongoGridFS::storeBytes in php driver (http://php.net/manual/en/mongogridfs.storebytes.php)
[16:00:23] <niram> what exactly does the "safe" option do? Does it ensure it was written to the journal (as it should) or does it do fsync (which would suck)
[16:34:03] <Almindor> just for others' info: it works if you copy everything from one replset node to another to resync
[16:34:16] <Almindor> and it's much much much faster and less intensive than normal resync
[17:49:21] <EricL> Any chance Tad Marshall is here?
[17:52:32] <drummerp> Hi, I'm having trouble using the Java connector for MongoDB. I'm using a function which retrieves a single document by the field objectId, as you can see here: http://pastebin.com/vM3BBLwn I have already connected to the database and initialised the `collection` variable. The issue is that when I call cursor.count(), I get an IllegalArgumentException at runtime with the message "'ok' should never be null..."
[17:52:40] <drummerp> Any ideas what's happening?
[18:06:24] <drummerp> I apologize, would it be better for you if I provided an SSCCE instead of just a code sample from a single function?
[18:06:29] <alexyz> does anyone have a pointer/link to an explanation of why the *mongos* role isn't part of a normal mongo server (why does it need to be a separate process)?
[18:14:52] <Vile1> Hi, do sharding and replica sets live together? I want to get something like this: http://vile.su/pics/is-it-20120802-191338.png but I'm not sure if that is how it actually works
[18:15:52] <jY> Vile1: sharding lives above the replicaset
[18:16:01] <jY> each member of a shard is a replicaset
[18:16:19] <Vile1> i.e. each shard will have to replicate on its own
[18:16:40] <jY> yes in case a shard member dies
[18:16:56] <gshipley> but you can do sharding without having each shard setup as a replica set?
[18:17:00] <Vile1> makes sense
[18:17:15] <jY> you can if you hate your data :)
[18:17:21] <gshipley> jY: and I do. :)
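
For context, a sketch of how a replica set is registered as a shard from a mongos shell (replica set and host names are hypothetical):

    // each shard is addressed as <replicaSetName>/<seed host list>
    sh.addShard("rs0/db1.example.net:27017,db2.example.net:27017")
    sh.addShard("rs1/db3.example.net:27017,db4.example.net:27017")
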
[18:18:00] <Vile1> Is there a way then to minimize synchronization traffic?
[18:18:18] <jY> between replicaset members?
[18:18:40] <Vile1> I want to place servers into different DC's
[18:19:38] <jY> have you read this http://www.mongodb.org/display/DOCS/Data+Center+Awareness
[18:20:35] <jY> i don't think there is any way to compress the data per se between datacenters
[18:21:30] <drummerp> This is very strange. I just created an SSCCE from my project, and the exception went away, and it properly retrieved the document and printed it accurately.
[18:21:44] <drummerp> I suppose it must be something to do with the way I'm integrating it with Spring.
[18:27:06] <Vile1> jY: do you know if mongo compresses the transferred data? If not, then I can probably introduce compression on VPN between DCs
[18:28:04] <jY> Vile1: no idea
[18:38:13] <thedahv> Howdy. I have a question about a good strategy to search on "full name" fields on a document if I'm currently storing "first_name" and "last_name" separately
[18:38:45] <thedahv> I'm using mongodb through the mongoose library. It's easy enough to define a virtual field so my application-level stuff can get the full_name without much fuss
[18:39:09] <thedahv> But fuzzy searching through documents by a full name search query is tough
[18:39:17] <thedahv> Should I just persist a full_name field?
[18:39:53] <drummerp> I believe I may have found the issue. I believe it to be a scope issue relating to Spring MVC which I have overlooked.
[18:40:32] <aheckmann> thedahv: you could, or add keywords to your docs and search those
[18:41:02] <thedahv> aheckmann: like a list of possible search terms?
[18:41:52] <aheckmann> thedahv: https://npmjs.org/package/mongoose-keywordize
[18:41:58] <aheckmann> thedahv: yeah
[18:42:25] <thedahv> That does sound nice. I'm not yet at the point where there would be another keyword search other than full_name
[18:42:42] <thedahv> Separating the names made a lot of sense when I want to do things like sorting
[18:42:51] <aheckmann> thedahv: right
[18:42:57] <thedahv> But at the user-level, they don't care about that. They just want to start typing and get the result they want
[18:43:24] <thedahv> Oooh, but I'm reading this library now
[18:43:35] <aheckmann> thedahv: so that module lets you specify which document fields to keywordize, e.g. auto-add to the keywords array when saving etc
[18:43:41] <ron> ton
[18:43:45] <thedahv> that's kind of sexy
[18:43:47] <thedahv> reading….brb
[18:45:11] <thedahv> Ok, so far it looks good
[18:45:20] <thedahv> How would I search through that though?
[18:45:49] <thedahv> people.find({keywords: { $contains: queryString }}) ??
[18:46:24] <thedahv> Err, $in
[18:46:25] <thedahv> rather
[18:47:32] <aheckmann> thedahv: the same way you search anything else in mongo. you can treat arrays like normal props, find({ keywords: /name/ })
[18:48:24] <aheckmann> thedahv: find({ keywords: { $in: [some, array] }})
[18:48:27] <thedahv> So if my query is something like "George Washington" and keywords is ['first_name', 'last_name'], it will match?
[18:48:41] <thedahv> Well, I guess I can just give it a shot and see :)
[18:49:22] <aheckmann> thedahv: yeah playing around with it quick (take a peek at the tests) will show you more quickly than i can type
[18:50:14] <thedahv> Cool. Thanks for the pointers. I'll report back when I get something working
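
The query shapes aheckmann describes, written out (collection and keyword values are hypothetical; the anchored-regex form ties back to the indexing discussion earlier in the day):

    // exact keyword membership
    db.people.find({ keywords: "washington" })
    // any of several keywords
    db.people.find({ keywords: { $in: ["george", "washington"] } })
    // typeahead-style prefix match; /^geor/ can use an index on keywords
    db.people.find({ keywords: /^geor/ })
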
[19:23:33] <houms> i have installed mongo on 3 servers and on one of them i cannot access the web interface. even though lsof shows the port is listening, it seems it is listening on localhost instead of the LAN ip
[19:25:01] <slava_> so tell mongod to listen on all interfaces
[19:25:14] <slava_> or the whatever IP you need
[19:28:25] <houms> slava what i am not clear on is that the servers were set up the exact same way using the same mongod conf file, which does not define bind_ip. just not sure why this one server is doing that. thought i missed a step somewhere
[19:28:30] <houms> thanks for your reply
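
For reference, the setting in question is bind_ip in mongod.conf; a minimal sketch:

    # mongod.conf
    # listen on all interfaces rather than only localhost
    # (make sure the port is firewalled appropriately)
    bind_ip = 0.0.0.0
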
[19:31:17] <houms> how can i completely remove repl set?
[19:37:43] <jonwage> my secondaries' replication is stopped due to this error "errmsg" : "syncTail: 10142 Cannot apply $pull/$pullAll modifier to non-array, syncing: { ts: Timestamp 1343931549000|54, h: 6544226030691754726, op: \"u\", ns: \"opensky_devo.identities\", o2: { _id: ObjectId('501ac24c1c0ba0563500007d') }, o: { $pull: { cart.quoteItems:"
[19:37:47] <jonwage> does anyone know how to recover?
[19:57:26] <thedahv> aheckmann: So that did yield some progress
[19:57:49] <thedahv> Waiting until after a lunch break usually makes things go that way :)
[19:58:14] <thedahv> So the reason I started exploring this is I'm implementing a typeahead search on an input field
[19:59:01] <nofxx> Just wondering.. is it normal for 'local.oplog.rs' to show > 1200ms? All my collections show there with 3ms, 50ms tops...
[19:59:10] <thedahv> This keyword search approach works, but only when the entire first name is entered
[19:59:17] <thedahv> So, not exactly what I'm shooting for...
[19:59:17] <nofxx> on mongotop* http://pastebin.com/2BqzX1GY
[19:59:21] <thedahv> But I have another idea...
[20:20:11] <elux> hi
[20:20:40] <elux> im trying to figure out how to query if a field is an empty array .. any suggestions?
[20:20:51] <elux> nevermind..
[20:20:54] <elux> $size 0 ..
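
elux's answer as a runnable one-liner (collection and field names are hypothetical):

    // documents whose "tags" field is an empty array
    db.coll.find({ tags: { $size: 0 } })
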
[20:34:36] <thedahv> aheckmann: I got a decent working solution for implementing typeahead searching on a 'full name' field
[20:35:04] <thedahv> Using the custom function on that keywordize library, I was able to get the union of the character arrays from the first_name and last_name fields
[20:35:45] <aheckmann> thedahv: nice!
[20:35:51] <thedahv> Then in my query, I can do something like "Person.where('keywords').in(searchChars)"
[20:36:13] <thedahv> Yeah, thanks again for the tip
[20:36:19] <thedahv> I think my users are really going to enjoy this
[20:38:33] <aheckmann> thedahv: good to hear
[20:49:35] <wingy> is there a way to get documents with linked documents in one query?
[20:59:16] <jn> hi, is there a way to get better formatting from the mongo shell? this isn't working for me http://xn--bl-wia.se/bd23f5ed98.png
[20:59:52] <grallan> wingy: I don't think there is from the command line. If you use an ORM, there might be one built for you. Like in mongoose for node.
[20:59:58] <grallan> jn: .pretty()
[21:00:09] <wingy> grallan: ok
[21:00:16] <jn> grallan: thx
[21:03:29] <aheckmann> jn: add "DBQuery.prototype._prettyShell = true" to your .mongorc.js file to have it pretty by default
[21:04:10] <MANCHUCK> aheckmann is that file created in windows client as well ?
[21:04:38] <aheckmann> MANCHUCK: you create it yourself. mongo picks it up if present. not sure about win but should be the same AFAIK
[21:04:52] <MANCHUCK> ok thanks
[21:05:31] <jn> aheckmann: even better, thanks
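
The two options from this exchange, collected in one place:

    // per query:
    db.coll.find().pretty()
    // or by default, by adding this line to ~/.mongorc.js:
    DBQuery.prototype._prettyShell = true
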
[22:12:52] <remonvv> Anyone aware of an eta for 2.2.0-rc1?
[22:37:09] <chubz> Is there a way I can find out which process id is being used by whichever mongo node?
[23:32:01] <rossdm> any way to track progress of db.copyDatabase()? It just seems to hang with no apparent network traffic