PMXBOT Log file Viewer

#mongodb logs for Thursday the 10th of July, 2014

[00:40:25] <node2js501> why should i choose MongoDB over PostgreSQL?
[00:41:03] <node2js501> maybe even for querying
[00:47:30] <stefandxm> node2js501, maybe you are gay
[00:47:45] <stefandxm> node2js501, or even worse you are heterosexual
[00:47:51] <node2js501> lol
[00:48:03] <node2js501> im neither but give me an honest answer
[00:48:15] <stefandxm> is she hot?
[00:48:30] <stefandxm> and if not; or well. fuck it
[00:48:33] <stefandxm> i really fuck it
[00:48:39] <stefandxm> its a female 8)
[00:48:44] <stefandxm> how do you probe her?
[00:49:29] <cheeser> enough
[00:49:41] <stefandxm> i agree
[00:50:24] <cheeser> that kind of stuff doesn't belong here
[00:50:54] <stefandxm> i'm sorry. i forgot about the gender perspective in my code
[01:02:30] <joannac> stefandxm: behave yourself please
[01:02:36] <node2js501> Thanks guys
[01:04:50] <stefandxm> joannac, hey. i said i'm sorry :)
[01:05:46] <node2js501> im using relational and non-relational data
[01:05:53] <node2js501> but want to use mongo
[01:06:18] <node2js501> some data is not strongly typed
[01:06:22] <node2js501> so it needs to be parsed
[02:15:56] <stefandxm> joannac, exactly what are the policies here?
[05:08:05] <stefandxm> joannac, kinda like you
[05:45:24] <spencercarnage> I have a question about the mongoose.model().find.
[05:46:18] <spencercarnage> It’s been a while since I’ve used mongoose, but I recall that when I did a find on a model, it returned all of the fields of the schema. Now, I’m only getting a ‘name’ field, along with _id and __v.
[05:47:14] <spencercarnage> The model was generated from a yeoman generator which created the model with only name on the schema. I updated the schema with additional fields but I can't get them to show up when I use find.
[06:00:46] <spencercarnage> nevermind. I’m dumb. new results have the new fields.
[06:00:51] <spencercarnage> duh.
[06:58:08] <BigOrangeSU> Hi all, wondering about getting some info about using a mongo shell version that is much older than the server version. What are the implications?
[07:02:43] <joannac> won't support some of the new shell helpers?
[07:03:33] <joannac> how old are we talking? a 2.2 shell might have trouble with inserting users into a 2.6 mongod
[08:29:41] <jasvir> hello all. I am looking for a demo application using mongodb and django. Does anyone have some recommendations?
[08:40:22] <nfroidure_> Does the $addToSet operator take more than one field? Something like $addToSet: ['$user', '$owner']?
[10:00:53] <rspijker> nfroidure_: do you want to add multiple things to a single field? Or do you want to add a single thing to multiple fields?
[10:01:09] <rspijker> because the addToSet in your example doesn’t make that much sense to me
[10:15:51] <nfroidure_> rspijker, i have two fields containing similar concepts, i'd like to select distinct rows depending on those 2 fields combined.
[10:17:03] <nfroidure_> putting them in a set is the first step in my current attempt to achieve this
[10:18:25] <nfroidure_> i wanted to group on id + put field1 and field2 in a set, unwind on that set and then group on the unwound value
[10:19:51] <rspijker> why don’t you just group on them directly?
[10:20:01] <rspijker> _id can be a document
[10:20:36] <rspijker> $group: { "_id": { f1: "$field1", f2: "$field2" } }
[10:20:58] <nfroidure_> if i group them directly, i'll potentially have the same value twice
[10:21:05] <nfroidure_> that's what i want to avoid
[10:22:21] <rspijker> I’m not really sure how… but ok.
[10:22:37] <rspijker> if you want to add multiple source fields in a single field, you can use the $each modifier
[10:22:44] <rspijker> it’s documented on the $addToSet operator
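
A sketch of the two things rspijker describes above, with hypothetical collection and field names. Grouping on a compound _id yields one result per distinct (field1, field2) pair; $each is the update-operator modifier for adding several values to one array field in a single $addToSet:

    // aggregation: one output document per distinct (field1, field2) combination
    db.coll.aggregate([
        { $group: { _id: { f1: "$field1", f2: "$field2" } } }
    ])

    // update: $addToSet with $each adds multiple values to a set-like array at once
    db.coll.update(
        { _id: 1 },   // hypothetical selector
        { $addToSet: { combined: { $each: ["valueOfUser", "valueOfOwner"] } } }
    )
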
[10:26:08] <yruss972> Can someone take a look at this output from mongostat: http://pastebin.com/S6hx7U4D
[10:26:58] <yruss972> Our servers are showing mongodb using massive amounts of swap but really, we have very small databases with relatively little activity :?
[10:46:11] <nfroidure_> rspijker, thanks for the tip
[10:56:12] <kali> yruss972: http://docs.mongodb.org/manual/faq/storage/
[10:58:58] <yruss972> kali: I think I understand that but on disk, our db directory is only 5.6GB
[10:59:29] <yruss972> pmap shows 20GB of mapped memory
[10:59:38] <yruss972> prstat shows 109GB
[11:01:32] <kali> yruss972: the oplog has to be mapped too, and the actual data is mapped twice
[11:01:49] <kali> but that's still under 20GB, to say nothing of 120GB
[11:02:11] <yruss972> restarting the process brings the memory usage down to 18-20GB but I'm not sure for how long :?
[11:02:33] <kali> 18GB does not sound unreasonable...
[11:02:47] <kali> and that's what mongostats says
[11:02:49] <yruss972> 14423 mongodb 110G 4913M sleep 59 0 89:23:01 0.9% mongod/112
[11:03:54] <yruss972> http://pastebin.com/a2HfSfWb
[11:04:06] <yruss972> new paste from a server that hasn't been restarted
[11:04:38] <kali> mmmm.... are you creating/deleting databases as a routine op ?
[11:05:02] <yruss972> no- not that I'm aware of
[11:05:17] <yruss972> I'm not actually writing the code
[11:05:33] <rspijker> are you sure your db dir is only 5.6GB? :s
[11:06:25] <yruss972> du -sh /var/mongodb/ -> 6.0G /var/mongodb/
[11:11:29] <yruss972> we are on v2.4.9 - is there some known memory issue to be aware of?
[11:11:39] <rspijker> the difference between mapped and vsize is kind of weird...
[11:11:48] <rspijker> do you have obscene amounts of connections?
[11:12:09] <yruss972> not at all- you can see in the mongostat output- 55 connections
[11:14:13] <kali> yruss972: this is a linux box, right ?
[11:14:21] <kali> yruss972: or something more exotic ?
[11:14:23] <yruss972> kali: SmartOS
[11:14:25] <rspijker> what does your pmap output look like?
[11:14:46] <kali> ok, i would consider this as exotic :)
[11:14:59] <yruss972> http://pastebin.com/yEtZg1XJ
[11:15:05] <yruss972> pmap output
[11:15:14] <rspijker> fairly exotic, yes ^^
[11:16:45] <rspijker> which amounts to like 20GB (as per the total line)
[11:17:25] <rspijker> so… how sure are you that the figure under vsize is actually accurate?
[11:17:33] <rspijker> and… is it a problem that it’s large?
[11:17:43] <rspijker> as in, are you seeing negative consequences?
[11:18:16] <yruss972> We've been having issues with other processes on the box
[11:18:28] <yruss972> our monitoring agents stop responding
[11:18:30] <yruss972> etc
[11:18:41] <rspijker> is it swapping?
[11:18:51] <yruss972> doesn't appear to be swapping
[11:19:14] <rspijker> then are you sure the large vsize is causing the issues?
[11:19:35] <rspijker> because vsize is not really limited, 120GB of virtual memory is nothing, in the grand scheme of things
[11:19:44] <r1pp3rj4ck> hey guys
[11:20:13] <yruss972> no- I'm not sure it's causing the problems, just it is the most unnatural thing I've found on the box
[11:20:20] <yruss972> we have really small dbs
[11:20:38] <yruss972> can't imagine a good reason to use so much memory
[11:20:39] <r1pp3rj4ck> i sent a mail to the users list https://groups.google.com/forum/#!topic/mongodb-user/r2ru3Mv6HJo
[11:20:52] <r1pp3rj4ck> and i figured i could do some benchmarking myself too
[11:20:54] <kali> yruss972: it's not memory, it's address space
[11:21:36] <yruss972> kali: but you agree that the numbers are weird?
[11:21:54] <kali> yruss972: but yeah, the figures seem atypically high. chances are SmartOS instrumentation of mmap is different from the more mainstream kernels
[11:22:23] <kali> yruss972: mmap implementation itself can differ for all i know
[11:22:23] <r1pp3rj4ck> this is what i have now for the bench https://gist.github.com/36c920b44d93603e0195
[11:22:50] <yruss972> In the time since the last paste- the process has reached 111G :/
[11:23:02] <r1pp3rj4ck> and it sometimes prints what it needs to print, but sometimes it throws this error: 2014-07-10T13:20:37.595+0200 error hasNext: false at src/mongo/shell/query.js:127
[11:23:15] <yruss972> 1G / 20minutes
[11:23:16] <r1pp3rj4ck> what am i missing, guys?
[11:24:34] <kali> r1pp3rj4ck: first, you need an index on "rand" alone for this query
[11:25:24] <r1pp3rj4ck> kali, right, thanks
[11:25:35] <r1pp3rj4ck> but i assume it's not what causes this error
[11:28:38] <kali> r1pp3rj4ck: i agree, but i don't see anything fishy except this: when you ask for a sort on a big chunk of data (iirc, 1000 documents) with no matching index, bad things happen
[11:28:49] <kali> r1pp3rj4ck: sorry i can't be more specific here
[11:28:58] <kali> r1pp3rj4ck: try it with the index
[11:28:59] <kali> :)
[11:29:22] <rspijker> the error looks like it’s due to you doing a next on a cursor that has no next
[11:29:27] <kali> {"rand":1} or { rand:1, val:1 } is what you need
[11:29:34] <rspijker> and that’s something that can happen, because your script is super weird
[11:29:56] <rspijker> you’re keeping a cursor and checking “if it doesn’t have anything new, do a new search”
[11:30:25] <kali> rspijker: yes, that's because in some cases, it may pick a random value higher than anything in the collection
[11:30:46] <kali> rspijker: that bit i understand
[11:30:55] <r1pp3rj4ck> yup, that's why it is there
[11:31:09] <rspijker> ah, I see
[11:31:32] <kali> r1pp3rj4ck: you're aware this random technique introduces a bias, btw?
[11:31:40] <r1pp3rj4ck> kali, i fixed the index, but it still behaves the same way :/
[11:31:44] <r1pp3rj4ck> what do you mean?
[11:32:28] <kali> r1pp3rj4ck: not all documents in the collection have the same probability of appearing
[11:32:42] <r1pp3rj4ck> yup, i know that
[11:32:44] <kali> ok
[11:32:54] <kali> r1pp3rj4ck: let me try the script here
[11:33:11] <r1pp3rj4ck> that's why i'm trying to find some good trade-off between performance and even distribution
[11:33:27] <r1pp3rj4ck> i described it in the email
[11:33:37] <rspijker> hasNext needs parentheses?
[11:33:48] <rspijker> you’re comparing a function to a boolean right now
[11:33:54] <r1pp3rj4ck> in this script i'm trying to find the "magic number" which works for me
[11:33:59] <kali> rspijker: indeed
[11:34:10] <r1pp3rj4ck> ahh right, thanks rspijker
[11:34:14] <r1pp3rj4ck> now it's all right :)
[11:34:17] <kali> isn't javascript a wonderful language ?
[11:34:25] <kali> rspijker: nice catch
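
The bug rspijker caught, in miniature (collection and field names echo the script under discussion but are hypothetical here): a function object is always truthy, so comparing it to a boolean never does what was intended.

    var cur = db.docs.find({ rand: { $gte: Math.random() } }).sort({ rand: 1 }).limit(1);
    if (cur.hasNext == false) { }   // never true: cur.hasNext is a function object
    if (!cur.hasNext()) { }         // correct: call the method, test the returned boolean
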
[11:34:29] <rspijker> also, doing boolean==false gives me sore eyes :P
[11:34:44] <rspijker> but I think there may be cases in javascript where it’s needed
[11:34:45] <r1pp3rj4ck> yeah well that was for debugging
[11:34:48] <rspijker> so I’ll let it slide
[11:34:59] <r1pp3rj4ck> i'm actually more of a java/php guy myself, don't really work with js
[11:35:05] <r1pp3rj4ck> but i know it's reeeaaaally weird sometimes
[11:35:08] <rspijker> iirc, js has some weird conventions converting stuff to booleans and return values
[11:35:15] <r1pp3rj4ck> so i tried some weird stuff :P
[11:35:35] <rspijker> but every time I see someone comparing something that should return a boolean explicitly to true or false, I die a little inside
[11:36:54] <r1pp3rj4ck> rspijker, i know it's too late to say this, but i actually never do stuff like that, this was a really rare exception :)
[11:37:27] <r1pp3rj4ck> my personal favorite is when someone codes if (something == true) { return true; } else { return false; }
[11:37:33] <r1pp3rj4ck> brb
[11:37:43] <rspijker> haha, talk about tell-tale signs :)
[11:44:00] <sweb> is it safe to use _id as a field for creation time?
[11:49:33] <kali> sweb: if you're not doing anything smart when generating them, if your servers are on time, and if you don't care about the one second granularity, it's safe
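
A quick mongo shell sketch of what kali means: an ObjectId's first four bytes are a creation timestamp with one-second resolution, so a default _id doubles as a creation time (the collection name below is hypothetical).

    var id = ObjectId();
    id.getTimestamp()   // ISODate when the ObjectId was generated, 1s granularity

    // range-scan by creation time via a constructed ObjectId
    var hexSecs = Math.floor(new Date("2014-07-01") / 1000).toString(16);
    db.coll.find({ _id: { $gt: ObjectId(hexSecs + "0000000000000000") } })
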
[12:02:49] <Industrial> Say I'm receiving a stream of millisecond accurate sensor data and I don't know upfront for each message which collection I put it in
[12:03:08] <Industrial> doing a db.collection(SOMEVAR).insert(data) A LOT of times per second
[12:03:10] <Industrial> is that okay?
[12:03:16] <Industrial> or will it create a lot of stress
[12:03:36] <Industrial> it might be possible to know all the collection names up front, so i could open a reference to each collection beforehand
[12:03:39] <Industrial> (node-mongodb-native)
[12:17:50] <rspijker> Industrial: it might be okay… It will probably be far more optimal to put a bit of buffering/aggregation in your app though
[12:18:11] <rspijker> so, aggregate them per collection and then do a bulk insert to each collection like once every 5 seconds or something
[12:19:13] <kali> Industrial: there is no benefit in caching db.collection(name)
[12:25:02] <Industrial> okay :)
[12:25:41] <Industrial> I'm pretty small scale now but eventually I want 200 sensors per device * 100-1000 devices
[12:25:52] <Industrial> but we'll cross that bridge when we get there
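
A minimal sketch of the buffering rspijker suggests, against the node-mongodb-native API of the era; `db` is assumed to be an open connection and all names are hypothetical.

    // buffer incoming readings per collection, flush as one bulk insert every 5s
    var buffers = {};   // collection name -> pending documents

    function record(collName, doc) {
        (buffers[collName] = buffers[collName] || []).push(doc);
    }

    setInterval(function () {
        Object.keys(buffers).forEach(function (name) {
            var docs = buffers[name];
            if (!docs.length) return;
            buffers[name] = [];
            db.collection(name).insert(docs, function (err) {
                if (err) console.error('bulk insert failed for ' + name, err);
            });
        });
    }, 5000);
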
[12:33:44] <sweb> kali: ty
[13:21:23] <squeeb> Hey guys, we just updated to 2.6.3 and now we're seeing this error on our dedicated arbiter:
[13:21:26] <squeeb> Error parsing INI config file: unknown option preallocDataFiles
[13:21:52] <squeeb> the 2.6.3 docs recommend using this option along with journal.enabled = false for arbiters
[13:21:58] <squeeb> yet neither of these options appears to exist
[13:22:04] <squeeb> can someone confirm if they've been deprecated?
[13:34:17] <rspijker> squeeb: the config file format has changed
[13:34:37] <rspijker> I’d advise you to convert your cfg files to the new format
[13:34:40] <rspijker> http://docs.mongodb.org/manual/reference/configuration-options/
[13:38:57] <squeeb> arrghagrhgahr yaml
[13:41:34] <czajkowski> Folks have you seen the Call for participation: main tracks and developer rooms at FOSDEM is now open https://lists.fosdem.org/pipermail/fosdem/2014-July/002010.html
[13:50:12] <squeeb> Thanks rspijker
[13:50:24] <squeeb> didn't realise there was a format change :)
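
For reference, a minimal sketch of the 2.6 YAML format for a dedicated arbiter; the path and set name are hypothetical, and the old flat INI keys map onto the new dotted/nested names.

    # mongod.conf (YAML, MongoDB 2.6+)
    storage:
      dbPath: /var/lib/mongodb      # hypothetical path
      preallocDataFiles: false      # arbiter holds no data
      journal:
        enabled: false
    replication:
      replSetName: rs0              # hypothetical replica set name
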
[14:35:08] <Industrial> How do I rename a database
[14:36:18] <rspijker> Industrial: https://jira.mongodb.org/browse/SERVER-701
[14:36:57] <Industrial> created 2010, unassigned, low priority :(
[14:53:31] <Industrial> so probably in 3 years :D?
[14:53:57] <kali> have you voted on it?
[14:54:39] <Industrial> considering the number of tickets, its priority, the time it was created, and its current status, i don't think a +1 will make a dent here
[14:54:42] <Industrial> but, yes.
[15:45:21] <dypsilon> Hi, what is the point of such detailed RBAC authorization when mongodb forces one user per connection and the connection overhead is pretty high (approx. 10MB per connection)? Do I understand the mongodb security model correctly?
[15:51:38] <BaNzounet> Hey there, if I have to do a "join" thing, I have to do it externally, right?
[15:52:20] <dypsilon> BaNzounet, yep
[15:52:26] <dypsilon> joins are in the application
[15:52:30] <dypsilon> *done
[15:53:34] <cheeser> though typically proper data modeling makes that need rare
[15:55:36] <BaNzounet> I need to do analytics on my data, so I need to join some of my things :) But yeah, usually I don't need it :)
[16:20:53] <jsjc> I am trying to do a find but want the same field to be both $gt and $lt. how can I query for the same id to be in between?
[16:21:21] <jsjc> I've been testing a bit but it seems I am too much of a newbie at this.
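
For the record, both operators can sit in one sub-document on the same field; a sketch with a hypothetical collection name and bounds:

    // documents whose _id lies strictly between the two bounds
    db.items.find({ _id: { $gt: 100, $lt: 200 } })
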
[16:22:12] <harrisii> well well well
[16:22:22] <harrisii> I'm not a bot
[16:22:30] <dypsilon> Hi, what is the point of such detailed RBAC authorization when mongodb forces one user per connection and the connection overhead is pretty high (approx. 10MB per connection)? Do I understand the mongodb security model correctly?
[16:33:20] <adamcom> where are you getting 10MB per connection?
[16:33:28] <adamcom> it's 1MB of stack
[16:35:16] <dypsilon> the information is from here https://blog.serverdensity.com/mongodb-connection-overhead/
[16:36:10] <dypsilon> oh wait
[16:36:23] <adamcom> that's from June 2011
[16:36:24] <adamcom> https://jira.mongodb.org/browse/SERVER-2707
[16:36:25] <dypsilon> there is a way to reduce that to 1MB in this same article
[16:36:33] <adamcom> already done in the code
[16:36:42] <dypsilon> ah nice
[16:36:45] <adamcom> as of July 2011 (version 1.8.3)
[16:36:51] <dypsilon> adamcom, thank you
[16:37:05] <adamcom> no worries :)
[16:37:22] <dypsilon> So is it sane to base the security of the application completely on the mongodb security and create one connection per user?
[16:37:43] <adamcom> weirdly enough, that's the second time I've answered that in a week - don't think it was mentioned for months before that
[16:38:25] <adamcom> dypsilon: I'd never delegate all security to a database, regardless of which one
[16:39:04] <dypsilon> well, not all security but the access control part
[16:39:36] <adamcom> enforce on both, sure, and mismatches then act as a kind of checksum, but I would not want to be waiting on a bug fix from the database to plug a hole in my app
[16:40:04] <adamcom> plus, if your needs diverge and you need greater granularity.....
[16:40:11] <dypsilon> that is good advice, indeed
[16:40:23] <dypsilon> adamcom, thanks again
[16:42:33] <adamcom> and, you have the headache of what happens with connection pooling, connection re-use - if you have to make it such that you tear down all connections…….well let's just say I have seen that (because of a bug with read preferences) and it ain't pretty - I've seen mongod go crazy because Linux was creating and destroying thousands of connections a second, ran out of ephemeral ports and all sorts of other ugliness. Pooling and re-use are needed for really
[16:43:07] <adamcom> not to mention the overhead - even at 1MB per connection, they still add up quickly (20GB of mem for 20,000 connections is still a lot for most people)
[16:43:44] <adamcom> and, you're welcome again :)
[17:31:57] <Nikola_> Unable to stop my balancer. Any advice?
[17:38:37] <Nikola_> Actually I managed to stop the balancer
[17:38:55] <Nikola_> But when i try to manually move a chunk I get error message "moveChunk failed to engage TO-shard in the data transfer: still waiting for a previous migrates data to get cleaned, can't accept new chunks"
[17:41:21] <adamcom> that's exactly what it sounds like - when a chunk is moved off a shard a background delete thread is spawned to clean up the chunk that was moved off
[17:41:39] <adamcom> if too many are active, it won't accept new chunks until the deletes finish
[17:42:03] <adamcom> the deletes should be pretty quick - after all, it's a delete on a chunk that was just read into memory recently in order to be migrated
[17:42:26] <adamcom> but if the shard is struggling, then they will take a while, and so it tells other migrations to back off until they finish
[17:42:52] <adamcom> you can stop them, by stepping down the primary, but you will then have orphaned docs
[17:51:48] <Nikola_> hm. Not sure. There should be no load on the cluster now. I used to have 3 replica sets and I added another 6
[17:52:01] <Nikola_> created a new collection on rs7
[17:53:00] <Nikola_> balancer was not distributing this collection so i tried a manual move of a chunk from rs7 to a new shard rs4 and it worked. But i get the error when i try to move chunks to rs0,1,2
[17:53:12] <Nikola_> aka the original shards in the cluster
[17:54:04] <Nikola_> It has been in this state for days now, so I doubt it is just taking this long to delete the data
[18:07:24] <Nikola_> How can i see which chunks have not been cleaned up yet?
[18:07:41] <adamcom> if you are on 2.6, there is cleanupOrphaned
[18:07:57] <adamcom> before that, it needs scripts
[18:09:48] <adamcom> could be stuck, or the original shards could still be doing deletes - they would have been the sources of all migrations initially since the others were empty, so a lot of migrations from them
[18:10:53] <adamcom> as mentioned, you can step down the primary, clean up the orphans later with that command on 2.6
[18:11:00] <adamcom> for 2.4, there is a JS version: https://github.com/mongodb/support-tools/tree/master/orphanage
[18:11:06] <adamcom> (use at your own risk)
[18:27:24] <user123321> Is it possible to use 2 or more identical mongodb servers to connect to the same data storage available in a separate server?
[18:33:11] <cheeser> you want multiple mongod processes to write to the same data files?
[18:35:14] <user123321> cheeser, Yes.
[18:36:16] <user123321> cheeser, I'd have minimum 2 identical Apache servers, load balanced, and I thought of using one common storage for all of the Apache servers.
[18:36:18] <cheeser> no, that'd be bad.
[18:36:29] <user123321> cheeser, any advice on my scenario?
[18:37:32] <user123321> common storage would contain mongo db database
[18:39:34] <cheeser> why would the apache servers matter?
[19:08:55] <user123321> cheeser, sorry? hmm, I'd like to make my Apache servers connect to a one data store, is this ok?
[19:09:53] <cheeser> well, your client code would just talk to wherever mongod is running.
[19:12:04] <user123321> cheeser, Solution 1: Installing 2 mongo DB servers in both servers, pointing to the remote storage. Solution 2: Install Mongo DB server in the common storage, and let the clients connect.
[19:12:10] <user123321> cheeser, Am I right or..
[19:13:21] <cheeser> option 2
[19:13:33] <user123321> bingo
[19:14:07] <user123321> cheeser, Is the configuration difficult for the option 2?
[19:14:32] <user123321> cheeser, Oh I mean, what would happen if both servers try to read/write data?
[19:14:51] <user123321> both clients*
[19:15:01] <cheeser> it's just a uri your client code passes to the driver either way
[19:15:22] <cheeser> the clients talk to the server and the server handles each request separately
[19:16:00] <user123321> cheeser, I see. So I don't need to do special configurations for simultaneous access?
[19:16:06] <cheeser> nope
[19:16:09] <user123321> cool
[19:16:33] <user123321> cheeser, how can I address the failure of mongo DBMS?
[19:16:38] <user123321> if it ever happen.
[19:17:03] <user123321> fail over dbms?
[19:17:57] <cheeser> you'd use replica sets
[19:19:12] <user123321> cheeser, I see, is my option 1 bad in this context?
[19:20:17] <cheeser> it's impossible
[19:20:26] <cheeser> two processes can not write to the same files.
[19:21:08] <user123321> cheeser, even with any type of configurations? ah ok.
[19:21:45] <user123321> I mean, could one server wait till the other one finishes?
[19:22:20] <cheeser> nope
[19:22:26] <user123321> Aha
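
A minimal sketch of the replica-set alternative cheeser points to: each member runs its own mongod (started with --replSet) over its own data files; hostnames and set name are hypothetical.

    // mongo shell, on the member intended to become primary
    rs.initiate({
        _id: "rs0",
        members: [
            { _id: 0, host: "db1.example.com:27017" },
            { _id: 1, host: "db2.example.com:27017" },
            { _id: 2, host: "db3.example.com:27017" }
        ]
    })
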
[19:24:07] <user123321> cheeser, one of my friends has hosted mongo DB on a remote host, I can access it with Robomongo. question is, could I get a copy of that db and host it on mine?
[19:24:51] <cheeser> sure
[19:25:23] <user123321> cool, thanks.
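
One common way to take that copy is mongodump/mongorestore; a sketch with hypothetical host and database names.

    # dump the remote database, then restore it into a local mongod
    mongodump --host remote.example.com --port 27017 --db mydb --out ./dump
    mongorestore --host localhost --port 27017 ./dump
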
[19:55:58] <nylon> lo
[19:56:02] <nylon> quick question
[19:57:03] <nylon> if using text search, how does one limit results returned when searching decimal numbers?
[19:57:41] <nylon> example: 123.45 should only return exact matches not 123.78
[19:57:51] <nylon> why is this happening?
[19:57:54] <nylon> please advise
[19:59:46] <cheeser> hard to say without seeing your query
[20:02:22] <nylon> query: { "text" : "ItemData", "search" : "123.45", "limit" : 20000 }
[20:02:39] <nylon> index is wildcard, searching all fields
[20:03:38] <saml> nylon, limit on cursor
[20:03:57] <saml> db.docs.find({ "text" : "ItemData", "search" : "123.45"}).limit(20000) ?
[20:05:05] <nylon> saml: how will that refine the results to only include 123.45 and not 123.56 ?
[20:05:22] <saml> nylon, give me few example docs
[20:05:41] <saml> and give me which of the docs you want to query
[20:05:54] <saml> in paste bin
[20:08:13] <cheeser> why would you have numerical data stored as text and then text search on it?
[20:09:18] <nylon> cheeser: because they are actually product codes, which could contain alpha and numeric?
[20:10:09] <nylon> eg. 123.45 is a valid product code, as well as 123.6X
[20:11:04] <nylon> all of which can appear in product description, product number, product notes, etc
[20:11:54] <nylon> saml: hmm, what would be the best way to give you these examples?
[20:11:59] <cheeser> nylon: ahhhh
[20:12:46] <saml> nylon, gist.github.com ?
[20:13:23] <nylon> saml: k... please give me a moment
[20:52:56] <nylon> saml: https://gist.github.com/anonymous/c225b8ff0193e5a3b8ad
[20:53:13] <saml> what do you want the query result to be, nylon?
[20:53:45] <nylon> only matches 232.34
[20:54:01] <saml> db.docs.find({ModelNumber: '232.34'})
[20:54:20] <saml> db.docs.find({ModelNumber: '232.34'}).limit(3) only first 3
[20:58:23] <nylon> it's a text search so i can't limit it to just one field, because the search text may be contained in the itemName field or other fields (which i've had to omit due to the data being sensitive)
[21:01:08] <saml> oh i don't know then.. i use solr for full text search. haven't used full text search in mongo
[21:01:36] <saml> you might have to combine all fields
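
For what it's worth, MongoDB text search treats a double-quoted string as a phrase, which should keep 123.78 out of the results for 123.45. A sketch against the ItemData collection (2.6 $text syntax first; the 2.4-era text command that nylon's query resembles takes the same search string):

    // 2.6+: phrase search -- only documents containing the exact phrase "123.45"
    db.ItemData.find({ $text: { $search: "\"123.45\"" } })

    // 2.4: the same search through the text command
    db.ItemData.runCommand("text", { search: "\"123.45\"", limit: 20000 })
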
[21:03:11] <nylon> anybody here a text search expert?
[21:26:29] <staykov> if im using $elemMatch is there a way to also get fields from the parent object?
[21:26:55] <staykov> following: http://docs.mongodb.org/manual/reference/operator/query/elemMatch/ i mean can i also get a field from grades?
[21:27:17] <staykov> i am trying it by putting the field in the selector but it's not working, just checking if it's possible
[21:31:52] <staykov> never mind, i wasn't using my lib properly
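
For the record, a projection can mix $elemMatch with ordinary field inclusions, which is one way to get parent fields back; a sketch reusing the schools example from the docs page staykov linked.

    // returns each school's name plus only the matching element of students
    db.schools.find(
        { zipcode: "63109" },
        { name: 1, students: { $elemMatch: { school: 102 } } }
    )
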