PMXBOT Log file Viewer

#mongodb logs for Thursday the 9th of August, 2012

[01:37:25] <[Outcast]> I have just started working with custom functions. I am fine getting things working on the CLI. I have two questions while working with pymongo and python:
[01:38:36] <[Outcast]> 1. is there a way to get pymongo to execute the db.loadServerScripts() function?
[01:39:19] <[Outcast]> 2. How do you get pymongo to use the functions that were loaded?
[01:59:20] <[Outcast]> looks like I am going to have to use the eval function, which can make things insecure.
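A minimal sketch, in the mongo shell rather than pymongo, of the mechanism [Outcast] is asking about; the stored function name "myAdd" and its body are made up for illustration. From a driver, the usual route at this point is server-side eval, which is the insecure-feeling option mentioned above.

```javascript
// Store a server-side function (hypothetical name/body), then load and call it.
db.system.js.save({ _id: "myAdd", value: function (a, b) { return a + b; } });

db.loadServerScripts();   // makes myAdd() callable in this shell session
myAdd(2, 3);              // 5

// From a driver, the equivalent is to run code server-side via eval,
// which is what the message above refers to:
db.eval("return myAdd(2, 3)");
```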
[07:56:34] <[AD]Turbo> hello
[08:04:54] <Xedecimal> I don't think I fully understand this: I have a unique key set in my ensureIndex, then I run save(vals, {safe: 1}), as that's a shortcut to do an upsert (update or create new), and I get an error saying "duplicate key"... Isn't the entire point of this to update if your key overlaps?
[08:05:31] <Xedecimal> Something tells me it's specifically because one of those 3 keys that was found as a duplicate is 'null'
[08:08:17] <Xedecimal> I've read about null values and using 'sparse' in my index generation... Yet I don't think I actually need that because the two entries that have 'null' really are the same thing and should be updated ?
[08:09:03] <NodeX> is it a sparse index?
[08:09:23] <Xedecimal> not currently, I'm doing some tests after switching it over to sparse to see if the behavior improves
[08:18:50] <Xedecimal> and finally, got the same error with sparse on
[08:21:19] <Xedecimal> what if it's not null? If I completely omit it will we be ok ?
[08:24:27] <NodeX> if you already have one with that uniqueness then it will fail
[08:25:40] <oskie> hello, is it correct that normal users can create databases?
[08:26:15] <oskie> it seems so to me
[08:26:17] <NodeX> yup
[08:26:24] <NodeX> anyone can do anything
[08:26:35] <oskie> but with --auth enabled?
[08:26:43] <Xedecimal> NodeX: It should fail to insert, but what about save()? This is supposed to naturally upsert right?
[08:27:00] <NodeX> auth only takes care of read/write iirc
[08:27:49] <NodeX> well if it already exists then it will update
[08:28:16] <Xedecimal> that's my problem, I'm using save() and it's telling me duplicate key, this is through php too by the way in case that has any relevance
[08:28:20] <oskie> there is something wrong here... I will make a script
[08:29:52] <NodeX> duplicate key on what?
[08:30:47] <Xedecimal> I have 3 keys: path, parent and index; the two entries have the same index and parent, and null for path... I'm starting to think there are other indexes that didn't get removed from the past
[08:31:21] <NodeX> look for those keys in a find and see what they match
[08:33:56] <Xedecimal> if I can get it to do it again, very hard to reproduce
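A minimal sketch of the behaviour Xedecimal is hitting, assuming a collection called "nodes" and the fields mentioned above: save() only upserts on _id, so a document without an _id is a plain insert and can still violate a secondary unique index.

```javascript
// Unique compound index on the fields from the conversation above.
db.nodes.ensureIndex({ path: 1, parent: 1, index: 1 }, { unique: true });

// save() without an _id is just an insert, so a second document that collides
// on the unique (path, parent, index) triplet fails with E11000 duplicate key,
// even when path is null.
db.nodes.save({ path: null, parent: 1, index: 0 });
db.nodes.save({ path: null, parent: 1, index: 0 });   // duplicate key error

// To "update if the key overlaps", target the unique fields explicitly:
db.nodes.update(
    { path: null, parent: 1, index: 0 },   // match on the unique triplet
    { $set: { title: "updated" } },        // hypothetical payload
    true                                   // upsert
);
```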
[09:18:17] <Init--WithStyle-> I want to bring a binary file into my mongoDB and sort it into a different structure
[09:18:22] <Init--WithStyle-> Any idea how I might begin?
[09:18:44] <NodeX> Gridfs?
[09:19:32] <Init--WithStyle-> what is Gridfs?
[09:19:40] <mids> what do you want to do with it?
[09:20:17] <Init--WithStyle-> I just want to convert it from a big flat binary file into a 2d array of data entries with multiple pieces of data on them
[09:20:32] <Init--WithStyle-> eg. right now: 0x23, 0x44, 0x57 <-- the binary file
[09:20:59] <Init--WithStyle-> Instead: tile 1,4 elevation = 0x23, tile 1,5 elevation = 0x44
[09:21:03] <Init--WithStyle-> if that makes any sense?
[09:21:19] <mids> and what is the role of MongoDB in that?
[09:22:17] <Init--WithStyle-> it holds the data..
[09:23:20] <Init--WithStyle-> *blink*
[09:23:53] <mids> so the converted format, will you query that 2d array somehow?
[09:24:24] <mids> or just stick it in mongodb and then consume it in one piece
[09:24:32] <Init--WithStyle-> I need to query it
[09:25:07] <mids> how?
[09:26:28] <Init--WithStyle-> The array is basically 2d cartesian points
[09:26:39] <Init--WithStyle-> each point has data attached to it... elevation, stickyness, humidity, etc..
[09:26:53] <Init--WithStyle-> I will need to query x,y elevation for example
[09:29:00] <mids> so maybe create a document per tile; with a geo index on it
[09:29:38] <mids> then you could query on all tiles above a certain elevation near point (20,32)
[09:30:05] <mids> check out http://www.mongodb.org/display/DOCS/Geospatial+Indexing/
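A quick sketch of what mids suggests (collection and field names assumed): one document per tile, a 2d geospatial index on its coordinates, and a query combining $near with an elevation filter.

```javascript
// One document per tile, indexed for geospatial queries.
db.tiles.ensureIndex({ loc: "2d" });

db.tiles.insert({ loc: [1, 4], elevation: 0x23, humidity: 12 });
db.tiles.insert({ loc: [1, 5], elevation: 0x44, humidity: 17 });

// "all tiles above a certain elevation near point (20,32)"
db.tiles.find({
    loc: { $near: [20, 32] },
    elevation: { $gt: 0x30 }
}).limit(50);
```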
[09:30:17] <cmex> hi all
[09:30:51] <cmex> what is the best free management tool for mongodb (I don't mean the console)?
[09:31:22] <mids> tried https://mms.10gen.com/ ?
[09:31:35] <cmex> nope just tried mongovue
[09:32:01] <Init--WithStyle-> a document per tile... sounds good
[09:32:14] <Init--WithStyle-> mids: my problem is how to convert this binary file
[09:32:20] <Init--WithStyle-> how to extract the elevation data and drop it into my document
[09:32:41] <cmex> and another question
[09:33:07] <mids> Init--WithStyle-: that is not really something mongodb can help you with though. you'll need to implement some conversion code before inserting the data into mongo
[09:33:23] <mids> Init--WithStyle-: what language will you implement this in?
[09:34:00] <Init--WithStyle-> javascript.. but I only need the data from this binary file converted once every few months
[09:34:00] <cmex> can we put 2.2.0 into production? has someone tried it yet?
[09:34:59] <mids> cmex: "Development Release (Unstable)"
[09:35:24] <mids> cmex: I wouldn't put that into production yet
[09:36:06] <mids> Init--WithStyle-: javascript as in node js?
[09:36:16] <Init--WithStyle-> yes mids
[09:36:32] <mids> Init--WithStyle-: http://stackoverflow.com/questions/5784621/how-to-read-binary-files-byte-by-byte-in-node-js
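A sketch of what such a one-off conversion script could look like in Node.js, following the link above; the file name, grid width and database/collection names are all assumptions.

```javascript
// One-off conversion: read the flat binary file, treat each byte as an
// elevation value, and write one document per tile with the native driver.
var fs = require('fs');
var MongoClient = require('mongodb').MongoClient;

var WIDTH = 256;                          // assumed tiles per row
var buf = fs.readFileSync('terrain.bin'); // the "big flat binary file"

MongoClient.connect('mongodb://localhost:27017/gis', function (err, db) {
    if (err) throw err;
    var docs = [];
    for (var i = 0; i < buf.length; i++) {
        docs.push({
            loc: [i % WIDTH, Math.floor(i / WIDTH)],  // tile x, y
            elevation: buf[i]                         // one byte per tile here
        });
    }
    db.collection('tiles').insert(docs, { safe: true }, function (err) {
        if (err) throw err;
        db.close();
    });
});
```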
[09:36:37] <jhsto> I'm using nodejs native driver for mongodb and I would like to search for data stored in my database under 'url' identifier, which has to be then compared with a string... Any help on how to retrieve the value only, so that it can be compared?
[09:37:24] <Init--WithStyle-> mids would I need to create a utility on my node.js server that somehow takes the binary file as an upload then puts it into my mongoDB in the structure I want?
[09:37:38] <jhsto> If I try to print out the document, I get all this information that I don't know anything about.
[09:37:44] <mids> Init--WithStyle-: yeah, that would be my suggestion
[09:37:57] <Init--WithStyle-> Seems like a bit of a waste since I will only be doing this every half year or so..
[09:38:54] <mids> Init--WithStyle-: what is a waste?
[09:39:08] <Init--WithStyle-> coding a small utility
[09:39:38] <wereHamster> Init--WithStyle-: uhm.. so what is your suggestion? Hack the mongodb source to understand your custom binary format?
[09:39:42] <mids> hey, it is your problem :P
[09:39:51] <mids> if you don't need the data, you can wait 6 months and reconsider it
[09:39:58] <Init--WithStyle-> lol
[09:40:03] <mids> or pay someone to do it for you
[09:42:08] <mids> jhsto: on the 'find' function you can specify which fields you are interested in:
[09:42:12] <mids> https://github.com/mongodb/node-mongodb-native/blob/master/Readme.md#find
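A sketch of the 'fields' option mids points at, with the collection name and query value assumed from jhsto's description; it returns only the field you need, so there is nothing extra to strip out afterwards.

```javascript
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/test', function (err, db) {
    if (err) throw err;
    db.collection('pages').find(
        { url: 'http://example.com/foo' },    // query on the url directly
        { fields: { answers: 1, _id: 0 } }    // only return the field you need
    ).toArray(function (err, docs) {
        if (err) throw err;
        if (docs.length === 0) {
            // no document for this url yet: create a new one instead
        } else {
            console.log(docs[0].answers);     // e.g. 5
        }
        db.close();
    });
});
```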
[09:42:40] <jhsto> ugh, im going to need a lot of coffee for this one
[09:43:03] <jhsto> oh
[09:43:07] <jhsto> thanks mids
[09:43:16] <jhsto> i did not notice the fields option
[09:43:17] <jhsto> thanks
[09:43:57] <mids> but.. do you retrieve the value and then compare it in nodejs?
[09:45:06] <jhsto> thats what im trying to do
[09:45:09] <mids> why not do the query on 'url' via mongodb?
[09:45:47] <jhsto> I dont know what you are talking about, I actually started working with mongodb this week
[09:47:18] <mids> okay, no worries
[09:47:22] <jhsto> The script is supposed to first try to find a url value, and if found, check another value of the same document. Otherwise, the script will continue to create a new id
[09:47:36] <jhsto> I mean the same url value.
[09:53:08] <jhsto> alright mids - I got it working with fields option. Huge thank you for this.
[09:53:33] <mids> cool
[10:08:32] <jhsto> mids, it's now responding with [ { answers: 5 } ] - however, I can't get it parsed so that I would only have the answers value, in this case 5?
[10:09:53] <mids> can you pastebin your code?
[10:12:20] <wereHamster> jhsto: x = [ { answers: 5 } ]; five = x[0].answers
[10:13:38] <jhsto> wereHamster, it gives me undefined
[10:13:51] <jhsto> it seems to not be valid json, so I'll just parse it the old way
[10:14:36] <wereHamster> jhsto: run this in the mongos shell: x = [ { answers: 5 } ]; print(x[0].answers)
[10:14:40] <wereHamster> does it print '5' ?
[10:14:50] <jhsto> it prints undefined
[10:15:00] <jhsto> wait
[10:16:21] <jhsto> the fact that I'm on node and the db is in the cloud...
[10:16:27] <jhsto> ill just do the parse
[10:40:37] <wereHamster> node -e 'x = [ { answers: 5 } ]; console.log(x[0].answers)'
[10:40:41] <wereHamster> this also prints '5'.
[10:40:53] <wereHamster> if it doesn't, then your system is seriously broken.
[10:51:43] <cmex> does someone use the C# driver here?
[10:52:21] <riot> ohai everyone. I'm quite new to mongodb and i'm aiming to use it as storage for map data (openstreetmap) to render via mapnik. All written in python. Oh, i have a nice MongoDB coffee mug =) Anyone playing around with GIS? Maps? Mapnik?
[10:53:18] <riot> oh, and i'm totally interested (as in *NEEDS*) in an armhf port. Is this already possible to build?
[10:53:57] <riot> i saw some patches and the ticket having some higher priority.. but took only a short (promising) glance
[11:25:27] <SisterArrow> Hiya
[11:26:38] <SisterArrow> I'm trying to figure out how mongo stores stuff on disk. I have lots and lots of documents (~100 000 000) with an average size of 4kb each. Each of these documents has a "product_hash". Every day I insert a new document which may or may not have a previous document with the same product_hash.
[11:26:56] <SisterArrow> I query the database all the time for product_hash. It may return 1 document or 1000.
[11:27:02] <SisterArrow> Im trying to tune the read ahead.
[11:27:12] <SisterArrow> I have an index on product_hash
[11:27:33] <SisterArrow> So say I have 300 documents for product_hash:blargh.
[11:27:54] <SisterArrow> Will mongo store these 300 documents sequentially on disk since I have an index for product_hash?
[11:28:00] <SisterArrow> Or will it be spread across randomly?
[12:59:48] <nebojsa_kamber> Hello everyone
[13:00:14] <NodeX> Hello nebojsa_kamber : welcome to MongoDB, we are your pleasure
[13:00:59] <nebojsa_kamber> I'm having trouble installing the Mongo driver for PHP, does someone have the time to help me?
[13:01:11] <Derick> nebojsa_kamber: state the problem, and we perhaps can
[13:02:52] <nebojsa_kamber> I tried to install the PHP driver through PECL, as stated in many tutorials, but whenever PHP tries to connect it fails with the following error: You have too many open sockets (7035) to fit in the FD_SETSIZE (1024). The extension can't work around that.
[13:03:18] <nebojsa_kamber> I managed to dig out that it has something to do with Apache
[13:03:37] <Derick> does apache really have that many open sockets?
[13:04:33] <NodeX> file descriptors :/
[13:04:40] <Derick> yes
[13:04:41] <nebojsa_kamber> How can I find that out?
[13:04:43] <NodeX> netstat -pan | grep apache | wc -l
[13:04:50] <NodeX> netstat -pan | grep httpd | wc -l
[13:04:58] <NodeX> (can't remember what apache's process name is)
[13:05:04] <Derick> could be either
[13:05:43] <nebojsa_kamber> just a sec, I'll check
[13:06:34] <Derick> NodeX: actually, it's both FDs and sockets
[13:06:48] <NodeX> you can either set your file descriptors higher or fix the apache leak
[13:07:04] <Derick> no
[13:07:11] <nebojsa_kamber> Nope, seems to have only 9 :/
[13:07:14] <Derick> you can't increase FDSET size without recompiling libc
[13:07:30] <nebojsa_kamber> I was hoping to avoid recompiling..
[13:07:56] <NodeX> lsof | wc -l
[13:08:01] <NodeX> see how many is there
[13:08:09] <Derick> nebojsa_kamber: you really don't want to recompile libc anyway
[13:08:18] <Derick> An fd_set is a fixed size buffer. Executing FD_CLR() or FD_SET() with
[13:08:18] <Derick> a value of fd that is negative or is equal to or larger than FD_SETSIZE
[13:08:19] <Derick> will result in undefined behavior.
[13:09:45] <Derick> it's on the todo list to change our use of select to poll, which doesn't have this issue
[13:11:10] <nebojsa_kamber> lsof | wc -l doesn't seem to print out anything..
[13:11:59] <NodeX> you are running linux right?
[13:13:27] <nebojsa_kamber> yes, Apache is on the Ubuntu machine
[13:14:19] <nebojsa_kamber> I'm sorry, it does print out "484"
[13:15:34] <algernon> Derick: err, you can. It's just fugly to do that.
[13:15:36] <nebojsa_kamber> Derick: Does that mean it's not fixable?
[13:15:46] <Derick> algernon: not portable
[13:16:03] <Derick> nebojsa_kamber: it is, but it's not the first thing on the list
[13:16:05] <algernon> Derick: that's true. :)
[13:16:23] <Derick> algernon: only with a hack, and I'd rather fix it properly than hacking it
[13:17:04] <algernon> well, on BSD, it's not even hackish, as far as I remember. But yes, a proper fix is a thousand times better, but that also takes considerably more time.
[13:17:31] <Derick> maybe not
[13:17:34] <Derick> poll isn't that tricky
[13:17:35] <nebojsa_kamber> I understand. Is there a workaround to get it to work? When I installed the PHP driver on my local Fedora box from a RPM, it worked like a charm.. was hoping Ubuntu would be just as easy..
[13:20:15] <Derick> it should be if you don't have so many open files/sockets
[13:21:53] <nebojsa_kamber> Shouldn't raising the open_files with ulimit -n help?
[13:22:22] <Derick> no
[13:22:24] <Derick> it won't
[13:23:23] <Derick> we added this check because of a bug fix: https://jira.mongodb.org/browse/PHP-391
[13:25:05] <nebojsa_kamber> It's stated as fixed in 1.2.11 ?
[13:25:20] <Derick> 1.2.11 just introduced the warning to prevent the segfault
[13:25:53] <nebojsa_kamber> Oh, I understand..
[13:32:41] <nebojsa_kamber> Is there a way to check if the FD_SETSIZE is actually 1024? Because our sysadmin strongly believes the limit has been raised to 65,000
[13:33:04] <Derick> you can't check FD_SETSIZE like that
[13:33:18] <Derick> it's not linked to the filedescriptor limit
[13:34:26] <nebojsa_kamber> Too bad.. I was hoping that our company could give MongoDB a try..
[13:34:57] <Derick> mongodb still works ;-)
[13:35:09] <Derick> but we're aware of this, and it will get changed
[13:35:18] <Derick> bjori: are you copying this?
[13:37:22] <bjori> no
[13:37:34] <Derick> scroll up then :-)
[13:39:31] <nebojsa_kamber> Well, we're mainly PHP devs, so it'd be hard getting my colleagues to use a driver other than PHP.. I guess I'll write JS or something..
[13:39:42] <Derick> hehe, I understand
[13:39:54] <Derick> it's often not really been a problem as having that many open FDs is really odd
[13:40:54] <bjori> 14:03:37 < nebojsa_kamber> Nope, seems to have only 9 :/
[13:41:10] <Derick> I doubt that
[13:41:13] <bjori> that doesn't sound right :)
[13:41:21] <Derick> apache starts with log files open, so it should be more already
[13:41:40] <bjori> nebojsa_kamber: is this on your localhost?
[13:43:28] <nebojsa_kamber> Now it's even lower, 2-3. Yes, this is our local+staging server, used only in our VPN
[13:44:05] <Derick> it can't be 2
[13:44:06] <Derick> ever
[13:44:18] <Derick> as there is always stdin, stdout and stderr open
[13:44:23] <Derick> how are you checking this again?
[13:45:06] <nebojsa_kamber> I asked the sysadmin to run as root "netstat -pan | grep apache | wc -l"
[13:46:58] <Derick> that's just ip connections
[13:47:00] <Derick> not fds
[13:47:30] <bjori> lsof | wc -l
[13:47:49] <Derick> don't forget to grep for "httpd" or "apache"
[13:48:01] <Derick> otherwise you get all fds of all processes :-)
[13:48:07] <Derick> and the limit is per-process
[13:49:52] <nebojsa_kamber> 78671
[13:50:34] <Derick> did you split it out per apache process?
[13:50:50] <nebojsa_kamber> Yes, we grep'ed for "apache"
[13:51:31] <nebojsa_kamber> Wait, what do you mean by "split it out per apache process"? :)
[13:51:41] <bjori> or i in `pidof apache2`; do lsof -p $i | wc -l; done
[13:51:51] <bjori> for i in `pidof apache2`; do lsof -p $i | wc -l; done
[13:53:43] <Derick> echo the pid too though
[13:54:30] <nebojsa_kamber> it prints out several lines of numbers, all around 7153
[13:54:57] <Derick> for i in `pidof apache2`; do echo -n "$i "; lsof -p $i | wc -l; done
[13:55:06] <Derick> right, then for one of the pids, dump the list somewhere online:
[13:55:18] <Derick> lsof -p <thepid> > /tmp/pids.txt
[13:55:33] <Derick> (as root)
[13:58:00] <bjori> for i in `pidof apache2`; do if [ `lsof -p $i | wc -l` -gt 7000 ]; then lsof -p $i; break; fi; done
[13:58:24] <bjori> well, pipe it to file, and put it on pastebin somewhere :)
[13:59:35] <nebojsa_kamber> Ok, will do, just a sec
[14:07:09] <nebojsa_kamber> for just one process, the dump is over 1MB :D
[14:07:36] <nebojsa_kamber> It seems way too big for some text pasting providers
[14:10:37] <nebojsa_kamber> Will this do? https://dl.dropbox.com/u/60140040/p1.txt
[14:11:28] <algernon> that's a lot of log files.
[14:11:56] <bjori> nebojsa_kamber: whaaaat.. you are running all that on one server?
[14:12:03] <algernon> they'll quickly use up the <1024 fds
[14:14:08] <nebojsa_kamber> Our sysadmins tried to be frugal
[14:14:25] <nebojsa_kamber> :)
[14:15:19] <nebojsa_kamber> we're running around 8 production sites, and have 11 PHP devs, and each one has at least 4 dev domains..
[14:15:36] <nebojsa_kamber> 8*11*4*3..
[14:16:22] <nebojsa_kamber> Is there a better way to do this, so I can suggest them?
[14:17:21] <bjori> this isn't exactly the most optimal setup..
[14:18:11] <nebojsa_kamber> Is that an understatement? :)
[14:21:28] <nebojsa_kamber> Ok, thank you for all your help, I'll try to convince my sysadmins to rationally structure the logs
[14:21:39] <nebojsa_kamber> ..so we don't hit the FD limit
[14:22:49] <bjori> nebojsa_kamber: so.. yes. like Derick mentioned we will be fixing this "very very soon"
[14:23:08] <bjori> nebojsa_kamber: but your current setup is very evil and not something you would want to use :)
[14:23:35] <bjori> I understand it isn't exactly trivial to fix it either though :]
[14:23:54] <bjori> nebojsa_kamber: the only thing I can recommend is to setup a different vm
[14:25:28] <nebojsa_kamber> That's what my sysadmin recommended
[14:27:12] <nebojsa_kamber> Will make an additional VM. Thank you all, again, for your patience. Have to go, I'll miss the MongoDB Webinar :)
[14:27:15] <nebojsa_kamber> http://www.10gen.com/events/webinar/intro-to-schema-design
[15:38:48] <souza> Hello guys, I have to iterate over an array in C, but I've no idea how to do this. Does someone know how I can achieve this, or can you link me to some site? =)
[15:39:48] <ron> souza: ##c
[15:40:17] <ron> not sure why you're asking here at all.
[15:41:40] <souza> ron: thanks i'll look at C Channel
[15:43:57] <Bartzy> Hi
[15:45:38] <Bartzy> If I have an ObjectId as _id in a comments collection for example, where I need the date and time of each comment - do I need a datetime field too ?
[15:45:45] <Bartzy> or is the timestamp in the _id sufficient?
[15:51:48] <algernon> depends on how much resolution you want for the timestamp, and whether you trust your oids
[16:00:07] <bjori> Bartzy: you would probably want an additional datetime field too, say if you would like to get all comments from one day.. you can't really do such queries against the id :)
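A quick shell illustration of the trade-off bjori describes, with an assumed "comments" collection: the _id's embedded timestamp is fine for ordering, but date-range queries are far more natural against an explicit field.

```javascript
// Ordering by creation time works off the ObjectId alone:
db.comments.find().sort({ _id: -1 });        // newest first
db.comments.findOne()._id.getTimestamp();    // creation time of one document

// ...but "all comments from one day" is much easier with a real date field:
db.comments.insert({ text: "hi", created_at: new Date() });
db.comments.find({
    created_at: { $gte: ISODate("2012-08-09"), $lt: ISODate("2012-08-10") }
});
```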
[16:03:35] <joeljohnson> hey guys, I have a ~100MB json file that I want to import. It looks like this: http://pastie.org/4434504
[16:04:05] <joeljohnson> and I try to import it like this: mongoimport --file toImport.json -c myData
[16:04:21] <Bartzy> bjori: If I only want to sort by them ?
[16:04:27] <Bartzy> And show the date time on the comment
[16:04:31] <joeljohnson> and I get this error: exception:unknown error reading file
[16:04:36] <Bartzy> bjori: Why would I want to get all comments from one day ? :P
[16:04:38] <joeljohnson> any idea why?
[16:12:03] <bjori> joeljohnson: have you validated the file?
[16:12:57] <bjori> Bartzy: idk... I would still use a separate datetime field
[16:14:02] <joeljohnson> bjori: I've done this: cat toImport.json | python -mjson.tool to look at it formatted, and it didn't have a problem. Do you know of a quick way to do full json validation?
[16:15:28] <bjori> joeljohnson: I usually use jsonlint.com.. but for a 100mb file that probably isn't an option :)
[16:15:37] <joeljohnson> :)
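For a 100 MB file, a quick local check with node is an alternative to pasting it into jsonlint; the file name here is taken from the mongoimport command above.

```javascript
// Parse the whole file locally; JSON.parse throws a SyntaxError on failure.
var fs = require('fs');
try {
    JSON.parse(fs.readFileSync('toImport.json', 'utf8'));
    console.log('valid JSON');
} catch (e) {
    console.log('invalid JSON: ' + e.message);
}
// Note: mongoimport expects one JSON object per line by default; if the file
// is one big JSON array it also needs the --jsonArray flag.
```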
[16:17:23] <joeljohnson> looks like it's failing on this line… https://github.com/mongodb/mongo/blob/master/src/mongo/tools/import.cpp#L131
[16:17:26] <bjori> joeljohnson: see if --stopOnError gives you any better errormsg
[16:17:33] <joeljohnson> not sure what that means.
[16:17:39] <joeljohnson> bjori: ok.
[16:18:13] <joeljohnson> bjori: nope :(
[16:21:00] <joeljohnson> bjori: I ran the json through a formatter, then tried it on the new file, and the error changed
[16:21:01] <joeljohnson> http://pastie.org/4434606
[16:23:32] <bjori> invalid utf8?
[16:28:14] <joeljohnson> weird, must be. But there shouldn't be any UTF8 in there
[16:29:56] <bjori> joeljohnson: hmh? what sort of data is it?
[16:31:26] <joeljohnson> it's data generated by our test suite when we run our tests
[16:31:37] <joeljohnson> class/method names
[16:31:45] <joeljohnson> so this json is generated by java
[16:37:24] <joeljohnson> I don't think it's invalid UTF8 characters. I just used a tool to strip out all invalid UTF8 and it gives me the same error
[16:39:53] <tunele> hello everyone. I have a problem with a replica set and "couldn't determine master". I have searched both the official mongo google group and Jira, and found similar problems, but none of the workarounds proposed seems to fix my problem.
[16:42:25] <tunele> I have three mongo nodes, 1 of them is an arbiter. I'm running the latest mongo version and the latest mongo php driver. rs.status() on any node tells me that everything is working fine. But when I connect with php, I get the following error: "couldn't determine master".
[16:49:56] <linsys> tunele: Does rs.status() show a node as a master? Also in your php config are you listing all of the mongodb nodes? or just one?
[17:11:50] <skot> Generally you want a seed list of a few.
[17:12:10] <skot> rs.status() shows which node is primary, but db.isMaster() shows this more clearly/succinctly
[17:21:12] <cedrichurst> mapreduce question, let's say I'm reducing a sales collection into a customer collection
[17:21:44] <cedrichurst> sales has the structure {_id: …, customerId: 1234, price: 102.39}
[17:22:06] <cedrichurst> and customer has the structure {_id: 1234, name: 'ABC Widgets', price: 0}
[17:22:33] <cedrichurst> i want to do a mapreduce from sales that repopulates only the price value, without replacing the whole key
[17:22:37] <cedrichurst> is this possible?
[17:33:58] <skot> sorta, you have to replace all the values but you can just update the price
[17:34:09] <skot> you want to use the output:reduce option
[17:34:37] <skot> http://www.mongodb.org/display/DOCS/Mapreduce#MapReduce-Outputoptions
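A sketch of the output option skot names, using the field names from cedrichurst's structures; the reduce logic is illustrative. Note that with out: {reduce: ...} the result is still stored under a 'value' sub-document, which is the limitation discussed further down.

```javascript
// Re-reduce the map/reduce output into the existing "customer" collection so
// matching _ids are merged through the reduce function rather than replaced.
db.sales.mapReduce(
    function () { emit(this.customerId, { price: this.price }); },   // map
    function (key, values) {                                          // reduce
        var total = 0;
        values.forEach(function (v) { total += v.price; });
        return { price: total };
    },
    { out: { reduce: "customer" } }
);
```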
[17:42:11] <ashley_w> how can i use a variable in a regex in the CLI tool? /$user/ does not work.
[17:45:28] <skot> can you post your shell session with an example of what you are trying to do? (please use pastie/gist/etc)
[17:50:32] <ashley_w> skot: http://pastie.org/4439965
[17:53:09] <skot> and you want user to a javascript variable?
[17:53:17] <skot> where are you defining that variable?
[17:55:08] <ashley_w> in a previous line. it works for "{'address' : $user}"
[17:55:28] <ashley_w> it's an email address, so that could be causing problems.
[17:55:57] <skot> in javascript, variables don't usually include a $ prefix
[17:56:21] <skot> but there is nothing against it.
[17:56:47] <skot> You should check to make sure your regex evals correctly
[17:57:29] <ashley_w> so, how do i get a variable in a regex?
[17:58:10] <skot> I would guess using string concat with RegExp
[17:58:20] <skot> RegExp(".*" + $var)
[17:58:23] <skot> for example
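Putting skot's suggestion together with ashley_w's address query (collection name assumed); since an email address contains '.', a regex metacharacter, escaping it first is safer for an exact match.

```javascript
var user = "someone@example.com";                  // hypothetical value
db.messages.find({ address: new RegExp(user) });   // substring-style match

// Escape regex metacharacters for an exact, anchored match:
var escaped = user.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
db.messages.find({ address: new RegExp("^" + escaped + "$") });
```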
[17:58:30] <ashley_w> thanks
[17:58:33] <skot> np
[18:02:24] <ashley_w> skot++
[20:21:39] <elarson> would these queries be considered equivalent: {$and: [{x: 1, y:2}]} == {x: 1, y: 2}
[20:22:56] <ashley_w> no
[20:23:23] <ashley_w> the second will return if x is 1 or y is 2
[20:24:04] <skot> They are the same as the $and array has only one item.
[20:24:27] <skot> {x: 1, y: 2} mean that x must be 1 and y must be 2 in the document to be returned
[20:25:38] <skot> ashley_w: that is incorrect, "and" is implied and you need to explicitly use $or to get what you describe.
[20:26:28] <ashley_w> skot: oh? I'm pretty damn sure that hasn't been the case for me, but I'm no expert.
[20:28:44] <elarson> skot: whoops!
[20:28:46] <elarson> typed that in wrong
[20:29:03] <elarson> the and expression should have been: {$and: [{x: 1}, {y:2}]}
[20:37:47] <ashley_w> skot: thanks, and sorry about that elarson. but when i was first learning (which was pretty recent), not using $and wasn't working for me. dunno what i did wrong then.
[20:38:44] <elarson> ashley_w: I think they are not equivalent
[20:39:10] <ashley_w> they might not be, but i was still wrong. :)
[20:39:28] <elarson> or at least I could see how your data could have made it seem like the latter is like an OR
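A quick shell check of the equivalence discussed above (collection name assumed): top-level conditions are implicitly ANDed, so all three queries match the same documents, and an OR has to be spelled out with $or.

```javascript
db.things.insert({ x: 1, y: 2 });
db.things.insert({ x: 1, y: 3 });

db.things.find({ x: 1, y: 2 }).count();                   // 1
db.things.find({ $and: [{ x: 1, y: 2 }] }).count();       // 1
db.things.find({ $and: [{ x: 1 }, { y: 2 }] }).count();   // 1

// An OR must be explicit:
db.things.find({ $or: [{ x: 1 }, { y: 2 }] }).count();    // 2
```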
[21:40:11] <cedrichurst> is there any way to get rid of the 'value' property in mapreduce and write straight to the underlying collection object?
[21:47:30] <crudson> cedrichurst: vote for this https://jira.mongodb.org/browse/SERVER-2517
[21:48:38] <crudson> and read discussion for how some of us are working with it for now
[21:50:33] <sapht> what's the preferred way to sort a query based on a numerical field? is it fast enough to use .sort in a >10000 document collection or should i create capped aggregates and sort using $natural?
[21:51:15] <sapht> i could limit the scan to a maximum of maybe 1000 items but no less
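A sketch of the straightforward approach for sapht's case, with assumed names: index the numeric field so sort() walks the index instead of sorting ~10000 documents in memory, and cap the result with limit().

```javascript
db.scores.ensureIndex({ points: -1 });
db.scores.find().sort({ points: -1 }).limit(1000);

// explain() shows whether the index is used (a BtreeCursor and
// scanAndOrder: false in this era's shell output).
db.scores.find().sort({ points: -1 }).limit(1000).explain();
```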
[21:57:20] <cedrichurst> crudson: i couldn't really find many examples of how people are working with it for now
[21:57:39] <cedrichurst> unless you're referring to the eval thing
[22:00:55] <crudson> cedrichurst: a forEach loop that merges _id and value into top-level document attributes, either via eval or through a client driver. There is no way currently to not have the reduce value embedded in 'value'.
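A sketch of the workaround crudson describes, with assumed collection names: copy the nested value fields from the map/reduce output up into the target collection's top-level documents.

```javascript
db.mr_out.find().forEach(function (doc) {
    db.customer.update(
        { _id: doc._id },
        { $set: doc.value },   // lifts e.g. { price: ... } to the top level
        true                   // upsert in case the target doc is missing
    );
});
```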
[22:47:14] <emperorcezar> Can I declare fields unique together?