PMXBOT Log file Viewer


#mongodb logs for Thursday the 3rd of April, 2014

[00:01:14] <VooDooNOFX> proteneer: Of course, you'd likely be better off sharding that much data over several servers, and allow each to retrieve local data it's responsible for.
[00:02:19] <VooDooNOFX> proteneer: rather than try and store it on a zfs or gpfs volume and just having a single server hit that storage.
[01:28:42] <Richhh> any difference between collection.find().toArray(function( ... and collection.find({}, function(err, res) { res.toArray(function(... ?
[01:50:11] <sinelaw> hi! I'm using a text index and it's quite slow. how can I make it faster?
[01:51:09] <sinelaw> getting the first 100 results is fast, but specifically I'm trying to find the number of matching docs, so i have to set a very large limit
[02:03:01] <joannac> why not .count()
[02:04:09] <joannac> also, .explain() and see what mongodb is doing
[02:05:45] <sinelaw> joannac, the docs seem to imply that for text indexes I should use runCommand('text',...)
[02:05:56] <sinelaw> so count() is not available
[02:08:27] <_sri> the mongodb inc folks are prolly already used to the new $text operator from 2.6 :)
[02:09:39] <sinelaw> is it significantly different?
[02:10:01] <_sri> http://docs.mongodb.org/master/reference/operator/query/text/#op._S_text
[02:11:17] <_sri> point is that the operator works with different commands such as count
[02:11:39] <sinelaw> oh i'll try installing 2.6
[02:26:45] <groundup> I wish Mongo had the ability to add constraints to documents. It would only be an insert/update thing. I know this is something to do in the app and documents are schemaless. Still something that would make it better.
[05:53:53] <greybrd> while creating a replica set with two members, both nodes are primary even after adding one using rs.add() from the primary. what am I doing wrong? please gimme some pointers to solve the issue.
[06:36:19] <joannac> um, that shouldn't be possible
[06:36:27] <joannac> can you pastebin a rs.status()?
[06:37:42] <joannac> oh, they're gone
[07:19:38] <RaviTezu> Hi, I'm running rs.status() on replicaset members and I don't see "syncingTo" getting printed in the output.
[07:19:38] <RaviTezu> Some other hosts are printing it. Can someone help? Thanks
[07:20:14] <kali> mmmm i would assume they are the primary
[07:20:30] <kali> or they are not the same version
[07:25:03] <RaviTezu> kali: Thanks, they're primary nodes
[08:14:06] <greybrd> hi there. how to replicate a particular database in mongodb?
[08:40:49] <samurai2> use clone
[08:40:51] <samurai2> or copy
[08:41:39] <samurai2> greybrd : http://docs.mongodb.org/manual/reference/method/db.copyDatabase/
[08:42:42] <samurai2> greybrd : copy db doesn't need you to block other operations while you do it
[08:43:03] <samurai2> clone will need to do write lock to the database you want to replicate
[08:43:49] <greybrd> samurai2: thanks.. that's something I didn't know. write lock is not a problem. but how frequently to run this command is the question.
[08:44:07] <Nodex> I think he means property replication no?
[08:44:19] <greybrd> whereas with replication, I don't have to bother.
[08:44:28] <samurai2> o then you need a replica set right?
[08:44:41] <greybrd> Nodex: no. it's just a normal replication. except, not all databases. only that I specify. like in mysql.
[08:44:53] <greybrd> samurai2: I'm already using replica set.
[08:45:04] <samurai2> o so then why you need to copy it again?
[08:45:06] <Nodex> greybrd : yes that's what I said
[08:45:26] <Nodex> proper* not property sorry
[08:46:03] <greybrd> my problem here is. I'm collecting syslog events into a database, with a ttl of 1 hour. I'm mapreducing the events to a condensed format in another database within a single instance. now I want to replicate the condensed database. not the raw one.
[08:46:22] <greybrd> because it will have an insert rate of a few hundreds per second.
[08:46:26] <Nodex> best to use a tailable cursor
[08:47:29] <greybrd> Nodex: how will that solve my problem. anyway, thanks for the tip. I'll look into it now
[08:47:47] <samurai2> so then just copy the database where you put the result of your map/reduce :)
[08:50:34] <Nodex> greybrd : you wanted to replicate a SINGLE database. That is what a tailable cursor will allow you to do
[08:54:38] <greybrd> samurai2: I'll look into the db.copy(). I think that would be simple. but need to analyze the complications in a production environment when compared to replication.
[08:55:20] <greybrd> Nodex: thanks... I'm writing a sample in java for a tailable cursor now. will get back to your guys on that.. soon..
[08:55:26] <greybrd> samurai2: thanks a lot buddy. .
[08:55:55] <samurai2> greybrd : you're welcome. good luck :)
[09:54:28] <rspijker> I have a question about authentication on a sharded cluster. I want to connect to a single shard server (mongod), but authentication fails on it, no matter what. Authentication with the same credentials works fine on the mongos instance. I believe the mongod has its own admin db and, therefore, its own set of users. Is there any way for me to auth to the mongod?
[10:05:59] <jordig> Hello, I
[10:06:56] <harryy> I too. :D
[10:07:31] <jordig> Hello, I'm using MongoDB aggregation to get some data from a database, on the $project phase, I'm struggling to get the document _id as a normal string (What I do get is a MongoId Object as default). I'm using php and I'm creating sort of a simple JSON API.
[10:20:17] <rspijker> jordig: you can do .str
[10:23:05] <jordig> rspijker: .str?
[10:23:31] <jordig> just like '_id'=>'$_id.str'?
[10:29:56] <jordig> the .str is it Javascript? I'm trying to use it on the aggregation $project phase
[10:30:09] <rspijker> jordig: yeh, it's javascript
[10:30:23] <rspijker> sorry, read over the aggregation part
[10:30:50] <Nodex> jordig : are you not printing it out in php afterwards?
[10:30:57] <Nodex> i/e looping your results?
[10:31:27] <jordig> I would like not to (so it's somehow more efficient as it all happens in the database).
[10:31:58] <jordig> I get this, which is not very nice,
[10:32:06] <jordig> ["_id"]=>
[10:32:07] <jordig> object(MongoId)#90 (1) {
[10:32:07] <jordig> ["$id"]=>
[10:32:08] <jordig> string(24) "5298d611c99dc7832f000000"
[10:32:10] <jordig> }
[10:32:13] <Nodex> use a pastebin
[10:32:48] <rspijker> well... where does the data go?
[10:33:03] <rspijker> it has to end up somewhere, can you not use the javascript .str function there to display it?
[10:33:54] <jordig> sorry, http://pastebin.com/qN8FxJzL
[10:34:20] <jordig> the data goes to an iPhone App
[10:34:47] <jordig> I know I can parse the results, but I thought it would be easy to use the $project features to format data this way
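As noted above, the 2.4-era aggregation pipeline has no operator that casts an ObjectId to a string inside $project, so the conversion has to happen app-side before serializing. A minimal sketch of that post-processing step in JavaScript (a mock MongoId class stands in for the driver's real one; all names here are illustrative):

```javascript
// Mock of a driver MongoId/ObjectId, just for illustration.
class MongoId {
  constructor(hex) { this.$id = hex; }
  toString() { return this.$id; }
}

// Replace each document's MongoId _id with its hex string so the
// result serializes as plain JSON for an API response.
function stringifyIds(docs) {
  return docs.map(doc =>
    Object.assign({}, doc, { _id: String(doc._id) })
  );
}

const results = [{ _id: new MongoId('5298d611c99dc7832f000000'), name: 'x' }];
console.log(JSON.stringify(stringifyIds(results)));
// → [{"_id":"5298d611c99dc7832f000000","name":"x"}]
```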
[11:19:36] <phrearch> hello
[11:20:11] <phrearch> i did a mongoexport of a collection, and was expecting it to be valid json, but it doesnt seem to be the case when i run it through JSON.parse
[11:20:36] <phrearch> not sure if this is related
[11:20:37] <phrearch> https://github.com/tiefenb/MongoExport-to-JSON
[11:20:53] <phrearch> is there a simple way to convert the output of mongoexport to 'real' json
[11:20:54] <phrearch> ?
[11:28:29] <rspijker> mongoexport *should* output real json
[11:29:27] <rspijker> it might do separate documents though... So that might not be considered 'valid' json
[11:29:51] <rspijker> you could use --jsonArray
[11:32:14] <phrearch> rspijker: it doesnt add ',' 's
[11:32:39] <phrearch> ill give that a try. thanks
[11:35:47] <rspijker> phrearch: yeah, if you use the default it will create separate lines for separate documents, no wrapping around it. With --jsonArray it puts them all in an array and separates them with commas
[11:36:17] <phrearch> rspijker: jsonArray works fine. thanks for the tip
[11:36:23] <rspijker> np
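To make the difference concrete: mongoexport's default output is one JSON document per line with no commas and no enclosing array, which is why JSON.parse rejects it as a whole. A small sketch that converts that newline-delimited form into a single JSON array, i.e. what --jsonArray emits directly (function name is made up for illustration):

```javascript
// mongoexport's default output: one JSON document per line, no commas,
// no enclosing array. This turns it into a real JSON array.
function toJsonArray(ndjson) {
  return ndjson
    .split('\n')
    .filter(line => line.trim().length > 0) // ignore blank trailing line
    .map(line => JSON.parse(line));
}

// Example with two exported documents:
const exported = '{"_id":"a","n":1}\n{"_id":"b","n":2}\n';
console.log(JSON.stringify(toJsonArray(exported)));
// → [{"_id":"a","n":1},{"_id":"b","n":2}]
```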
[11:39:42] <phrearch> hm, i made a dump of a collection which changed in the meanwhile
[11:40:28] <phrearch> aah, ill just import it locally with mongoimport and then export again
[11:40:31] <phrearch> :)
[13:11:46] <mylord> what’s wrong here? db.users.findAndModify({query:{dId:"dId_1"}}, {update:{$set:{balance:5}}}, {upsert:true}, {new:true})
[13:12:01] <mylord> failed: "need remove or update"
[13:14:27] <ctin> hi all!
[13:14:28] <Nodex> why have you got "update" in there
[13:15:12] <Nodex> db.users.findAndModify({dId : "dId_1" }, { $set : { balance : 5 } }, {upsert:true}, {new:true})
[13:17:25] <_boot> what db does a shard config server actually use for shard config data?
[13:24:10] <mylord> Nodex, thx.. how can i write that from node? ie, collection.findAndModify({"query":{"dId":"dId_1"}, {"update":{"$set":{"balance":5}}}, "upsert":true, "new":true});
[13:24:30] <Nodex> the base node driver?
[13:24:38] <mylord> oops, minus the “update"
[13:24:59] <mylord> Nodex, I suppose the base node driver, yes. I did npm install mongodb
[13:25:22] <Nodex> you do it the exact same way I put it
[13:25:30] <Nodex> https://github.com/mongodb/node-mongodb-native#find-and-modify
[13:25:41] <Nodex> db.collection('test').findAndModify({hello: 'world'}, [['_id','asc']], {$set: {hi: 'there'}}, {}, function(err, object) { .....
[13:39:40] <mylord> Nodex, this works from mongo shell: db.users.findAndModify({query:{dId:"test_1"}, update:{"$set":{balance:5}}, upsert:true, new:true})
[13:40:02] <mylord> I read that now you need to explicitly write update.. or did i misinterpret?
[13:40:54] <mylord> your line didn’t work for me; same prob: “need update or remove”
[13:41:09] <mylord> i’m now struggling to get my working line to work from node
[13:42:27] <Nodex> then I doubt you're using the node native driver
[13:45:58] <Nodex> db.users.findAndModify({dId : "dId_1" }, { $set : { balance : 5 } }, {upsert:true, new:true}) ...
[13:49:15] <mylord> Nodex: how do I check regarding use of native driver?
[13:49:27] <mylord> i have, at top, var mongo = require('mongodb');
[13:51:38] <Nodex> it's your app lol, not sure how I'm supposed to know what you're using
[13:52:33] <mylord> Nodex: https://gist.github.com/adaptivedev/cf294141dd6f3e1d3132
[13:53:50] <Nodex> doesn't really look anything like https://github.com/mongodb/node-mongodb-native#find-and-modify
[13:54:51] <Nodex> either way it's the wrong number of arguments
[13:55:10] <Nodex> collection.findAndModify({dId:"script_2"}, {"$set":{balance:5}}, {upsert:true, new:true}, function (err, result) {
[14:01:18] <mylord> Nodex: first.. thx for continued help.. I’d modified the line to what you gave: http://hastebin.com/qoqodoqeju.coffee
[14:02:15] <mylord> the output looks the same tho, and no records added to mongo.. here’s node console.log : http://hastebin.com/busabibagi.sm
[14:04:16] <Nodex> you didn;t use my line
[14:04:22] <Nodex> upsert:true, new:true}, <---- you're missing a {
[14:04:31] <Nodex> sorry,, negate that
[14:05:48] <Nodex> and sorry, I forgot to add a sory
[14:05:50] <Nodex> sort*
[14:06:26] <Nodex> collection.findAndModify({dId:"script_2"}, [['_id','asc']], {"$set":{balance:5}}, {upsert:true, new:true}, function (err, result) {
[14:06:28] <Nodex> try that
[14:08:35] <mylord> Nodex, I tried, meanwhile, just like example, afaik, but didn’t work: https://gist.github.com/adaptivedev/9955033
[14:09:54] <mylord> Nodex, yours worked! thx!
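The confusion in the exchange above is that the mongo shell's findAndModify takes a single document ({query, update, upsert, new}) while the 1.x node native driver takes positional arguments: (query, sort, update, options, callback). A small helper sketch that maps the shell-style document onto the driver's argument list (the helper itself is invented here, purely to document the ordering):

```javascript
// Shell form:  db.users.findAndModify({query:…, update:…, upsert:…, new:…})
// Driver form: collection.findAndModify(query, sort, update, options, cb)
// Map the former to the latter's positional argument list.
function toDriverArgs({ query, update, sort = [['_id', 'asc']],
                        upsert = false, new: returnNew = false }) {
  return [query, sort, update, { upsert: upsert, new: returnNew }];
}

const args = toDriverArgs({
  query: { dId: 'script_2' },
  update: { $set: { balance: 5 } },
  upsert: true,
  new: true,
});
console.log(JSON.stringify(args));
```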
[14:12:37] <KamZou> Hi, is there a way to separate logfiles ? I'd like to have a logfile dedicated to errors/warnings
[14:22:37] <harryy> not natively, no
[14:22:46] <harryy> at least not that i know of
[15:51:37] <lagweezle> I'm hoping someone here might be able to help with my attempts at using mongokit. I'm trying to use inheritance and referring to the specific type of OAuth item as an attribute of the User class. I'm having a great deal of issue getting the OR bit to work as I'd expect. I've got https://gist.github.com/ncolton/9956763 to show what I'm attempting to do...
[15:51:47] <lagweezle> Well, and the errors I'm getting.
[15:51:52] <mylord> Nodex: how about this one: http://hastebin.com/tukusaroso.coffee
[15:52:07] <mylord> how comes the mongo shell works at bottom, but line 10 via node doesn’t work?
[15:54:21] <jackal_af> Anyone aware of a tool to help with data model development for Mongo?
[15:54:38] <mylord> Nodex: got it, was missing the toArray: collection.find(arg0, {safe:true}).toArray(function(err, result) {
[16:01:38] <mylord> Nodex, but its only returning the _id, not the entire document.. any ideas?
[16:04:03] <mylord> removing {safe:true} fixed that
[16:29:58] <richthegeek> I'm considering using Geo indexes for a game (2d procedural terrain + object/character position)... are there any limits of the lat/lng values? Like can I use integers for each square (~2m*2m) and it will still work fine if the lat or lng is like 250,000?
[16:31:09] <Derick> richthegeek: only with a 2d index, where you can set (have to, in your case) the index bounds
[16:31:24] <Derick> http://docs.mongodb.org/manual/tutorial/build-a-2d-index/#define-location-range-for-a-2d-index
[16:33:01] <richthegeek> ok ... can the index parameters be changed on the fly later (probably not?)
[16:34:49] <richthegeek> I guess it's whether to treat the game world like Earth with standard coords like 53.203949 or if using integers would cause issues (which it seems like it won't)
[16:35:01] <richthegeek> eh, guess i'll do some actual real testing tonight
[16:39:44] <Derick> no, they can't be changed, but you can drop and recreate the index :)
[16:40:03] <Derick> you can always scale by dividing everything by 1000...
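Derick's scaling suggestion, sketched: a 2d index only accepts values inside its declared bounds (the default is [-180, 180)), so large integer grid coordinates either need wider bounds declared at index creation, or need to be divided down to fit. A hypothetical helper (the SCALE constant is an assumption chosen so a ~250,000 coordinate lands inside the default bounds):

```javascript
// Default 2d index bounds are [-180, 180). Dividing game-grid
// coordinates by a fixed factor maps them into that range.
// SCALE = 2000 is an assumed value: 250000 / 2000 = 125, in bounds.
const SCALE = 2000;
const MIN = -180, MAX = 180;

function toIndexSpace(x, y) {
  const sx = x / SCALE, sy = y / SCALE;
  if (sx < MIN || sx >= MAX || sy < MIN || sy >= MAX) {
    throw new RangeError('coordinate outside 2d index bounds');
  }
  return [sx, sy];
}

console.log(toIndexSpace(250000, -4000)); // → [ 125, -2 ]
```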
[16:46:20] <Ryan_> Hey, I am using mapReduce and i was wondering if there is a way to not have keys with single values be in the array that gets returned? I am using finalize and just not returning the value if it isn't an array but it still stays in the returned array as null.
[16:47:43] <chet_> Hey there, I want to create a desktop application to complement a website that uses mongodb.
[16:48:06] <chet_> What sort of database should I use for the desktop app?
[16:51:15] <_maes_> hi everyone. Is it safe to restore mongodb backups taken on 2.2 to mongodb 2.4?
[16:51:29] <_maes_> I can't find any compatibility notes
[17:04:27] <Diplomat> hey guys
[17:04:30] <Diplomat> I have a weird problem
[17:04:37] <Diplomat> it takes tons of time to connect to my mongodb server
[17:04:43] <Diplomat> after the restart it works fine
[17:04:47] <Diplomat> but when i order like 20k rows
[17:04:51] <Diplomat> it goes slow again
[17:04:57] <Diplomat> takes like 7 seconds to return that much data
[17:08:01] <_maes_> Check disk performance, also make sure at least the indexes fit in memory
[17:09:01] <_maes_> your reads might be blocked by concurrent writes also
[17:09:20] <_maes_> you might consider enabling profiling for database
[17:15:16] <Diplomat> hmm
[17:15:27] <Diplomat> right now my dual core machine is maxing out all the time
[17:15:51] <Diplomat> I have around 3GB RAM free from 4GB and 2 cores are 100%
[17:16:04] <kali> how big is the database ? do you have the right index ?
[17:16:24] <Diplomat> I think I have because after restart everything comes up pretty fast
[17:16:30] <Diplomat> and database is mm
[17:16:50] <Diplomat> 1.953125GB
[17:17:18] <kali> check out what mongotop says when it goes slow
[17:19:11] <Diplomat> check this http://screencast.com/t/lOJSdpRKipC
[17:21:21] <kali> Diplomat: no surprise there ? that is the collection you're querying ?
[17:21:27] <kali> what about mongostats ?
[17:23:03] <Diplomat> http://puu.sh/7UBmo.png
[17:25:55] <kali> Diplomat: this is when your server is slow ? because this seems very fine
[17:26:10] <Diplomat> yeah it's pretty slow
[17:26:25] <Diplomat> http://puu.sh/7UBA3.png that's stable
[17:26:39] <Diplomat> it's dedicated to mongodb only
[17:34:41] <Diplomat> i should pretty much cry right now and hope it goes over ?
[17:37:56] <Diplomat> wtf now everything is fast again :|
[17:39:15] <Diplomat> and slow again :D
[17:39:41] <kali> what kind of server is that ?
[17:39:49] <Diplomat> cloud SSD server
[17:40:47] <Diplomat> well.. i'll move everything to a new server today
[17:40:51] <Diplomat> E3-1230 one
[17:40:56] <Diplomat> i guess it will have more power
[17:41:20] <Diplomat> well of course it has more power.. but i guess everything will work better there
[17:56:55] <kali> Diplomat: have you considered the external factors ? nothing else runs on the server ? you're testing from the server itself ?
[17:59:24] <Diplomat> yeah i checked other things too.. im moving both (web and mongo) to another server as soon as i receive them today
[17:59:56] <Diplomat> so i can analyse things.. because right now both servers are too weak to handle whatever i do
[18:11:24] <Diplomat> http://puu.sh/7UEDO.png looks better lol
[18:11:49] <Diplomat> but now I have to figure out how to transfer my database to there
[18:23:00] <lethjakman> hmmm...I was reading that mongo doesn't scale horizontally, if you try to shard it to different servers you can't use certain indexes. is that true? is there an easy solution for this?
[18:27:50] <cheeser> http://docs.mongodb.org/manual/core/sharding-shard-key-indexes/
[18:28:07] <cheeser> looks like only multikey indexes can't be used.
[18:34:55] <amergin> what would you recommend in this case: collection of documents where doc = { datasets: array, variables: array }. datasets is usually 1-5 in size (short strings), variables can have close to 100 items. Is there a reasonable structure that would allow me to 1) create an index for searching the right datasets & variables combination 2) allow the index to be unique.
[18:35:18] <amergin> as I understand, you cannot build an index for (array,array) combination
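amergin's understanding is right: a compound index can contain at most one multikey (array) field, so an index over (datasets, variables), two arrays, is not allowed. One common restructuring is to expand the cross product into one document per (dataset, variable) pair in a side collection and put a unique compound index on those two scalar fields. A sketch of the expansion step (pure JS; the field and function names are made up for illustration):

```javascript
// Expand {datasets: [...], variables: [...]} into one document per
// (dataset, variable) pair, so a unique compound index
// {dataset: 1, variable: 1} can enforce uniqueness on the combination.
function toPairDocs(doc) {
  const pairs = [];
  for (const dataset of doc.datasets) {
    for (const variable of doc.variables) {
      pairs.push({ dataset: dataset, variable: variable });
    }
  }
  return pairs;
}

const pairs = toPairDocs({ datasets: ['d1', 'd2'], variables: ['v1'] });
console.log(JSON.stringify(pairs));
// → [{"dataset":"d1","variable":"v1"},{"dataset":"d2","variable":"v1"}]
```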
[18:43:48] <cornerman> hey, i have an unexpected problem using map/reduce + a finalize function with the out:reduce option.
[18:43:56] <cornerman> i try to do some kind of join over two collections:
[18:44:12] <cornerman> the first map/reduce-phase emits some objects from collection A to a temporary collection, and the second phase emits some objects from another collection B, which are then re-reduced into the previously filled temporary collection (out:reduce option).
[18:44:36] <cornerman> this works as expected until i try to use a finalize function in the second map/reduce phase.
[18:45:10] <cornerman> first, the reduce is correctly called on all freshly emitted values and the finalize function on every reduced value (including values with unique keys). second, the freshly finalized values are re-reduced with the objects in the temporary collection, but this time only the reduced values are finalized (the finalize function is not called for values with unique keys).
[18:45:40] <cornerman> i would have expected that every object is finalized in the last step, as this is the place, where i really know whether a join/grouping was effective. is this intended behaviour?
[18:46:08] <cornerman> as normally the finalize function is called on all objects that are output
[18:48:41] <Diplomat> guys, backing up is as simple as "mongodump -d db -o mydir"
[18:48:41] <Diplomat> ?
[18:50:26] <cryptojuice> anyone know if mongoose can tell me which attributes were updated in a post save method?
[18:53:07] <cheeser> not a mongoose user, but "save" generally means a full document write vs an update
[18:53:25] <cheeser> not sure if mongoose redefines those terms or not
[18:55:27] <cryptojuice> thanks. im trying to execute specific functionality when only certain attributes change in a post save method. maybe there's a better way?
[18:56:06] <cheeser> nothing off the top of my head.
[18:56:47] <cryptojuice> cool. thanks.
[19:01:39] <Diplomat> damn mongodb is coolest thing ever
[19:01:58] <cheeser> heh
[19:02:44] <Diplomat> seriously lol
[19:02:48] <Diplomat> im still excited
[19:02:54] <Diplomat> after a month or so lol
[19:03:43] <Aboba> you need a woman in your life
[19:03:44] <Aboba> ...
[19:04:21] <Diplomat> I have lol past 4 years already
[19:04:23] <Diplomat> or 5
[19:04:24] <Diplomat> hm
[19:04:37] <Diplomat> almost 5
[19:04:55] <Aboba> then go to her, do something romantic, and think about how you could store future ideas in mongodb
[19:05:37] <Diplomat> LOL
[19:05:57] <Diplomat> http://screencast.com/t/albAWPlAg E3-1230v2 server.. i wonder what's using so much PCU
[19:05:58] <Diplomat> CPU
[19:06:41] <Diplomat> and the answer is.. mongo of course
[19:06:42] <Diplomat> interesting
[19:07:25] <Diplomat> I wonder why mongo uses like.. almost all the cores
[19:18:31] <Diplomat> you guys have no idea why my mongo uses too much CPU ?
[19:20:26] <kali> Diplomat: mongo usually does not consume cpu. more like a memory hog
[19:22:23] <Diplomat> :/
[19:23:20] <Diplomat> check this out http://screencast.com/t/sbE6mUDwx
[19:23:25] <Diplomat> 750%
[19:26:28] <kali> can you have a look for slow queries in the log ?
[19:29:10] <Diplomat> can you tell me please where they are
[19:29:28] <kali> Diplomat: that depends on your configuration
[19:29:58] <Diplomat> I have no idea, I haven't touched anything
[19:30:54] <kali> well, look at your config file :P
[19:31:07] <Diplomat> logpath=/var/log/mongodb/mongodb.log
[19:31:08] <Diplomat> LOL
[19:31:10] <Diplomat> this one ?
[19:31:59] <Diplomat> wat, it saves all queries ?
[19:32:41] <kali> only the slow ones, unless you've tweaked the profiler settings
[19:33:26] <Diplomat> hmm.. 806ms is slow ?
[19:34:05] <kali> yeah
[19:34:11] <Diplomat> http://pastebin.com/bVnGaaDB here i pasted tail
[19:34:46] <kali> mouhahah.
[19:34:52] <kali> you need indexes, man
[19:35:27] <Diplomat> LOL i yea
[19:35:30] <Diplomat> i just noticed i dont have them lol
[19:35:46] <Diplomat> well you know, you need to be dumb to become smart :D
[19:36:03] <Diplomat> but.. they are select queries
[19:36:06] <Diplomat> i think
[19:36:17] <kali> { visitor_id:1, project_id:1, _id: -1 }
[19:36:19] <kali> so what ?
[19:40:20] <Diplomat> nvm
[19:40:31] <Diplomat> im burned out and brain doesnt function as it should
[19:41:46] <Diplomat> i did
[19:41:46] <Diplomat> db.clicks.ensureIndex({"project_id":-1, "visitor_id":-1})
[19:41:47] <Diplomat> :D
[19:42:01] <kali> you want the _id in there too
[19:42:07] <Diplomat> ah ok
[19:42:15] <kali> at the end
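Why _id goes at the end: the slow query logged above filters on visitor_id and project_id and sorts on _id, and an index serves such a query when its leading keys cover the equality fields and the sort field follows them. A simplified prefix-rule checker, invented here purely for illustration (it ignores key direction and range predicates):

```javascript
// Simplified version of MongoDB's index prefix rule: equality fields
// must all appear among the index's leading keys, and the sort fields
// must follow them in order. Direction and ranges are ignored here.
function indexCoversQuery(indexKeys, equalityFields, sortFields) {
  const wanted = equalityFields.concat(sortFields);
  if (wanted.length > indexKeys.length) return false;
  const leading = indexKeys.slice(0, equalityFields.length);
  const eqOk = equalityFields.every(f => leading.includes(f));
  const sortOk = sortFields.every(
    (f, i) => indexKeys[equalityFields.length + i] === f
  );
  return eqOk && sortOk;
}

// The logged query: filter {visitor_id, project_id}, sort {_id: -1}.
console.log(indexCoversQuery(
  ['visitor_id', 'project_id', '_id'],
  ['visitor_id', 'project_id'],
  ['_id']
)); // → true
```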
[19:44:56] <Diplomat> hmm ok i cleared the log and i restarted mongo
[19:45:04] <Diplomat> and still stable 100% cpu usage
[19:45:18] <Diplomat> well 503% lol
[19:45:35] <kali> and what does the log say now ?
[19:45:52] <Diplomat> asme
[19:45:53] <Diplomat> same
[19:46:15] <kali> your collection is "visitor_click" not "click"
[19:46:24] <kali> or "clicks"
[19:46:37] <Diplomat> wait collection means database in mongo ?
[19:47:25] <Diplomat> db.visitor_clicks.ensureIndex({"visitor_id":-1, "project_id":-1, "_id":-1})
[19:47:32] <Diplomat> I added them like this now
[19:47:32] <Diplomat> :D
[19:47:33] <kali> no "s"
[19:47:47] <kali> in the log
[19:48:35] <Diplomat> oh god ..
[19:49:19] <Diplomat> ahahhaa
[19:49:29] <Diplomat> damn, thank god it's internet and nobody can see me
[19:49:44] <jaxyeh> is it possible to run indexes and search against multi-dimensional arrays?
[19:50:39] <Diplomat> LOL, cpu usage dropped like a rock
[19:50:50] <Diplomat> http://puu.sh/7UM51.png
[19:50:57] <Diplomat> thanks kali, you are my hero today
[19:51:24] <Diplomat> did you know your name "kali" means kvass in my language
[19:54:23] <kali> i don't know what kvass is :P
[19:54:32] <cheeser> i was gonna say... :)
[19:54:34] <kali> well, google does
[19:55:47] <Diplomat> :D
[19:56:19] <Diplomat> awesome
[19:56:25] <Diplomat> no queries in log anymore
[19:56:48] <kali> of course
[20:34:27] <TylerE> So what's the current state-of-the-art for real time analytics schemas, and querying same with variable granularity?
[22:23:10] <laprice> Got a little question regarding selectors in mongodb. I have a bunch of documents like https://dpaste.de/gSv2
[22:23:52] <laprice> I am trying to extract all those that have an entry for "dub" in the "tags" array.
[22:24:59] <laprice> db.testset.find({ tags: [ "dub", "100" ] },{artist: 1} ) gets me a list of documents. But only those for which the dub rating is 100.
[22:25:53] <laprice> I'm wanting to get all the documents that have a tag "dub" regardless of their rating.
[22:28:10] <laprice> From the docs it looks like $elemMatch only works in projections?
[22:31:16] <laprice> db.testset.find({ tags: [ "dub", "100" ] },{artist: 1, tags: { $elemMatch: { .0: "dub" } }} ) gets me the contents of the dub tag
[22:34:08] <laprice> db.testset.find({ tags: { $elemMatch: { .0: "dub" } },{artist: 1, tags: { $elemMatch: { .0: "dub" } }} ) //does what I want.
[22:34:23] <laprice> Thanks Rubber Ducky.
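What laprice's final query is doing, mimicked in plain JS: tags is an array of [name, rating] pairs, and the predicate is "some element whose position 0 equals 'dub'" at any rating ($elemMatch on position "0"), not an exact array equality match like the first attempt. A sketch of that predicate (sample data invented for illustration):

```javascript
// tags is an array of [name, rating] pairs, e.g.
//   { artist: 'X', tags: [['dub', '100'], ['reggae', '40']] }
// Match documents that have a 'dub' tag regardless of its rating --
// the plain-JS equivalent of $elemMatch on element position 0.
function hasTag(doc, name) {
  return doc.tags.some(pair => pair[0] === name);
}

const docs = [
  { artist: 'A', tags: [['dub', '100']] },
  { artist: 'B', tags: [['dub', '40'], ['ska', '10']] },
  { artist: 'C', tags: [['ska', '90']] },
];
console.log(docs.filter(d => hasTag(d, 'dub')).map(d => d.artist));
// → [ 'A', 'B' ]
```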
[22:41:40] <future28> Hi there, is there a way that I can retrieve only the latest entry to the profiler?
[22:41:42] <future28> I'm new to mdb
[22:52:19] <nmschulte> Hi all. Does anyone know why the build for r2.6* tags takes so much longer than the r2.5* tags?
[22:53:38] <nmschulte> When I use the -j option with scons, each ld process takes up nearly 2GB of RAM which leads to swap heaven...