[05:42:35] <joannac> well, no, you don't have to. But indexes will get you better performance than collection scans
[05:43:08] <sahanh> so with indexes defined, AF is the way to go?
[08:06:59] <BaNzounet> Hey guys, if I've a doc with multiple keys and I run an update with only one key in the update section, will it override the other fields or will it just update the matching field?
[08:07:41] <BaNzounet> by override I mean delete other keys (fields)
[08:39:35] <kali> BaNzounet: you need to look up "$set"
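To make kali's pointer concrete: without an update operator, the update document *replaces* the whole document (apart from `_id`); with `$set`, only the named fields change. A minimal sketch (field names here are hypothetical), with the shell calls in comments and a local simulation of the two behaviours:

```javascript
// In the shell:
//   db.users.update({_id: 1}, {age: 31})          // replacement: other fields are lost
//   db.users.update({_id: 1}, {$set: {age: 31}})  // $set: other fields survive

// Tiny simulation of the two update semantics:
function applyUpdate(doc, update) {
  var hasOperator = Object.keys(update).some(function (k) {
    return k.charAt(0) === "$";
  });
  if (hasOperator) {
    var merged = Object.assign({}, doc);
    Object.assign(merged, update.$set || {});   // $set: merge fields in
    return merged;
  }
  return Object.assign({_id: doc._id}, update); // replacement: _id kept, rest dropped
}

applyUpdate({_id: 1, name: "ann", age: 30}, {age: 31});
// → {_id: 1, age: 31}  — "name" is gone
applyUpdate({_id: 1, name: "ann", age: 30}, {$set: {age: 31}});
// → {_id: 1, name: "ann", age: 31}
```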
[08:53:48] <ahawkins_> hey everyone. I have an array of strings, I need to query documents where an entry matches a regex. I used $in with $regex before but newer mongo versions are complaining about that. Any other way to construct this query?
[08:55:19] <kali> ahawkins_: can't you build the "or" in the regexp ? (regexp1|regexp2|regexp3|...)
[08:56:12] <ahawkins_> kali: there is nothing to or against. I don't know what's in the array.
[08:56:25] <rspijker> ahawkins_: what is it you are trying to do exactly?
[08:57:45] <ahawkins_> rspijker: given a document with an property "things" [foo, bar, baz], find all documents where a member of "things" matches /b/
[08:59:35] <kali> ahawkins_: sorry, i misunderstood the question :)
[09:00:01] <ollivera> Hi, is there any way to check the last time my collection has been updated? I tried db.getLastErrorObj()
[09:00:19] <rspijker> say "things" is a regular string, either "foo", "bar", or "baz". Then you can use $in:["foo", "bar", "baz"] to find any document that has things set to either one of those values
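For ahawkins_'s actual question (regex against an array of strings, without `$in`): a bare regex query on an array field matches documents where *any* element matches, so `$in` isn't needed at all. A sketch, with the shell forms in comments and the matching rule simulated locally:

```javascript
// In the shell, either of these finds docs where some element of "things" matches /b/:
//   db.coll.find({things: /b/})
//   db.coll.find({things: {$elemMatch: {$regex: "b"}}})

// What the server does, element-wise:
function anyElementMatches(arr, re) {
  return arr.some(function (s) { return re.test(s); });
}

anyElementMatches(["foo", "bar", "baz"], /b/);  // → true  ("bar", "baz" match)
anyElementMatches(["foo"], /b/);                // → false
```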
[09:00:34] <kali> ollivera: not trivially (you can look at the oplog if you have a replica set, but it's a bit hackish)
[09:00:51] <rspijker> also, your oplog might not go back that far
[09:02:04] <rspijker> if this is something you want to do routinely, you need to keep track of it yourself ollivera
[09:03:09] <rspijker> and it wouldn't work for other modifications
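The usual way to "keep track of it yourself", as rspijker suggests, is to stamp every write with a last-modified field and query for the newest value. A sketch (the field name `updatedAt` is my invention; `$currentDate` needs MongoDB 2.6+, older versions can use `{$set: {updatedAt: new Date()}}`):

```javascript
// Wrap your updates so every write also refreshes the timestamp:
function withTimestamp(update) {
  var u = Object.assign({}, update);
  u.$currentDate = {updatedAt: true};  // server fills in its own clock time
  return u;
}

withTimestamp({$set: {status: "done"}});
// → {$set: {status: "done"}, $currentDate: {updatedAt: true}}

// Then "when was the collection last updated?" becomes:
//   db.coll.find().sort({updatedAt: -1}).limit(1)
```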
[10:55:08] <BaNzounet> Is there anything wrong with this ` db.bars.update({installedOnce: true}, {$set: {installedOne: false}}, false, false);`? Cause it doesn't update my doc
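The query is syntactically fine; the likely culprit (assuming both names are meant to be the same flag) is the typo in the `$set`: it writes `installedOne`, not `installedOnce`, so matched documents grow a new field while the original flag stays `true`. A sketch demonstrating what the typo'd `$set` actually does:

```javascript
// Intended form (also note the 4th arg: true = multi, to update every match):
//   db.bars.update({installedOnce: true}, {$set: {installedOnce: false}}, false, true);

// What $set does to a matched document:
function applySet(doc, set) {
  var out = Object.assign({}, doc);
  Object.assign(out, set);
  return out;
}

applySet({installedOnce: true}, {installedOne: false});
// → {installedOnce: true, installedOne: false}  — the flag is never flipped
```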
[11:37:50] <rspijker> aaah, does smallfiles impact it then?
[11:38:17] <rspijker> all I can recall is that it limits the max file size to 512MB, don't know if it also makes initial files smaller
[11:39:55] <oskie> rspijker: I don't think the files are the issue - I have like 100 databases, each with 16 or so collections with "size" : 0 and "storageSize" : 8192...
[11:42:56] <rspijker> I can't find any documentation that explains what those settings do *exactly*, only somewhat vaguely
[11:43:26] <rspijker> I remember there being a video seminar that went into mongo storage mechanisms in a little more depth, but that was a while ago
[12:06:12] <kali> smallfiles reduces the size of the preallocation chunks from 2GB to 128M iirc. it's a must for integration tests
[12:06:56] <kali> noprealloc is good when running on filesystems that do not support preallocation (can be useful for tests, too)
[12:07:12] <kali> but i would not consider these two tweaks to be suitable for a production environment
[12:18:03] <oskie> i thought i read somewhere that preallocated files are NOT sparse
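For reference, the two options discussed map to mongod settings like these (a sketch of a test/CI profile; these are 2.x-era MMAPv1 options, and the size figures are as I recall them from the docs of that period):

```
# mongod.conf — test/CI profile, not for production (as kali says)
smallfiles = true    # smaller data files: 16MB initial, 512MB max per file
noprealloc = true    # skip data-file preallocation entirely
```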
[13:14:09] <Industrial> Hi. I have several collections that represent time based data. Right now they all have one property 't' which is a Date object (so down to the millisecond).
[13:14:35] <Industrial> Is it possible to merge several collections (with the same format) based on one property, also sorted by that property?
[13:19:39] <kali> Industrial: you may be able to do it with multiple map reduce
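Besides map-reduce, if each collection is already sorted by `t`, a plain client-side k-way merge also works (a sketch; collection names are hypothetical, and it assumes the result fits in memory):

```javascript
// In the shell, fetch each collection pre-sorted:
//   var a = db.sensorA.find().sort({t: 1}).toArray();
//   var b = db.sensorB.find().sort({t: 1}).toArray();

// Merge two t-sorted arrays into one t-sorted array:
function mergeByT(a, b) {
  var out = [], i = 0, j = 0;
  while (i < a.length && j < b.length) {
    out.push(a[i].t <= b[j].t ? a[i++] : b[j++]);
  }
  return out.concat(a.slice(i)).concat(b.slice(j));
}

mergeByT([{t: 1}, {t: 3}], [{t: 2}, {t: 4}]);
// → [{t: 1}, {t: 2}, {t: 3}, {t: 4}]
```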
[14:18:03] <mac_cosmo> hey! i'm new to mongodb and can't get my head around basic things … let's assume that i have an old project with a psql/mysql db, like this: http://pastie.org/9133802 what would be a good design when i have to redo this in a mongo db?
[14:22:17] <mac_cosmo> (in a real world example a user would have ~20 memberships)
[14:39:53] <mac_cosmo> 16 mb should be enough per party ;)
[14:41:15] <artimuscf> hi, i'm trying to write a .js script to clone a database. it works fine, but I would like to add some logic so I know if it was successful or not. I currently use var out = db.cloneDatabase('somehost'), but out is an object that i'm not sure how to parse
[14:41:19] <Nodex> in the future you can always chunk it
[14:42:24] <artimuscf> so basically i just do db.dropDatabase(), then db.cloneDatabase(). i just want to add some error handling. can anyone point me in the right direction
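A sketch of the error handling, assuming `cloneDatabase` returns a standard command result document with an `ok` field (and `errmsg` on failure), as most shell commands do:

```javascript
// In the shell:
//   var out = db.cloneDatabase('somehost');
//   checkResult(out);

function checkResult(out) {
  if (out.ok !== 1) {
    throw new Error("clone failed: " + (out.errmsg || JSON.stringify(out)));
  }
  return true;
}

checkResult({ok: 1});                       // → true
// checkResult({ok: 0, errmsg: "..."})      // throws, so the script stops
```

Note the order matters: check the clone result *before* relying on the earlier `dropDatabase()` having been safely replaced.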
[14:47:43] <emr> Hello, how can I find an object which has an array field containing an item? it's something like { "_id" : "37a6259cc0c1dae299a7866489dff0bd", "fruits" : [ "apple", "banana" ] }
[14:48:14] <emr> i want to find all documents which have the "apple" item in fruits
[14:48:17] <rspijker> you can just query as if fruits were a simple field containing the string you want to match against, emr
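In other words, `db.coll.find({fruits: "apple"})` is enough: an equality match against an array field matches documents where *any* element equals the value. The matching rule, simulated:

```javascript
// Equivalent of matching {fruits: "apple"} against one document:
function matchesArrayEquality(doc, field, value) {
  return Array.isArray(doc[field]) && doc[field].indexOf(value) !== -1;
}

matchesArrayEquality({fruits: ["apple", "banana"]}, "fruits", "apple");  // → true
matchesArrayEquality({fruits: ["banana"]}, "fruits", "apple");           // → false
```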
[15:43:48] <ekristen> I’m looking at one of my currentOps on a replica
[15:43:57] <ekristen> and a query has been running for 98 minutes for some reason
[15:44:14] <ekristen> I’m afraid this isn’t the first time this has happened — trying to figure out why
[15:48:15] <ekristen> is it possible for an index to be unhealthy?
[15:54:05] <jbrechtel> I've searched a bit and haven't found anything... has anyone seen anything done around rollback strategies for failed document schema migrations when using a replica set?
[15:55:46] <jbrechtel> I'm looking for ideas around what to do (either automate or plan to do) if we do a document schema migration and decide it has failed and we want to rollback. We've considered taking a member out of the replicaset and subsequently building a new replicaset around that in the event of a failure. This requires two application configuration updates though. Looking for possibly simpler approaches.
[16:25:10] <ekristen> I’m having what appears to be issues with one of my replica’s secondaries, certain queries seem to get stuck and will run for hours
[16:25:15] <ekristen> when they should return almost immediately
[16:25:30] <ekristen> I’m having to kill off the operations so they don’t overwhelm the server
[16:25:50] <ekristen> is there any way to troubleshoot why this is happening? it doesn’t seem to be happening on my primary and/or my other secondary
[16:32:19] <magglass2> ekristen: have you checked to make sure any indexes required by the query are in place? compare the indexes to those on your master
[16:34:10] <ekristen> is it possible for indexes to get corrupt?
[16:43:49] <dfinch> I'm using a third party component which creates tons of collections implicitly, I want to ensure these are configured correctly and indexes are added when they are created. my plan is to edit the library's calls to update/etc. to insert some step to do this. this will presumably have to be done on every write operation, so what's the best way to do this so I don't seriously affect performance of the queries
[16:43:50] <dfinch> themselves, and avoid multiple round trips between client and server?
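One common pattern for dfinch's problem (a sketch; names are my own): remember in-process which collections have already been set up, so the extra `ensureIndex` round trip happens once per collection per process instead of on every write.

```javascript
var configured = {};  // collection name → already set up this process?

function ensureConfigured(collName, setupFn) {
  if (!configured[collName]) {
    setupFn(collName);          // e.g. db[collName].ensureIndex({...})
    configured[collName] = true;
  }
}

// Demonstration that the setup cost is paid only once:
var calls = [];
ensureConfigured("events", function (c) { calls.push(c); });
ensureConfigured("events", function (c) { calls.push(c); });
calls;  // → ["events"]
```

`ensureIndex` itself is cheap when the index already exists, but skipping the round trip entirely keeps hot write paths at one trip per operation.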
[16:46:58] <magglass2> ekristen: can you run the query with a .explain() and compare it to the master?
[16:52:51] <magglass2> ekristen: takes too long to run or what?
[16:53:09] <ekristen> really don’t want to risk killing my production servers again
[17:21:17] <unholycrab> Is it possible to create multiple groups within MongoDB MMS?
[17:21:36] <unholycrab> I want to monitor multiple clusters that cannot reach each other, and so cannot be monitored from a single mongodb-mms-monitoring-agent
[17:39:18] <daidoji> hello, how do I know if I'm running into memory limits or whatever with MapReduce in Mongo?
[17:39:37] <daidoji> alternatively, here's some output explaining an issue I'm having that I can't seem to figure out https://gist.github.com/daidoji/1ef55617e69553afc4f8
[17:39:49] <daidoji> any help would be greatly appreciated
[17:46:07] <daidoji> also note that in the second bit of output the count is correct; there are three documents with that type, not two as in the first set of output
[18:04:48] <pure> So, considering a sharding use case whereby the users of the service can host either shards themselves or redundancy shards. Would there be a nice way to make sure the sharded databases are somehow encrypted from the host but still able to be used by the rest of the DB network?
[18:14:30] <pure> On another note: is having redundancy clusters a good idea for sharded systems?
[18:15:26] <magglass2> pure: not sure what you're asking; typically each shard would be a three-node replica set
[18:15:32] <cheeser> well, every shard should have its own replica set...
[18:16:17] <pure> I mean, each shard in the database should have its own redundancy databases thingy?
[18:35:05] <bemu> Fri May 2 18:33:49 TypeError: cfg.members.find is not a function (shell):1
[18:36:05] <daidoji> bemu: where does cfg come from? I've never done sharding, but empty [] is acceptable output for splice when there's nothing at that index
[18:37:08] <bemu> cfg is a full array of information, including members.
[18:37:23] <daidoji> also, in case anyone could help with my issue above, its now also a Google Groups post for mongodb-users https://groups.google.com/forum/#!topic/mongodb-user/C68gT2kn7O8
[18:37:47] <bemu> each member has an _id, host, etc.
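bemu's "cfg.members.find is not a function" most likely means the shell's JavaScript engine predates ES6's `Array.prototype.find` (the shells of this era did). A `filter`-based fallback works everywhere (a sketch; looking a member up by `_id`):

```javascript
// Portable replacement for cfg.members.find(m => m._id === id):
function findMember(members, id) {
  var hits = members.filter(function (m) { return m._id === id; });
  return hits.length ? hits[0] : null;
}

var cfg = {members: [{_id: 0, host: "a:27017"}, {_id: 1, host: "b:27017"}]};
findMember(cfg.members, 1);   // → {_id: 1, host: "b:27017"}
findMember(cfg.members, 9);   // → null
```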
[18:39:23] <mbeacom> Hey guys, have a quick question.. well, hopefully quick.
[18:39:39] <mbeacom> I have a collection structured like this
[18:39:54] <daidoji> bemu: isn't splice a javascript function anyways? I don't think it's a Mongo-specific thing
[18:39:54] <q851> daidoji: are you getting my messages?
[18:47:50] <daidoji> so there's nothing at [4] in that array
[18:48:01] <daidoji> https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/splice here's the docs on that function though
[18:51:09] <daidoji> Mongo docs assume you're familiar with Javascript though in case you run into other things that don't seem to have documentation in Mongodocs proper
[19:42:30] <mbeacom> Can you limit subdocuments in an aggregation?
[19:43:22] <mbeacom> That is my current aggregation
[19:43:40] <mbeacom> I want to limit the "messages" subdocument to 2 at a time
[19:44:43] <q851> mbeacom: I only know of http://docs.mongodb.org/manual/reference/operator/query/elemMatch/ to limit subdocuments.
[19:51:41] <mbeacom> q851: Ah, thanks. I don't think that is what I'm looking for though :(
[19:52:49] <q851> Why do you need to limit it to two at a time?
[19:53:01] <mbeacom> q851: To limit the subdocuments it returns, would you just let it return what it is going to return and then programmatically add them to an array?
[19:53:14] <mbeacom> I have threads.. some could have 1000 messages in them
[19:53:23] <mbeacom> I want to grab 15 before a certain timestamp
[19:53:45] <mbeacom> q851: so I can make a call to the server, grab 15 before a timestamp. then if the user scrolls up more, grab another 15 before a timestamp
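One way to get "N messages before a timestamp" out of an embedded array is to unwind it and paginate server-side (a sketch; the field names `messages` and `messages.ts` are guesses from the discussion): `$unwind` turns each message into its own document, then `$match`, `$sort` newest-first, and `$limit` do the paging.

```javascript
// Builds the pipeline for db.threads.aggregate(...):
function messagesBefore(threadId, ts, n) {
  return [
    {$match: {_id: threadId}},
    {$unwind: "$messages"},
    {$match: {"messages.ts": {$lt: ts}}},
    {$sort: {"messages.ts": -1}},
    {$limit: n}
  ];
}

var pipeline = messagesBefore(42, 1399000000, 15);
// Next "page": call again with ts set to the oldest messages.ts just returned.
```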
[22:22:44] <frodo_baggins> Hmm... I have an interesting problem to solve.
[22:22:48] <frodo_baggins> Interesting for me anyway. :D
[22:40:50] <frodo_baggins> Which brings me to my question... is there a way to have timed tasks occur within mongo, without using crontab?
[23:17:58] <frodo_baggins> Hmm... crontab scripts, or timed mongodb scripts?
[23:19:22] <frodo_baggins> Hmm, I just thought of a better idea. :D
[23:20:03] <frodo_baggins> But, that brings me to another question...
[23:20:42] <frodo_baggins> Should I create a collection called application, and then a document for settings? Or should I add a collection under system called application_settings?
[23:21:00] <frodo_baggins> Or something along those lines.
[23:21:37] <frodo_baggins> Okay, so, documentation is telling me not to create collections with the system.* prefix.
[23:22:10] <frodo_baggins> So, I guess I'll create a collection called application. :D
[23:30:46] <frodo_baggins> Another question... can $in be used with findAndModify?
[23:31:08] <frodo_baggins> I want to use findAndModify to query for the presence of an element within an Array on one document.
[23:31:18] <frodo_baggins> And if it's not there, add the element.
[23:32:12] <frodo_baggins> Ah, I see it is $addToSet I'm looking for.
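Right: `$addToSet` in the update of a `findAndModify` adds the element only when it's missing, so no separate presence query is needed. A sketch (collection and field names are placeholders), with the set semantics simulated:

```javascript
// In the shell:
//   db.coll.findAndModify({
//     query: {_id: 1},
//     update: {$addToSet: {tags: "new-tag"}},
//     new: true
//   })

// What $addToSet does to the array:
function addToSet(arr, value) {
  return arr.indexOf(value) === -1 ? arr.concat([value]) : arr;
}

addToSet(["a", "b"], "c");  // → ["a", "b", "c"]
addToSet(["a", "b"], "a");  // → ["a", "b"]  — already present, unchanged
```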
[23:49:02] <frodo_baggins> Is there a way to tell if an update operation ends up inserting a new document?
[23:50:04] <frodo_baggins> I would like to be able to call a method, or at least specify a conditional $addToSet on the event a new document is inserted.
[23:50:56] <ranman> frodo_baggins: maybe findAndModify with upsert is what you want?
[23:51:10] <frodo_baggins> I am using a findAndModify call.
[23:51:26] <frodo_baggins> And specifying an update parameter.
[23:51:51] <ranman> frodo_baggins: so the callback has the updated document
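To answer the earlier question directly: when findAndModify is run as a raw command, the reply carries a `lastErrorObject` that says whether the upsert matched an existing document or inserted a new one. A sketch (collection and field names are placeholders):

```javascript
// In the shell:
//   var res = db.runCommand({
//     findAndModify: "coll",
//     query: {key: k},
//     update: {$addToSet: {items: x}},
//     upsert: true, new: true
//   });
// res.lastErrorObject.updatedExisting === false  → a new document was inserted
// res.lastErrorObject.upserted                   → the new document's _id

function wasInserted(res) {
  return !!res.lastErrorObject && res.lastErrorObject.updatedExisting === false;
}

wasInserted({lastErrorObject: {updatedExisting: false, upserted: "abc"}});  // → true
wasInserted({lastErrorObject: {updatedExisting: true}});                    // → false
```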