[00:02:58] <venturecommunist> if i have a field where values of the field might be A1, A2, A3, A4, etc. is there a way to do a $lt or $gt query that ignores the letter A?
[00:03:21] <venturecommunist> in other words, clearly if i had a field that's like 1, 2, 3 i can do less than and greater than, but what if the field is predictable but has an A in it
[00:18:34] <venturecommunist> wow looks like that worked out of the box actually. cool
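What makes it work out of the box: MongoDB compares strings lexicographically, so a range query over values like "A1".."A4" behaves as expected while the numeric part stays one digit wide. A minimal sketch in the mongo shell (collection and field names are hypothetical):

    // Lexicographic comparison: "A1" < "A2" < "A3" < "A4"
    db.items.find({ code: { $gt: "A1", $lt: "A4" } })

The caveat is that "A10" sorts before "A2" lexicographically, so this breaks once the numbers pass one digit unless they are zero-padded (e.g. "A01" … "A10").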
[01:14:13] <Pinkamena_D> How can I actually use the data gathered from an aggregation in the mongo shell? for example drop all of the matched documents?
[01:20:55] <crudson> Pinkamena_D: just iterate over the returned document values
[01:41:48] <cheeser> not sure why you'd use aggregation for that rather than, say, findAndModify()
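A minimal sketch of crudson's suggestion (collection name and pipeline are hypothetical; assumes a shell new enough, 2.6+, that aggregate() returns a cursor):

    // Find the documents via aggregation, then remove each one by _id
    db.orders.aggregate([
      { $match: { status: "stale" } }
    ]).forEach(function(doc) {
      db.orders.remove({ _id: doc._id });
    });

As cheeser implies, if the criteria fit a plain query, db.orders.remove({ status: "stale" }) does the same thing in one step.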
[08:34:18] <Carter_> Is running a javascript function in the mongo shell to perform multiple queries and return the result to the client faster than issuing multiple queries from the client?
[08:41:20] <kali> do you mean "multiple queries" or "update queries with the 'multi' flag set"?
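If Carter_ means running the queries server-side, the old mongo shell offers db.eval(), though it takes a global lock and is generally discouraged; a minimal sketch (collection names hypothetical):

    // Runs on the server in one round trip; note db.eval() holds a
    // global lock and is deprecated in later MongoDB versions
    db.eval(function() {
      return {
        users: db.users.count(),
        orders: db.orders.count()
      };
    });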
[09:58:07] <wilcovd> Hello all. I see some strange activity on the cluster, that I can't track down to a specific problem.
[09:58:15] <wilcovd> If I do db.currentOp() I see ops running multiple seconds
[09:58:31] <wilcovd> however, the slow query log does show response times of ± 200 ms
[09:59:21] <wilcovd> what does the "secs_running" measure in the currentOp() command, versus the millis field in the query log?
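A minimal sketch for eyeballing what wilcovd describes, in the mongo shell (the 2-second threshold is arbitrary):

    // List only in-progress operations that have been running a while;
    // secs_running counts wall-clock seconds since the op started
    db.currentOp().inprog.forEach(function(op) {
      if (op.secs_running && op.secs_running > 2) printjson(op);
    });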
[10:18:34] <oceanx> hi guys I have a question, I have one mongodb database, I need to migrate to a new machine with as little downtime as possible, I can modify my application to push data coming in from the web application into the old and the new database together, but what's the best way to move the old data? (would copy/clone work, merging the two? or have I got to do something else?)
[10:19:01] <oceanx> thanks to everyone that would help :)
[10:21:47] <kizzx2> oceanx: you may set up the second one as a replica and then do a switch over of primary
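A minimal sketch of kizzx2's suggestion (hostnames hypothetical; assumes the old mongod is restarted with --replSet so it can become a single-node replica set):

    rs.initiate()                          // on the old server
    rs.add("newhost.example.com:27017")    // new machine does an initial sync
    rs.status()                            // wait until the new member is SECONDARY
    rs.stepDown()                          // hand the primary role to the new machine

Once the application only talks to the new primary, the old member can be removed from the set.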
[14:25:55] <Nomikos> wouldn't you run out of processes/threads anyway?
[14:28:47] <bertodsera> why would you? Only a very small number of those DBs (like maybe 100) are used at exactly the same time. It would be like one db per customer.
[14:29:03] <bertodsera> so a customer locks HIS stuff, and that stuff only
[14:30:54] <Nomikos> I'm pretty new to MongoDB, but it feels "wrong". You'd have to tweak a bunch of settings; Mongo will reserve hundreds of MB for any new db and/or collection, times a million is lots. There's a max number of open files you can have. Lots of overhead IYAM
[14:31:36] <Nomikos> if lock contention becomes a big problem maybe have another look at your application's implementation?
[14:31:48] <bertodsera> oh, does that mean we have an open file for each db anyway? In that case, yes, pretty much end game
[14:32:15] <bertodsera> I would have expected the file to be created only once a connection issues a USE for it, not up front
[14:33:17] <Nomikos> but mongo is built for speed, at the expense of ram & disk space, and other resources too probably
[14:33:48] <Nomikos> /is/ lock contention a problem atm or are you just looking for potential solutions if it becomes one?
[14:35:47] <bertodsera> I have hundreds of processes sitting out there, waiting to access some info and change it. Any customer can grab a process and use it. It takes fractions of a second to serve a request, but since the writing lock appears to be PER DB, I am looking for a way to remove a clear incoming bottleneck
[14:36:37] <bertodsera> so what I was thinking to do, is to use a key value (the customer id) as a DB name
[14:36:59] <bertodsera> how could it, if the lock is for db? Or am I missing something?
[14:37:37] <Nomikos> the mongod instances are independent, a write lock on one does not affect the others
[14:38:36] <bertodsera> MongoDB uses a readers-writer lock that allows concurrent read access to a database but gives exclusive access to a single write operation. (from the docs)
[14:38:52] <bertodsera> so I guess I am misunderstanding something
[14:39:01] <Nomikos> "Sharding improves concurrency by distributing collections over multiple mongod instances, allowing shard servers (i.e. mongos processes) to perform any number of operations concurrently to the various downstream mongod instances."
[14:41:05] <Nomikos> but if you use separate machines, the nr of processes should increase at least enough to serve a number of customers at the same time?
[14:41:38] <bertodsera> Nope, I have money for ONE server, and that is it
[14:41:46] <bertodsera> will use a traditional RDBMS
[14:42:17] <bertodsera> there doesn't seem to be a way to get out of this in Mongo
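For reference, the pattern bertodsera was weighing looks like this in the mongo shell (names hypothetical): every customer gets a database, so a write lock only covers that customer's data, at the cost of the per-database file and memory overhead discussed above.

    // Route each request to the customer's own database; the per-database
    // write lock then only blocks that one customer's operations
    var customerId = "42";
    var cdb = db.getSiblingDB("customer_" + customerId);
    cdb.profiles.update({ _id: 1 }, { $set: { plan: "pro" } });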
[16:13:57] <mst1228> could someone look at this setup for me? I'm having a hard time grasping some concepts, and what I'm attempting to do might not be the best way or even possible
[16:17:57] <mst1228> I had left out some other properties that aren't important for what i'm trying to do, i can remove them from this to make it easier to read
[16:20:10] <tripflex> so you want to update the user document when someone else (another user with another user doc) adds them
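Without the pasted setup this is only a guess, but tripflex's restatement usually comes down to a pair of updates with $addToSet (all names hypothetical):

    // When userA adds userB, record the link on both user documents,
    // without creating duplicates if the add happens twice
    db.users.update({ _id: userB }, { $addToSet: { addedBy: userA } });
    db.users.update({ _id: userA }, { $addToSet: { added: userB } });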
[16:49:12] <likarish> so the document has a list of subdocuments. I need a query that returns every document with name == some_name whose list contains some_elem.
[17:11:49] <leifw> when I go to querymongo.com, there's an example SQL statement typed in for me already, I click "translate" and it does the translation
[17:12:04] <leifw> the end of the reduce function looks bizarre
[17:12:18] <leifw> and also kind of needs brackets around the if blocks
[17:12:50] <likarish> tripflex: that query gets close, however in the results I'm seeing docs where field1.name and field1.list are matched in separate subdocuments. I want something that matches the name and the element in the same subdocument.
[17:14:34] <likarish> is there some way to require a match only for subdocuments that have name == some_name and also contain the elem in their list?
[17:15:21] <likarish> I tried: find({type: some_type, field1: {name: some_name, list: some_elem}}) but that didn't seem to work.
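This is what $elemMatch is for: it requires all of its conditions to hold within the same array element, whereas separate dotted-path conditions may each be satisfied by a different element, and the plain embedded-document form likarish tried only matches subdocuments exactly. Carrying over the placeholders from the question:

    db.coll.find({
      type: some_type,
      field1: { $elemMatch: { name: some_name, list: some_elem } }
    })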
[17:44:41] <chaotic_good> so ms sql days are numbered due to mongo?
[17:45:56] <tkeith> I didn't realize mongodb's configuration is insecure by default (allows no-auth connections from anywhere). It's been sitting like this for a while, but with no valuable data stored in mongodb. Is it possible that an attacker used my open mongodb server to cause any damage outside mongodb (access other files, run commands as the mongodb user, etc)?
[17:53:37] <jyee> tkeith: not likely… practically impossible.
[17:54:19] <jyee> tkeith: the connection to your DB was no-auth, but your OS should still require login and enforce permissions for the mongodb user in the shell
[17:55:02] <jyee> generally the biggest issue you would have faced had someone connected to your db would be stolen data.
[17:55:36] <jyee> the second would have been if they filled up your mongo to take all disk space and try to crash your server.
[17:58:12] <tkeith> jyee: There was no worthwhile data, and I see no evidence that anyone created any. The server certainly didn't crash. I'm just making sure mongodb doesn't have a built-in way to execute commands on the OS.
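For anyone who finds their mongod in the state tkeith describes, the usual first steps are to bind it to a private interface and turn on authentication. A minimal sketch of the YAML config format (mongod 2.6+; adjust the address to your deployment):

    net:
      bindIp: 127.0.0.1        # listen only on localhost
    security:
      authorization: enabled   # require users to authenticate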
[18:15:13] <chaotic_good> with a basic non-sharded setup, does only the master commit to disk while the slaves hold things in RAM?