PMXBOT Log file Viewer


#mongodb logs for Friday the 18th of October, 2013

[00:02:58] <venturecommunist> if i have a field where values of the field might be A1, A2, A3, A4, etc. is there a way to do a $lt or $gt query that ignores the letter A?
[00:03:21] <venturecommunist> in other words, clearly if i had a field that's like 1, 2, 3 i can do less than and greater than, but what if the field is predictable but has an A in it
[00:18:34] <venturecommunist> wow looks like that worked out of the box actually. cool
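[Editor's note: the out-of-the-box result above comes with a caveat. $lt/$gt on string values compare lexicographically, which agrees with numeric order only while the digit count stays fixed. Plain JavaScript uses effectively the same ASCII comparison rule:]

```javascript
// Lexicographic order matches numeric order for A1..A9,
// but breaks as soon as the digit count changes:
console.log("A2" < "A3");   // true
console.log("A10" > "A2");  // false — "A10" sorts before "A2" as a string
```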
[01:14:13] <Pinkamena_D> How can I actually use the data gathered from an aggregation in the mongo shell? for example drop all of the matched documents?
[01:20:55] <crudson> Pinkamena_D: just iterate over the returned document values
[01:41:48] <cheeser> not sure why you'd use aggregation for that rather than, say, findAndModify()
[01:42:02] <cheeser> or just remove(), of course
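[Editor's note: a minimal sketch of both suggestions in a 2.4-era mongo shell, with a hypothetical `events` collection and `status` criteria:]

```javascript
// crudson's approach: iterate the aggregation result, remove by _id.
// In 2.4-era shells aggregate() returns { result: [ ... ] }.
var matched = db.events.aggregate([{ $match: { status: "stale" } }]).result;
matched.forEach(function (doc) {
  db.events.remove({ _id: doc._id });
});

// cheeser's point: when the criteria is expressible as a plain query,
// a single remove() does the same job without the aggregation:
db.events.remove({ status: "stale" });
```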
[04:32:55] <teechap> anyone use flask and mongoengine?
[07:29:01] <[AD]Turbo> hola
[08:32:31] <Carter_> Hi everyone
[08:34:18] <Carter_> Is running a javascript function that performs multiple queries and returns the result to the client in the mongo shell faster than issuing multiple queries from the client?
[08:41:20] <kali> do you mean "multiple queries" or "update queries with the "multi" flag set" ?
[09:58:07] <wilcovd> Hello all. I see some strange activity on the cluster, that I can't track down to a specific problem.
[09:58:15] <wilcovd> If I do db.currentOp() I see ops running multiple seconds
[09:58:31] <wilcovd> however, the slow query log does show response times of ± 200 ms
[09:59:21] <wilcovd> what does the "secs_running" measure in the currentOp() command, versus the millis field in the query log?
[10:18:34] <oceanx> hi guys I have a question, I have one mongodb database, I need to migrate to a new machine with as little downtime as possible, I can modify my application to push data coming in from the web application into the old and the new database together, but what's the best way to move old data? (would copy/clone work merging the two? or have I got to do something else?)
[10:19:01] <oceanx> thanks to everyone that would help :)
[10:21:47] <kizzx2> oceanx: you may set up the second one as a replica and then do a switch over of primary
[10:23:38] <oceanx> thanks :) I'll try
[10:37:06] <Nodex> or set it up as a slave :)
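[Editor's note: a sketch of kizzx2's replica-set approach with hypothetical hostnames; it assumes mongod on the old machine was restarted with a --replSet name first:]

```javascript
// On the old server's mongo shell:
rs.initiate()
rs.add("newhost.example.com:27017")   // new machine starts an initial sync

// Once rs.status() shows the new member caught up, hand over primary:
rs.stepDown()                          // triggers an election

// After the application points at the new host, drop the old member:
rs.remove("oldhost.example.com:27017")
```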
[13:48:16] <merrihew> Can you specify that mongod should bind to a specific interface (eth1) instead of its specific IP address?
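[Editor's note: as far as I know mongod binds to addresses rather than interface names, so the usual workaround is to put eth1's address in bind_ip — a config sketch, with the address as a placeholder:]

```ini
# /etc/mongodb.conf (2013-era ini-style options)
# bind_ip takes comma-separated IP addresses, not names like "eth1";
# 192.168.1.10 stands in for whatever address eth1 holds.
bind_ip = 127.0.0.1,192.168.1.10
port = 27017
```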
[14:22:40] <bertodsera> Is it feasible to have millions of very small databases, to avoid lock contention?
[14:25:22] <Nomikos> my gut feeling says "No"
[14:25:55] <Nomikos> wouldn't you run out of processes/threads anyway?
[14:28:47] <bertodsera> why would you? Only a very small amount of those DBs (like maybe 100) are used at exactly the same time. It would be like one db per customer.
[14:29:03] <bertodsera> so a customer locks HIS stuff, and that stuff only
[14:30:54] <Nomikos> I'm pretty new to MongoDB, but it feels "wrong". You'd have to tweak a bunch of settings; Mongo will reserve hundreds of MB for any new db and/or collection, times a million is lots. There's a max number of open files you can have. Lots of overhead IYAM
[14:31:36] <Nomikos> if lock contention becomes a big problem maybe have another look at your application's implementation?
[14:31:48] <bertodsera> oh, does that mean we have an open file for each db anyway? In that case, yes, pretty much end game
[14:32:15] <bertodsera> I would have expected a file to be there after a connection set a USE for it, not otherwise
[14:32:25] <bertodsera> *an open file
[14:32:45] <Nomikos> I'm not sure about this tbh
[14:33:17] <Nomikos> but mongo is built for speed, at the expense of ram & disk space, and other resources too probably
[14:33:48] <Nomikos> /is/ lock contention a problem atm or are you just looking for potential solutions if it becomes one?
[14:35:47] <bertodsera> I have hundreds of processes sitting out there, waiting to access some info and change it. Any customer can grab a process and use it. It takes fractions of a second to serve a request, but since the writing lock appears to be PER DB, I am looking for a way to remove a clear incoming bottleneck
[14:36:33] <Nomikos> would sharding help?
[14:36:37] <bertodsera> so what I was thinking to do, is to use a key value (the customer id) as a DB name
[14:36:59] <bertodsera> how could it, if the lock is for db? Or am I missing something?
[14:37:37] <Nomikos> the mongod instances are independent, a write lock on one does not affect the others
[14:38:36] <bertodsera> MongoDB uses a readers-writer [1] lock that allows concurrent reads access to a database but gives exclusive access to a single write operation. (from the docs)
[14:38:52] <bertodsera> so I guess I am misunderstanding something
[14:39:01] <Nomikos> "Sharding improves concurrency by distributing collections over multiple mongod instances, allowing shard servers (i.e. mongos processes) to perform any number of operations concurrently to the various downstream mongod instances."
[14:39:13] <bertodsera> that is what :) Thanks!
[14:39:39] <Nomikos> from http://docs.mongodb.org/manual/faq/concurrency/ - don't take my word for it :-) I'm pretty new to this
[14:39:43] <bertodsera> except it does not help much
[14:39:56] <Nomikos> ah
[14:39:57] <bertodsera> I don't have millions of processes I can use
[14:40:10] <bertodsera> we are back to square 1
[14:41:05] <Nomikos> but if you use separate machines, the nr of processes should increase at least enough to serve a number of customers at the same time?
[14:41:38] <bertodsera> Nopes, I have money for ONE server, and that is it
[14:41:46] <bertodsera> will use a traditional RDBMs
[14:42:17] <bertodsera> there does not seem to be a way to get out of this in Mongo
[14:43:34] <bertodsera> we go Cassandra
[14:43:37] <bertodsera> thanks :)
[14:43:57] <Nomikos> uhm, you're welcome. I hope next time you find someone more knowledgeable :)
[14:48:07] <bertodsera> sadly I don't have much time to make a decision, but we shall keep an eye on the issue
[14:59:29] <Nomikos> See, I had no idea that was posisble
[14:59:32] <Nomikos> *possible
[15:03:34] <Nomikos> unless it's with VMs or somesuch?
[15:26:14] <BurtyB> Nomikos, not using VMs - I have 4 shards on rfc1918 IPs on a dummy interface and mongos is on the external IP.
[15:26:58] <Derick> BurtyB: use different ports instead?
[15:27:25] <BurtyB> Derick, they were physical servers so to save reconfiguring I just used local IPs and configured the dns
[15:51:11] <mst1228> is it possible to have a unique index in a nested array of objects, but the unique index only applies to that array?
[15:51:20] <Derick> no
[15:51:25] <Derick> uniqueness is per document only
[15:51:36] <mst1228> ok word, thanks a bunch
[16:13:57] <mst1228> could someone look at this setup for me? I'm having a hard time grasping some concepts, and what I'm attempting to do might not be the best way or even possible
[16:13:58] <mst1228> https://gist.github.com/thorsonmscott/581a3d4b88c97b30b384
[16:14:04] <mst1228> more explanation in the gist
[16:15:22] <cheeser> you could use $addToSet so long as what you're trying to add is truly unique.
[16:15:32] <cheeser> the dates there might ruin that for you.
[16:15:32] <tripflex> mst1228: so just do an update when the user adds a tag
[16:15:47] <tripflex> what are you using to write the code in
[16:16:01] <mst1228> its a node app, javascript
[16:16:13] <mst1228> thats what i want to do, but i can't seem to form that query
[16:16:24] <tripflex> are you using mongoose?
[16:16:26] <mst1228> yes
[16:16:35] <mst1228> do you want to see that Schema definition?
[16:16:38] <tripflex> yea
[16:17:34] <mst1228> i updated the gist
[16:17:57] <mst1228> I had left out some other properties that aren't important for what i'm trying to do, i can remove them from this to make it easier to read
[16:20:10] <tripflex> so you want to update the user document when someone else (another user with another user doc) adds them
[16:20:10] <tripflex> right
[16:20:18] <mst1228> yes
[16:20:28] <mst1228> well
[16:20:46] <mst1228> i'm writing a little internal API for our organization
[16:21:06] <mst1228> so this idea is that someone could apply a 'tag' to someone else through this API
[16:21:35] <mst1228> for what i'm doing, i'm not really concerned that the other user has their own doc or not
[16:22:05] <mst1228> let me post the function where i'm trying to update this, might help
[16:25:32] <mst1228> i posted that, it's obviously not complete, that's where i got stuck
[16:26:33] <mst1228> i was playing around in the console trying to find the right combination of operators
[16:28:06] <mst1228> also, as someone mentioned before, if the dates are going to be difficult, they are not too terribly important
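[Editor's note: cheeser's $addToSet suggestion, sketched with hypothetical names (a `users` collection with a `tags` array of subdocuments); his date caveat is noted in the comment:]

```javascript
// $addToSet appends the subdocument only if an identical one is not
// already in the array. Including a timestamp like addedAt: new Date()
// would make every subdocument distinct and defeat the uniqueness check.
db.users.update(
  { _id: taggedUserId },           // the user being tagged
  { $addToSet: { tags: { name: "mongodb", addedBy: taggerUserId } } }
)
```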
[16:41:58] <meadhikari> i have a novice question, how would i replicate something like UNIQUE of mysql; not just on '_id' but on other fields too
[16:42:23] <Derick> meadhikari: you can create a unique index
[16:42:29] <mst1228> http://docs.mongodb.org/manual/tutorial/create-a-unique-index/
[16:42:31] <Derick> it's an option to ensureIndex
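[Editor's note: a minimal sketch of Derick's answer, with hypothetical collection and field names:]

```javascript
// Unique index on 'email': inserts or updates that would duplicate an
// existing value are rejected, much like MySQL's UNIQUE constraint.
db.users.ensureIndex({ email: 1 }, { unique: true })
```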
[16:47:19] <likarish> I have a collection that has documents like {type: <type>, {name: <name>, list: [<list elems>]}}
[16:48:15] <likarish> actually like {type: <type>, [{name: <name>, list: [<list elems>]}]}
[16:49:12] <likarish> so the document has a list of subdocuments. I need a query that returns every document with name == some_name whose list contains an some_elem.
[16:49:57] <tripflex> likarish: http://docs.mongodb.org/manual/tutorial/query-documents/
[16:50:01] <likarish> right messed up the example, should be: {type: <type>, field1: [{name: <name>, list: [<list elems>]}]}
[16:51:44] <meadhikari> thanks Derick, mst1228 got it working
[16:52:47] <tripflex> meadhikari: here's how
[16:52:49] <tripflex> http://www.querymongo.com/
[16:52:59] <tripflex> :P
[16:53:11] <tripflex> someone should add that to topic
[16:53:18] <tripflex> mysql questions get asked pretty frequently
[16:53:24] <meadhikari> tripflex, great link thanks
[16:55:13] <likarish> tried find({type: my_type, field1: {name: some_name, list: some_elem}}) but that doesn't work
[16:56:19] <tripflex> http://docs.mongodb.org/manual/reference/operator/projection/positional/#proj._S_
[16:59:57] <likarish> thanks for the links, tripflex
[17:01:57] <tripflex> ;-)
[17:02:57] <tripflex> yeah so, find({type: my_type, "field1.name": name, list: some_elem})
[17:03:04] <tripflex> whoops
[17:03:08] <tripflex> field1.list ^
[17:03:15] <tripflex> then you reference it using the positional operator
[17:04:00] <meadhikari> tripflex, if you maintain http://www.querymongo.com/ can u please make the text area larger :)
[17:04:55] <tripflex> so, find({type: my_type, "field1.name": name, "field1.list": some_elem}, { "field1.$": 1 })
[17:04:57] <tripflex> something like that
[17:05:04] <tripflex> nah i dont
[17:05:09] <tripflex> just use it frequently :P
[17:10:56] <leifw> tripflex: in the default example, why is there "if (true != null) ..."
[17:11:19] <tripflex> huh?
[17:11:37] <tripflex> what example?
[17:11:49] <leifw> when I go to querymongo.com, there's an example SQL statement typed in for me already, I click "translate" and it does the translation
[17:12:04] <leifw> the end of the reduce function looks bizarre
[17:12:18] <leifw> and also kind of needs brackets around the if blocks
[17:12:50] <likarish> tripflex: that query gets close, however in the results I'm seeing docs with field1.name and field1.list are matched in separate subdocuments. I want something that matches the name and element and both are in the same subdocument.
[17:12:59] <tripflex> its not going to be 1 for 1
[17:13:06] <tripflex> but it will help you to build the query yourself
[17:13:12] <tripflex> and no, it's not my site
[17:14:34] <likarish> is there some way to require that only match for subdocuments that fulfill with name == some_name and also contain an elem in their list?
[17:15:21] <likarish> I tried: find(type: some_type, field1: {name: some_name, list: some_elem}) but that didn't seem to work.
[17:16:01] <tripflex> conditional operators
[17:16:24] <tripflex> http://docs.mongodb.org/manual/reference/operator/query/
[17:16:36] <tripflex> logical operators i mean
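[Editor's note: of the operators on that page, $elemMatch is the one that ties both conditions to the same array element, which is what likarish is asking for — a sketch using the shape from the discussion, collection name hypothetical:]

```javascript
// Dot-notation conditions like "field1.name" and "field1.list" may each
// be satisfied by *different* subdocuments in the array. $elemMatch
// requires a single subdocument to satisfy both:
db.coll.find({
  type: my_type,
  field1: { $elemMatch: { name: some_name, list: some_elem } }
})
```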
[17:22:15] <saml> how do i copy db?
[17:22:21] <saml> from production to my machine so that i can drop it
[17:25:03] <likarish> saml: look at http://docs.mongodb.org/manual/core/import-export/
[17:25:12] <saml> db.copyDatabase
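[Editor's note: a sketch of the copyDatabase route, run from a mongo shell on the local machine; hostname and db names are placeholders:]

```javascript
// Pull 'proddb' from the production host into a local database;
// the production copy can then be dropped once the copy is verified.
db.copyDatabase("proddb", "proddb_local", "prod.example.com:27017")
```

mongodump/mongorestore (covered on the import-export page likarish linked) are the alternative when an on-disk snapshot is wanted first.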
[17:29:09] <likarish> tripflex: Great, that worked. Thanks for your help.
[17:39:57] <tripflex> no problem
[17:44:41] <chaotic_good> so ms sql days are numbered due to mongo?
[17:45:56] <tkeith> I didn't realize mongodb's configuration is insecure by default (allows no-auth connections from anywhere). It's been sitting like this for a while, but with no valuable data stored in mongodb. Is it possible that an attacker used my open mongodb server to cause any damage outside mongodb (access other files, run commands as the mongodb user, etc)?
[17:53:18] <tripflex> yuck ms sql
[17:53:37] <jyee> tkeith: not likely… practically impossible.
[17:54:19] <jyee> tkeith: connection to your DB was no-auth, but your OS should still require login and perform security for the mongodb user in the shell
[17:55:02] <jyee> generally the biggest issue you would have faced had someone connected to your db would be stolen data.
[17:55:36] <jyee> the second would have been if they filled up your mongo to take all disk space and try to crash your server.
[17:58:12] <tkeith> jyee: There was no worthwhile data, and I see no evidence that anyone created any. The server certainly didn't crash. I'm just making sure mongodb doesn't have a built-in way to execute commands on the OS.
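[Editor's note: for reference, the two 2013-era config options that close the hole tkeith describes — a sketch, not a full hardening guide:]

```ini
# /etc/mongodb.conf
bind_ip = 127.0.0.1   # listen on localhost (or a private address) only
auth = true           # require credentials once users have been created
```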
[18:15:13] <chaotic_good> with a basic non-sharded setup does the master only commit to disk while the slaves just hold things in ram?
[18:15:48] <cheeser> what?
[18:16:06] <cheeser> you have master/slave or primary/secondary?
[18:17:34] <chaotic_good> 1 master 2 slaves
[18:17:38] <chaotic_good> basic no shards
[18:17:53] <chaotic_good> like almost basic default config except for smallfiles=true
[18:18:41] <cheeser> why are you using that? replica sets have replaced master-slave
[18:23:55] <chaotic_good> yeah
[18:23:57] <chaotic_good> sorry
[18:24:00] <chaotic_good> its a replica set
[18:24:03] <chaotic_good> 3 node
[18:24:12] <chaotic_good> but node 1 seems to do all the disk writing
[18:24:17] <chaotic_good> has some io wait
[18:24:22] <chaotic_good> while other 2 dont do much
[18:24:23] <chaotic_good> hmm
[18:24:28] <chaotic_good> is this the way it's supposed to work?
[18:38:25] <cheeser> they should replicate reasonably soon after the primary writes.
[18:51:26] <chaotic_good> I think www.prevayler.org might be what mongo wants to be when it grows up
[19:16:27] <cheeser> chaotic_good: nice troll but prevayler has been around for at least a decade and it's gone nowhere.
[21:16:29] <pior> Am I the only one having trouble with the APT repository ?
[21:27:31] <stormbrew> pior: you need to change the dist field to 10gen
[21:27:36] <stormbrew> er, from 10gen to mongodb
[21:27:49] <stormbrew> and components field, sorry
[21:28:04] <stormbrew> they changed it and haven't updated the documentation
[22:20:21] <stormbrew> pior: and now they changed it back
[22:20:52] <pior> stormbrew, thx!