#mongodb logs for Thursday the 12th of December, 2013

[06:35:47] <motaka2> hello I get this error while trying to use genghis: failed to connect to localhost 27017
[06:35:55] <motaka2> can anyone see my messages
[06:35:56] <motaka2> ?
[06:36:10] <jkitchen> nope
[06:36:21] <motaka2> jkitchen: :)
[06:36:43] <motaka2> I thought I was not allowed in this channel and needed some special login
[06:36:55] <motaka2> so do you know why this happens ?
[06:36:59] <jkitchen> what would make you think that?
[06:37:20] <motaka2> jkitchen: Got a message telling me to login
[06:37:24] <jkitchen> also, it could happen because mongo is not running on port 27017
[06:37:47] <motaka2> jkitchen: how can I check the port ?
[06:38:04] <jkitchen> assuming a linux system: netstat -lnp | grep mongo
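
A quick companion check from the mongo shell (a sketch, assuming a default local install): if mongod really is listening on 27017 the connection succeeds and ping returns ok, otherwise the Mongo() constructor throws a connection error.

    // attempt a direct connection to the default port; throws if nothing is listening
    var conn = new Mongo("localhost:27017");
    var db = conn.getDB("test");
    printjson(db.runCommand({ ping: 1 }));   // expect { "ok" : 1 }
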
[09:01:51] <robertjpayne> Is it safe to pass client side query params to find() and findOne()? Is there any worry of injection?
[09:04:24] <Nodex> no, only update / save / writes carry possible security holes
[09:04:27] <kali> not much, as long as you're not crafting $where or map/reduce javascript
[09:06:28] <robertjpayne> I'll have to check how it's implemented
[09:06:37] <robertjpayne> but I believe my driver does not craft javascript queries
[09:06:48] <robertjpayne> not without explicit request from my code :)
[09:06:52] <kali> yeah
[09:07:01] <kali> it would be very counter productive :)
[09:08:01] <robertjpayne> basically I have a REST API where the client can send up a "where" param that gets put into the query
[09:08:20] <robertjpayne> but I'm not doing any checks on that
[09:08:33] <Nodex> be careful with that
[09:24:23] <robertjpayne> Looks like Mongoid in Ruby doesn't do any sort of checks or limitations, so what I'm doing is, I believe, prone to a javascript function being passed in, which could be used to DoS the db
[09:26:21] <kali> robertjpayne: well, you'll need your web script to allow quite complex modification of the query document for that
[09:29:55] <robertjpayne> kali: if javascript is disabled though then the risk is (mostly) gone no? the OP_CODE will always be a read so I can't see a risk as the query would be broken
[09:32:40] <kali> robertjpayne: same if you make sure nothing allows building something like: <foo> : { $where : <bar> }
[09:33:11] <kali> even in that case, i think the db object is now inaccessible from the $where context (since 2.4, iirc) so the risk is non-existent
[09:34:19] <robertjpayne> kali: sounds good, yeah. My user input is validated as a valid JSON structure and goes directly to the driver as such (not a string); the values inside the JSON can be strings though, and that's where I'm looking for vulnerabilities at the moment.
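
A minimal sketch of the kind of guard being discussed (names hypothetical, not taken from any particular driver): strip $where keys out of a client-supplied filter before handing it to find(), so no client-supplied javascript can reach the server.

    // hypothetical sanitizer: recursively drop $where keys from a client-supplied filter
    function stripWhere(filter) {
        var clean = {};
        for (var key in filter) {
            if (key === "$where") continue;               // never let client-supplied javascript through
            var value = filter[key];
            if (value !== null && typeof value === "object" && !Array.isArray(value)) {
                value = stripWhere(value);                // recurse into nested documents
            }
            clean[key] = value;
        }
        return clean;
    }

    // usage sketch: db.items.find(stripWhere(clientSuppliedWhere))
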
[09:36:54] <bin> i'm left with an arbiter and a secondary
[09:37:01] <bin> and there is no election..
[09:37:02] <bin> why's that
[09:37:13] <kali> bin: show us a rs.conf() and a rs.status()
[09:38:33] <bin> wait
[09:38:59] <bin> http://paste.debian.net/70475/
[09:39:31] <kali> 3 arbiters ?!
[09:40:08] <kali> 5 nodes in your cluster, 3 are dead
[09:40:14] <kali> you get no majority
[09:40:17] <bin> well
[09:40:30] <bin> aaaaaaaaaah
[09:40:31] <bin> fuck yeah
[09:40:35] <bin> i got only 2 votes
[09:40:45] <bin> well i want to have 2 nodes only
[09:41:27] <kali> why on earth have you put 3 arbiters ? and on the same host ?
[09:43:43] <bin> because i just need 2 nodes
[09:43:45] <bin> and if one is dead
[09:43:50] <bin> there is only 1 member left
[09:43:55] <bin> so it cannot be elected
[09:44:03] <kali> you need the arbiter to be on an independent system
[09:44:09] <bin> well i don't have a machine :D
[09:44:11] <bin> i know that
[09:44:12] <kali> and to be only one
[09:44:14] <bin> but i don't have an extra machine
[09:44:28] <kali> well, in that case, you can't have automatic failover
[09:44:55] <joannac> you could have a data node on one, and data + arbiter on the other
[09:45:06] <bin> well
[09:45:08] <joannac> and then if the first one fails, you're still okay
[09:45:09] <bin> joannac:
[09:45:15] <joannac> and if the second one fails you're screwed
[09:45:18] <bin> well if the one with data and arbiter fails
[09:45:22] <bin> we are gone
[09:45:22] <bin> :D
[09:45:26] <bin> exactly
[09:45:37] <joannac> put an arbiter in AWS. it probably fits within the free tier, even?
[09:47:13] <kali> no reason to put an arbiter on the same system as a data node, just boost the vote weight of the node to 2
[09:47:49] <kali> it will have the same effect: if the other system fails, you're kinda ok; if the one with the two votes fails, you're down
[09:48:55] <kali> but yeah, better shove an arbiter on a free instance somewhere, or piggyback it on any system anywhere
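
A sketch of kali's suggestion in the shell (member index hypothetical; note that 2.4-era setups still allowed more than one vote per member, while newer MongoDB releases restrict votes to 0 or 1):

    // give the data node you want to keep standing two votes, instead of adding a co-located arbiter
    cfg = rs.conf()
    cfg.members[0].votes = 2      // index 0 assumed to be the preferred data node
    rs.reconfig(cfg)

As kali notes, this only shifts which single failure you survive; a real third system, even a tiny arbiter-only instance, is still the cleaner option.
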
[09:50:14] <bin> joannac: can you tell me more about AWS
[09:50:37] <kali> bin: http://aws.amazon.com/ec2/
[09:54:43] <bin> highly appreciated guys
[10:24:32] <joannac> You're right kali, i didn't think that through >.< my bad
[11:16:07] <tiller> Huum, I'm sorry for the foolish question I'm about to ask, but on : https://github.com/ansible/ansible-examples/tree/master/mongodb
[11:16:20] <tiller> They have this architecture: https://github.com/ansible/ansible-examples/raw/master/mongodb//images/site.png
[11:17:16] <tiller> And I'm not quite sure I understand how it works, because they use only 3 servers. How can there be 3 shards and 3 replicas? Are 3 mongod running on each server, or have I just misunderstood how it works?
[11:32:02] <tg2> looks like 3 shards + 3 replicas
[11:32:22] <tg2> 3 mongod per server yes
[11:32:44] <tiller> m'okay
[11:32:55] <tiller> is it a "good idea" to have multiple mongod running on the same server?
[11:33:22] <Derick> no
[11:33:27] <Derick> not for production
[11:33:30] <Derick> for playing around with, sure
[11:34:04] <tiller> ok. Thanks guys :)
[11:34:15] <Derick> tiller: that image you linked to does not use 3 servers.
[11:34:45] <Derick> but if it does, it's a bad idea
[11:35:10] <tiller> In theory it needs at least 9/10 servers, right?
[11:35:42] <Derick> yeah, 9 should do
[11:36:08] <Derick> main thing is that you can't have more than one data carrying node on each machine
[11:37:40] <tiller> Well, I think I need to deploy something like this to really begin to understand everything!
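
For reference, a sketch of how those nine data-bearing machines would be stitched together through a mongos (hostnames hypothetical): three 3-member replica sets, each added as one shard.

    // run against a mongos, once the three replica sets rs0/rs1/rs2 are up
    sh.addShard("rs0/host1:27017,host2:27017,host3:27017")
    sh.addShard("rs1/host4:27017,host5:27017,host6:27017")
    sh.addShard("rs2/host7:27017,host8:27017,host9:27017")
    sh.status()   // should list all three shards
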
[11:41:56] <Styles> are there any limits on gridfs?
[11:42:05] <Styles> total size*
[11:51:59] <kali> Styles: no practical limit
[13:55:30] <tiller> Running configuration servers & mongos on a server where a mongod is running is fine, right?
[13:55:44] <tiller> (in production)
[14:02:28] <cheeser> fine-ish, i suppose
[14:02:49] <cheeser> if that machine ever dies a lot goes down with it. but i think that's more or less how my last gig was set up.
[14:03:35] <tiller> ok, so for a first one, it's not so bad. Thanks
[14:16:57] <tomasso> is there any form generator for a mongo document? such as rails scaffolding?
[14:18:31] <Nodex> eh?
[14:43:05] <kali> Nodex: be kind, you frighten them
[14:52:04] <Nodex> :P
[15:40:55] <dandre> hello,
[15:41:32] <dandre> I have a collection where documents are like this:
[15:41:32] <dandre> {foo:{key1:{name:"name1"},key2:{name:"name2"},...}}
[15:42:39] <dandre> Is there any way to get all documents where foo.*.name matches some pattern?
[15:43:26] <Nodex> no
[15:43:28] <Derick> dandre: no, you should fix your schema as "key1" is not really a key in your case, but a value
[15:43:44] <dandre> if foo were an array, $elemMatch could be used
[15:43:50] <Derick> change it to: {foo: [ { k: key1, name: "name1" }, ....
[15:44:08] <Derick> then you can just do foo.name = something
[15:44:30] <dandre> ok
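
A small sketch of Derick's suggestion (collection name hypothetical): store the former keys as values inside an array, then match on foo.name directly.

    // restructured document, with the old "key1"/"key2" keys stored as values
    db.stuff.insert({ foo: [ { k: "key1", name: "name1" }, { k: "key2", name: "name2" } ] })

    // now a pattern match over every element's name is a plain query
    db.stuff.find({ "foo.name": /^name/ })
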
[15:46:50] <dandre> now I must update some of the subdocuments in foo. Can I do that in one run or must I send one update per subdocument?
[15:58:36] <dandre> In my case I must update (or insert) some subdocuments identified by a key in foo. These updates come from an array of documents. Can I update all those subdocuments in one run or must I do one update request for each subdocument?
[16:01:23] <leifw> can someone help explain to me what changed about optimes between 2.2 and 2.4?
[16:02:27] <leifw> somewhere we gained precision to milliseconds for the t part, but I'm having trouble understanding which formats did change and which did not, and what really happened to the javascript representation of optime/Timestamp types
[16:14:16] <ajph> hey. does anyone have any idea how to batch insert with the mgo Golang driver?
[16:20:06] <pwelch> hey everyone. does anyone know of a way to see how long a node in a cluster has been Primary without parsing through the logs?
[16:20:48] <pwelch> I know of rs.status() but dont see anything about how long that node has been the primary
[16:46:45] <rekibnikufesin> pwelch: Not aware of anything other than logs that would have that
[16:46:57] <pwelch> rekibnikufesin: ok, thanks
[16:55:57] <hanser> i have a collection that stores documents like http://www.hastebin.com/xayujagiho.apache, and i want to update the activator field (which is an array of objects). i used something like " > db.data.update( { key: "DEF", 'activator.cpuid': '3' }, { $set: { activator: { os: 'osx' } } } ) " but this overrides the whole activator object; i would like to keep activator's other fields too. any help?
[17:01:48] <astropirate> Friends, how do I do a case-insensitive search on fields?
[17:01:56] <astropirate> are regular expressions the only way?
[17:04:52] <astropirate> friends, why have thou forsaken me?
[17:05:06] <rekibnikufesin> astropirate: yup- regex
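
For example (collection and field names hypothetical), a case-insensitive match is just a regex with the i flag; keep in mind that case-insensitive regexes generally cannot use an index efficiently.

    // matches "alice", "Alice", "ALICE", ...
    db.users.find({ name: /^alice$/i })
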
[17:06:44] <hanser> answering my own question: the update statement should be: "db.data.update( { key: "DEF", 'activator.cpuid': "3" }, { $set: { 'activator.$.os': 'osx' } } )"
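
To spell out why that works (document shape assumed from the description, since the hastebin paste is gone): the query part matches a specific array element, and the positional $ in the update refers back to that matched element, so only its os field is touched.

    // only the array element whose cpuid is "3" gets its os field set; the rest of activator is untouched
    db.data.update(
        { key: "DEF", "activator.cpuid": "3" },
        { $set: { "activator.$.os": "osx" } }
    )
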
[17:16:53] <rekibnikufesin> A dev dropped a sharded, tagged collection last night. He recreated it, but didn't shard it (tags still existed). He started getting errors accessing the collection: stale config. I then dropped the collection, removed the tagging, recreated, re-sharded, and re-tagged; all was cool until this morning. Now all mongos routers are getting "too many retries of stale version info" when querying that collection
[17:17:08] <rekibnikufesin> ^^ Any thoughts?
[17:19:34] <dandre> I have this document: {foo:[{name:"name1"},{name:"name2"},{name:"name3"}...]}
[17:19:34] <dandre> I want to remove from foo all subdocuments where name is either name1 or name2. How can I do that?
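
No answer appears in the log, but a plausible sketch (collection name hypothetical) is $pull with a condition on the array element, which removes every matching element:

    // remove every foo element whose name is name1 or name2, across all documents
    db.stuff.update(
        {},
        { $pull: { foo: { name: { $in: ["name1", "name2"] } } } },
        { multi: true }
    )
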
[17:21:44] <stevefink> What timezone are DateTime objects created in by default? UTC? local machine time?
[17:44:10] <dandre> I think it is #milliseconds from epoch
[18:19:20] <bmw0679> What are the best practices for fast geospatial queries?
[18:40:15] <qswz> if mongo 2D feature is not fast enough for you
[18:40:25] <qswz> you could do something with http://en.wikipedia.org/wiki/Quadtree
[18:41:29] <qswz> it basically maps 2D coordinates onto one dimension through a tree
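
As a baseline before reaching for a custom quadtree, a sketch of the built-in approach (collection name and coordinates hypothetical, 2.4-era syntax): a 2dsphere index plus a $near query.

    // index GeoJSON points and find the closest documents within 1 km
    db.places.ensureIndex({ loc: "2dsphere" })
    db.places.find({
        loc: {
            $near: {
                $geometry: { type: "Point", coordinates: [ -73.97, 40.77 ] },
                $maxDistance: 1000
            }
        }
    })
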
[18:42:16] <includex> hi guys, little question: I've built a replica set (primary+secondary+arbiter), set up auth and a keyfile, and through rs.status() it looks to be working: http://pastie.org/8547962 but I can't auth and run rs.status() on the arbiter. I can't even auth on it. any tips?
[18:48:44] <encaputxat> hey
[18:50:13] <encaputxat> i have the same problem as https://jira.mongodb.org/browse/NODE-104
[18:53:13] <includex> "This is convenient, since you CAN'T log in to the arbiter ... it has no admin database to hold the system.users collection" humm ok :)
[20:28:07] <nicholi> quick question, to modify a user's roles...do you operate directly on db.system.users array ?
[20:36:49] <dgaffney> So, uh, I get this when I try out map-reduce….
[20:36:50] <dgaffney> https://gist.github.com/DGaffney/4ffa14b5e91d85721852
[20:37:02] <dgaffney> Any obvious reasons for this total MR fail?
[20:49:40] <joannac> dgaffney: "MongoDB will not call the reduce function for a key that has only a single value. The values argument is an array whose elements are the value objects that are 'mapped' to the key."
[22:07:18] <dgaffney> joannac: can you provide an example that is that simple that actually runs?
[22:11:01] <dgaffney> ahh, I think I get this now. NVM!
[22:11:43] <joannac> dgaffney: damn, i just pasted :p
[22:11:46] <joannac> dgaffney: http://pastebin.com/LcXqBJeV
[22:11:50] <joannac> but glad you got it
[22:12:03] <Fishbowne> what is the best way to interface a website to mongodb?
[22:12:08] <dgaffney> if the key is completely unique to the collection reduce won't get called?
[22:12:10] <Fishbowne> javascript ?
[22:12:12] <dgaffney> this is correct?
[22:12:37] <joannac> dgaffney: correct, if there's only one value for the id, it doesn't make sense to "reduce"
[22:12:50] <joannac> well, it does in your case. but in general
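
A minimal runnable sketch of the point joannac is making (not her paste, which isn't reproduced here; collection and field names hypothetical): reduce is only invoked for keys that received more than one mapped value, so it must return something shaped like a single mapped value.

    // count documents per "type"; reduce runs only for types that appear more than once
    db.items.mapReduce(
        function () { emit(this.type, 1); },                       // map
        function (key, values) { return Array.sum(values); },      // reduce
        { out: { inline: 1 } }
    )
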
[22:14:01] <Fishbowne> what is the best way to interface HTML to mongodb? any recommendations?
[22:14:21] <Fishbowne> javascript ?
[23:13:43] <quuxman> Is there a way to group updates so they only perform a single index operation? I'm trying to update just a couple thousand records, but I'm updating a massive multi_key index, so it's taking an unreasonably long time
[23:51:20] <quuxman> is there a mechanism for bulk updates?
[23:51:42] <quuxman> or to allow for a certain index to not update?