#mongodb logs for Friday the 6th of June, 2014

[01:25:26] <dman777_alter> hi, anyone use mongoose in here?
[01:25:31] <dman777_alter> http://bpaste.net/show/4I7LDFQa3kKxbcP5kzrn/ I'm new to mongoose and mongodb. I have a static field metrics : [ { duration : "15m"...when writing the function that creates the document should I include it also as I have in the paste?
[02:17:07] <dman777_alter> http://bpaste.net/show/kAcBu9SK1kDZDxM1jku3/ ok, my first attempt to save a document. but it's not saving :(
[02:18:02] <joannac> error message?
[02:21:27] <dman777_alter> joannac: no...and I placed a console.log("hi"); message inside of cpu.save() which doesn't show anything either.
[02:22:36] <dman777_alter> I did verify the cpu document is getting created successfully.
[02:22:52] <joannac> ...okay
[02:23:08] <joannac> so the document is saving?
[02:24:49] <dman777_alter> joannac: no, because none of the code inside cpu.save() is running. I also went into mongo shell and db.cpu.find() didn't return anything
[02:25:19] <dman777_alter> I don't know why cpu.save() will not run
[02:28:35] <dman777_alter> it's cool...going to take a break. thanks though
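dman777_alter never pasted an error, so a likely culprit is a mongoose connection that was never opened: save() calls get buffered until connect() succeeds, which would match "nothing happens, not even the console.log". A minimal sketch of how to surface that (model and field names here are hypothetical, not taken from the paste):

```javascript
// Hedged sketch: open the connection explicitly and log failures from both
// connect() and save(), so a silent no-op becomes a visible error.
var mongoose = require('mongoose');

mongoose.connect('mongodb://localhost/test', function (err) {
  if (err) return console.error('connect failed:', err);

  var cpuSchema = new mongoose.Schema({ host: String, duration: String });
  var Cpu = mongoose.model('Cpu', cpuSchema);

  var cpu = new Cpu({ host: 'web01', duration: '15m' });
  cpu.save(function (err, doc) {
    if (err) return console.error('save failed:', err);
    console.log('saved', doc._id);
  });
});
```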
[04:08:57] <Cache_Money> I'm on an MBP. I usually start the Mongo daemon using $ mongod, but now I'm getting this error: ERROR: dbpath (/data/db) does not exist.
[04:09:17] <Cache_Money> how would I find out where my mongodb data directory lives?
[04:11:44] <cheeser> did you install using homebrew?
[04:12:05] <joannac> if you know a database name, you could try looking for "dbname.ns"?
[04:25:23] <Cache_Money> I ended up just creating the directory
[04:25:44] <Cache_Money> I know I used to have other collections in my mongodb before but oh well...
[07:26:01] <gancl> Hi! How to update or insert subdocument's array? http://stackoverflow.com/questions/24075910/mongoose-cant-update-or-insert-subdocuments-array
[07:27:22] <narutimateum> haha..same as my question
[07:51:50] <narutimateum> oh well.. i guess i have to use push and pull...
[07:52:27] <narutimateum> 2nd question... do i put reference in subdocument like this "_id" : ObjectId("538ef097576f1f7d505c2ebc"), or "_id" :"538ef097576f1f7d505c2ebc",
[07:55:51] <gancl> 1st one
[08:06:24] <narutimateum> so i have a problem where i dont actually know how to get this objectId type to register into mongo
[08:06:36] <narutimateum> im using mongodb eloquent query builder kind of stuff
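For narutimateum's two questions, a mongo shell sketch (collection and field names are made up): store the reference as an ObjectId, as gancl says, so it matches the _id type of the referenced document, and use $push/$pull to add and remove array entries.

```javascript
// Hedged sketch with hypothetical collection/field names.
// Add a subdocument to an array, keeping the reference as an ObjectId:
db.posts.update(
  { _id: ObjectId("538ef097576f1f7d505c2ebb") },
  { $push: { comments: { author_id: ObjectId("538ef097576f1f7d505c2ebc"), text: "hi" } } }
);

// Remove the matching subdocument again:
db.posts.update(
  { _id: ObjectId("538ef097576f1f7d505c2ebb") },
  { $pull: { comments: { author_id: ObjectId("538ef097576f1f7d505c2ebc") } } }
);
```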
[09:21:53] <djlee> Would i be right assuming that if i wanted to scale mongodb horizontally on AWS EC2, i would have to have my application code write to the master mongo/ec2, and place the slaves behind a load balancer and have my application do reads from the load balancer
[09:29:54] <kali> djlee: this behaviour is more or less built in the mongodb clients
[09:35:07] <djlee> kali: Ah, just noticed primary and secondary replica set members in the PHP MongoClient class, must be what you are referring to. Seems to only relate to reading rather than writing, but i'll ask google, as that must be the stuff i need to look at
[09:35:55] <kali> djlee: writes always go to the primary.
[09:36:45] <kali> djlee: you should not have to interact with these classes anyway, but use the ReadPreferences system instead: you can specify at the client, database, collection or even query scope where you want reads to go
[09:37:17] <kali> djlee: and DONT try putting a replica set behind a tcp load balancer. it will not work
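djlee is on the PHP driver, but the idea kali describes looks the same in any driver: give the client the replica-set seed list (no load balancer in front), let it route writes to the primary, and set a read preference wherever you want secondary reads. A rough Node.js-driver sketch, purely illustrative, with made-up host and database names:

```javascript
// Hedged sketch: the driver discovers the primary/secondaries from the seed list;
// read preference can be set per collection (or per query) rather than globally.
var MongoClient = require('mongodb').MongoClient;

var uri = 'mongodb://node1:27017,node2:27017,node3:27017/mydb?replicaSet=rs0';

MongoClient.connect(uri, function (err, db) {
  if (err) throw err;
  var coll = db.collection('things', { readPreference: 'secondaryPreferred' });
  coll.find({}).toArray(function (err, docs) {
    if (err) throw err;
    console.log('read', docs.length, 'docs, preferring a secondary when available');
    db.close();
  });
});
```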
[09:38:32] <_boot> if I start updating shards/replica set members to 2.6 from 2.4.9, are there likely to be any problems?
[09:39:55] <kali> _boot: http://docs.mongodb.org/manual/release-notes/2.6-upgrade/
[09:40:07] <_boot> ty
[09:40:18] <djlee> kali: ah right, load balancer idea is out the window then. No doubt i need some sort of sentinel/manager, which monitors slaves and i can use to query and return a suitable slave IP (dont worry, i'll happily google it, clearly got a lot to learn quickly :P)
[09:40:59] <kali> djlee: that's built-in
[09:41:40] <djlee> kali: i'll read the mongodb docs on replication etc. clearly they've implemented most of the stuff i thought id have to manually manage!
[09:41:47] <kali> djlee: the only thing you need is some kind of out-of-band monitoring to allow you to know that your replica is degraded
[09:42:04] <kali> djlee: mongodb is 21st century database :)
[09:42:07] <kali> +a
[09:43:18] <djlee> kali: yeah im still new to the nosql world. Spent my entire career on single MySQL hosts. Now i've got an api, mongo and redis instance that all needs to scale well, in the cloud, by next friday, and have been load tested (god i love client deadlines)
[09:43:41] <djlee> kali: i'll grab a coffee and get reading :D thanks for your help!
[09:44:48] <Nodex> lol
[09:44:59] <kali> yeah. i can't see how this can fail :)
[09:45:54] <kali> Nodex: hey, wanna have a look at my toy project ? https://github.com/kali/extropy
[09:45:59] <Nodex> yeh man
[09:46:18] <Nodex> you got a typo in your readme
[09:46:23] <Nodex> want me to raise an issue?
[09:46:45] <Nodex> developpers <--- 2 "p's"
[09:46:48] <Nodex> first line
[09:47:18] <kali> Nodex: i'll fix it
[09:47:26] <kali> Nodex: this is a "frenchism"
[09:47:39] <Nodex> containts <--- remove the last "t"
[09:48:36] <kali> and the "s" too. thanks.
[09:48:49] <Nodex> you can use the s if you need ot
[09:48:51] <Nodex> to*
[09:50:59] <Nodex> pretty cool, aside from the java requirement
[09:51:33] <kali> Nodex: you mean "java" or "java 8" ?
[09:51:46] <Nodex> strangely I employ a similar tactic for Mongo to solr mappings and de-normalisation
[09:52:29] <Nodex> all my mappings are simply json objects that are run through a parser that cleans / maps / parses things to build an eventual object for solr/mongo
[09:53:35] <Nodex> I love the license notes haha
[09:53:48] <Nodex> This work is free. You can redistribute it and/or modify it under the terms of the Do What The Fuck You Want To Public License, Version 2, as published by Sam Hocevar. See the COPYING file for more details.
[09:53:49] <Nodex> lmfao
[09:53:50] <kali> mmm and you plug that at which level ? just between the mongo client and dao ?
[09:54:15] <Nodex> I run it as an API, I simply throw an object to it and get back two objects - one for mongo, one for solr
[09:54:22] <kali> ok
[09:54:45] <Nodex> all functionality and parsing can be defined in json then
[10:12:36] <ajph> Nodex: we spoke before about business type stuff. mind if i PM?
[10:21:53] <wenxueliu> I made a shared library which is called by lua, but I get ./client.so: undefined symbol: mongoc_collection_destroy
[10:22:01] <wenxueliu> can anyone help?
[10:58:30] <rainerfrey> I get errors on the mms support form, so I ask here: what's up with MMS UI requesting to set up two-factor auth? I used to be able to access my MMS account without, but all links are redirecting to the setup form ...
[11:05:11] <Nodex> ajph : sure
[12:27:08] <gancl> Hi! How to let mongoose update or insert subdocument's array? http://stackoverflow.com/questions/24075910/mongoose-cant-update-or-insert-subdocuments-array
[12:47:20] <Nodex> sub documents can be accessed with dot notation
[13:07:44] <gancl> nemux: I know that. But it's in an array; it returns all the items in the array, and I just want to modify only one item
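What gancl is after is usually the positional $ operator: match the one array element in the query, then $set through "field.$" so only that element is modified instead of rewriting the whole array. A mongo shell sketch with made-up names (mongoose passes the same update document through Model.update):

```javascript
// Hedged sketch: "$" refers to the first array element matched by the query,
// so only that one subdocument is updated.
db.students.update(
  { _id: ObjectId("538ef097576f1f7d505c2ebb"), "scores.subject": "math" },
  { $set: { "scores.$.value": 95 } }
);
```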
[13:26:46] <dhm_edx> gridfs and I have been fighting for 2 days. I issue queries which take hours but show almost no activity other than locks.
[13:27:11] <dhm_edx> example query: var extra = db.fs.chunks.find({n: {$gt: 0}}); extra.length();
[13:28:26] <dhm_edx> what I'm trying to accomplish is that we changed the key order in our subdocs we use for files_id and I'm trying to find the chunks with the intermittently screwed up order and fix their files_id
[13:29:14] <kali> dhm_edx: an index on "n" should help this query. also try using count({..}) instead of find({ ...}).length
[13:29:59] <dhm_edx> i'm going to iterate over extra. I just wanted to see how many records I'm dealing w/ as they tell my users how many images are broken in current courses
[13:30:01] <kali> dhm_edx: but don't expect a miracle. gridfs schema was not designed for that
[13:30:07] <dhm_edx> it's just a one time query
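Concretely, kali's two suggestions in the shell (a one-off sketch; an index on a large fs.chunks costs disk and build time, so weigh that against a single scan):

```javascript
// Hedged sketch: index "n" so the $gt query doesn't scan every chunk, and use
// count() so the server returns a number instead of shipping each chunk
// (including its binary data) back to the client just to call length().
db.fs.chunks.ensureIndex({ n: 1 });        // 2.6-era helper; createIndex() in later releases
db.fs.chunks.count({ n: { $gt: 0 } });
```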
[13:35:58] <dhm_edx> my mongo monitoring console shows ~ 100% lock ratio but only about 3 command/sec and about 1 write/sec (I have another query updating 284 of the entries)
[13:36:36] <dhm_edx> that query is : for (var i = 0; i < found.length; i++) {
[13:36:36] <dhm_edx> var foundEle = found[i];
[13:36:36] <dhm_edx> db.fs.chunks.update(foundEle[1], {$set: {files_id: foundEle[0]}});
[13:36:36] <dhm_edx> }
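The loop itself is fine; one 2.6-era alternative (a sketch reusing dhm_edx's found array of [new files_id, query] pairs) is the shell's Bulk API, which sends the 284 updates to the server in batches instead of one round trip each:

```javascript
// Hedged sketch: the same updates as the loop above, batched with the MongoDB 2.6 Bulk API.
var bulk = db.fs.chunks.initializeUnorderedBulkOp();
for (var i = 0; i < found.length; i++) {
  bulk.find(found[i][1]).updateOne({ $set: { files_id: found[i][0] } });
}
bulk.execute();
```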
[13:37:10] <dsirijus> is there any convenient os x (gui) app to connect to remote mongodb?
[13:37:26] <dsirijus> i got spoiled by Sequel Pro on os x for sql ones
[13:38:55] <dhm_edx> i haven't found one
[13:44:27] <kali> dsirijus: mongohub
[13:44:56] <kali> dsirijus: this fork: https://github.com/fotonauts/MongoHub-Mac/tree/master
[13:56:54] <tscanausa> is anyone here going to mongodb world?
[13:57:46] <Derick> I can't make it
[14:06:54] <cheeser> i'm going!
[15:36:47] <dgarstang> Can anyone recommend a way to deploy mongodb cluster automatically? The chef cookbook for it kinda sucks
[15:45:37] <_dougg> Can anyone recommend a way to deploy mongodb cluster automatically? The chef cookbook for it kinda sucks
[15:49:25] <mischat> a question folk re: 2.6.2-rc0
[15:49:45] <_dougg> mischat: I think everyone is eating breakfast. :-\
[15:50:06] <mischat> it is mid afternoon for me, but noted
[15:50:18] <mischat> does anyone have a feel for how long releases stay in rc before being shipped ?
[15:50:23] <mischat> on average
[15:50:30] <_dougg> it's breakfast time at 10gen HQ if that counts
[15:50:38] <mischat> i know it is a rubbish question, and I apologise
[15:50:57] <mischat> yeah sure, I will loiter for a while, happy to get an answer later on
[15:51:10] <mischat> happy breakfast!
[15:51:34] <_dougg> i'm still waiting on finding out if there's a way to easily automate deployment of a mongo cluster.
[16:02:58] <saml> "errmsg" : "exception: can't use $not with $regex, use BSON regex type instead",
[16:03:05] <saml> db.urls.find({url:{$not:{$regex:/\/fashion\/fashionshows\/season/}}, type:'fashion_archive'})
[16:03:25] <saml> i want to find docs whose .url does not match regex but whose .type is fashion_archive
[16:04:16] <_dougg> i'm still waiting on finding out if there's a way to easily automate deployment of a mongo cluster.
[16:04:32] <saml> db.urls.find({url:{$not:/\/fashion\/fashionshows\/season/}, type:'fashion_archive'})
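saml's second query is the documented workaround the error message hints at: $not rejects the {$regex: ...} operator form but accepts a BSON regex literal.

```javascript
// Rejected: $not wrapping the $regex operator form.
// db.urls.find({ url: { $not: { $regex: /\/fashion\/fashionshows\/season/ } }, type: 'fashion_archive' })

// Accepted: $not wrapping a regex literal.
db.urls.find({ url: { $not: /\/fashion\/fashionshows\/season/ }, type: 'fashion_archive' });
```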
[16:12:32] <jigax> hello
[16:12:48] <jigax> cane someone help me
[16:12:59] <jigax> i have the following query
[16:13:00] <jigax> collection.aggregate({ $match: { techId: req.user.techId } },{$group : {_id: "$Order Class", data: {$sum: 1}}}
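jigax's snippet is missing the closing parenthesis on aggregate(), and the question never got finished; a completed sketch of that pipeline in the Node.js driver (the callback and error handling are illustrative, not from jigax's code):

```javascript
// Hedged sketch: count documents per "Order Class" value for the logged-in tech.
collection.aggregate(
  [
    { $match: { techId: req.user.techId } },
    { $group: { _id: "$Order Class", data: { $sum: 1 } } }
  ],
  function (err, results) {
    if (err) return console.error(err);
    console.log(results);   // e.g. [{ _id: "install", data: 12 }, ...]
  }
);
```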
[16:19:20] <_dougg> i'm still waiting on finding out if there's a way to easily automate deployment of a mongo cluster.
[16:26:45] <federated_life1> _dougg: its pretty easy already , there should be an announcement at mongo world with some automation tools similar to MMS
[16:40:51] <achiang> hello, i have successfully configured a collection to have a 2dsphere index, and geospatial queries such as geoNear already work. question is -- is there a "normal" search to see if the collection already contains a coordinate pair?
[16:41:31] <achiang> use case is preventing duplicate data from being inserted, so search collection for the coordinates first, if they exist, then do not insert
[16:44:32] <cheeser> can't you just upsert it?
[16:44:45] <cheeser> search then insert is vulnerable to race conditions.
[16:47:54] <achiang> ah, ok
[16:47:56] <heewa> another approach is to insert, check for dupe & remove yours. easier in some cases, if you don’t mind having dupes for a short period. (like in complex, multi-document, multi-collection scenarios)
[16:47:57] <achiang> i can try that
[16:48:02] <NodeJS_> Is it possible to make documents in specific order except array?
[16:48:19] <cheeser> only when you query
[16:48:27] <achiang> thanks cheeser, heewa
[16:48:34] <cheeser> heewa: also race conditiony
[16:48:53] <cheeser> if upsert doesn't work, define a unique index/constraint.
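In other words, let the database enforce it rather than doing the racy search-then-insert achiang described. A mongo shell sketch with made-up collection/field names: a second, non-geo unique index on the whole point subdocument rejects exact duplicates, and an upsert only inserts when no matching point exists yet.

```javascript
// Hedged sketch: the unique index sits alongside the existing 2dsphere index.
db.places.ensureIndex({ loc: 1 }, { unique: true });

db.places.update(
  { loc: { type: "Point", coordinates: [-122.4194, 37.7749] } },  // exact point to dedupe on
  { $setOnInsert: { name: "sample point" } },                     // only applied on insert
  { upsert: true }
);
```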
[16:49:20] <dhm_edx> @NodeJS_ all docs and subdocs are ordered; however, not all language impls provide access to the key order
[16:50:55] <NodeJS_> dhm_edx: what do mean?
[16:52:21] <NodeJS_> dhm_edx: Let's say I have the following documents in collection: {_id: ObjectId(1), title: ''}, {_id: ObjectId(2), title: ''}, {_id: ObjectId(3), title: ''}, how can I get it in specific order?
[16:52:58] <_dougg> Is there a good way to deploy an entire cluster? The community chef cookbook designed for the task is pretty flaky. Enterprise app... no way to automagically deploy?
[16:53:29] <dhm_edx> ah, by definition, order of persistence for separate objects does not guarantee order of retrieval; however, since ObjectId encodes the time stamp, you can sort by the implied time stamp I believe
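As a sketch of that: the first four bytes of an ObjectId are a creation timestamp, so sorting on _id gives roughly insertion order without any extra field (hypothetical collection name):

```javascript
db.items.find().sort({ _id: 1 });    // oldest first
db.items.find().sort({ _id: -1 });   // newest first
```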
[16:54:48] <NodeJS_> dhm_edx: in my case I need to insert new documents in specific position
[16:57:51] <NodeJS_> Any suggestions?
[17:01:50] <moleWork> _dougg, i'm currently working on this problem myself
[17:02:32] <moleWork> this cookbook is based on an old community cookbook i think
[17:02:57] <moleWork> but i'm spinning up mongodb replicasets with it on opsworks http://netinlet.com/blog/2014/01/18/setting-up-a-mongodb-replicaset-with-aws-opsworks/
[17:05:34] <moleWork> i also have a similar vagrant setup so i can spin up dev clusters that closely match production cluster on aws
[17:09:20] <og01_> Hi, i've come across something that i don't know how to do, i've pastebined an example: http://pastebin.com/BTQPgBvB
[17:09:20] <_dougg> moleWork: you aren't having problems with it? I can reliably bring up 2 replicas in a shard, but bringing up the third... chef fails
[17:09:47] <_dougg> moleWork: i have to add the third node manually in the primary replica, and then go back and run chef
[17:09:57] <moleWork> i don't have any shards sorry.... just replicaset.... i do have a 3 node replicaset no problem though
[17:10:02] <moleWork> and i can just add a node and it will join
[17:10:04] <og01_> I want to aggregate on the max time of one field while pulling out another field from that max
[17:10:37] <_dougg> yah *sigh* sharding makes life complicated
[17:10:48] <moleWork> well i'd guess it's a timing thing
[17:10:53] <og01_> please take a look at my paste, I might not be explaining my question well, i think the paste makes it clearer
[17:10:55] <_dougg> don't think so
[17:10:58] <moleWork> i'm struggling with chef right now
[17:11:07] <_dougg> although the latest incarnation of the issue is "
[17:11:19] <_dougg> oops "not authorized for query on local.system.replset" *sigh*
[17:11:32] <moleWork> heh
[17:11:43] <moleWork> that sounds solvable
[17:12:07] <_dougg> i think it's because I've enabled authentication
[17:12:24] <_dougg> friggin hell. sharding + authentication + SSL + automation = NIGHTMARE
[17:13:10] <tscanausa> _dougg: I actually rebuilt the chef cookbooks for my needs for both replica sets and shards.
[17:13:12] <_dougg> wait, I haven't enabled auth yet
[17:13:31] <_dougg> tscanausa: don't suppose you contributed that back?
[17:14:04] <moleWork> there's a million forks of these mongodb cookbooks
[17:14:20] <_dougg> moleWork: none contributed back that I've seen
[17:14:21] <tscanausa> _dougg: unfortunately no. i just just started from scratch.
[17:14:25] <moleWork> and chef seems pretty complicated, so many subtleties to timing and how it works
[17:15:00] <_dougg> well I'm open to alternatives. Mongo is supposed to be an enterprise app. An enterprise app should have a way to automate deployment of a cluster
[17:15:15] <tscanausa> it does you pay mongodb
[17:15:16] <moleWork> well there's the main edelight one... then the parsops one https://github.com/ParsePlatform/Ops/tree/master/chef/cookbooks/mongodb
[17:15:20] <moleWork> and the other one i pasted
[17:15:37] <_dougg> moleWork: the Edelight one is the one I started with. I've forked it to fix issues, but it still has more
[17:15:38] <moleWork> my problem is... i don't understand chef well enough
[17:15:50] <_dougg> moleWork: and ruby makes my eyes bleed
[17:15:56] <moleWork> i'm in same boat :(
[17:16:46] <_dougg> moleWork: Does the parsops one work?
[17:17:08] <moleWork> it's based on an old version of the edelight one and has its own issues and compatibility problems
[17:17:09] <_dougg> parsops one... not modified in a year or so. Not gonna touch it
[17:17:14] <_dougg> yah
[17:17:28] <moleWork> i'm in middle of writing my own custom one for opsworks
[17:17:31] <moleWork> and it's a nightmare
[17:17:46] <_dougg> the installation of mongo is easy... it's the orchestration and config that's a pain
[17:17:51] <moleWork> i don't understand why chef does what it does
[17:18:02] <_dougg> better than puppet...
[17:18:15] <moleWork> opsworks makes that a bit easier since you have access to all the instances within the built in opsworks attributes
[17:18:21] <_dougg> if there was an ansible way.... I'd switch to that right now
[17:18:22] <moleWork> and doesn't use chef-server which is annoying
[17:18:40] <moleWork> i want to use docker/ansible but i don't want to customize my whole aws stack to do it
[17:18:45] <moleWork> i just want to use some magic stuff
[17:18:57] <_dougg> docker might be ok if I had months, not weeks
[17:19:30] <moleWork> i'm trying to automate rolling node replacement :(
[17:19:33] <_dougg> i really think we'll just have to deploy with chef and configure by hand
[17:20:01] <moleWork> really need something other than ruby/chef to do this stuff, it's ugly
[17:20:30] <_dougg> giant cloudformation template?
[17:20:38] <moleWork> has the same problems
[17:20:56] <moleWork> still needs to autodiscover node and reconfigure the clusters
[17:20:59] <moleWork> when you replace nodes
[17:21:13] <moleWork> + add nodes
[17:25:16] <_dougg> I think the problem with the Edelight cookbook is that it works with 2 nodes because the first one has nothing to do, the second one becomes the primary, and the third one... well I just think they never tested that
[17:27:20] <moleWork> yeah tough to say
[17:27:35] <og01_> Im sure what im trying to do must be possible, can someone take a look at my pastebin example? http://pastebin.com/BTQPgBvB
[17:27:50] <moleWork> https://github.com/edelight/chef-mongodb/blob/master/definitions/mongodb.rb - i would try to debug whether or not your query on lines 204-210 brings you back all three nodes
[17:28:12] <og01_> im trying to think of ways to project the data, or use multiple stages in the aggregation pipeline, but i can't work it out
[17:28:25] <_dougg> moleWork: reading the docs on mongodb. even that says you had to add nodes via the primary. That's never going to work with chef.
[17:28:49] <moleWork> yeah the primary has to be up for sure
[17:29:05] <_dougg> moleWork: you add the third node, and bring it up. Chef fails because it's trying to run rs.add on a non primary node. You have to go back to the primary and run the add by hand and then chef will run ok on the third node.
[17:29:24] <_dougg> moleWork: it works on the second node because it makes that the primary
[17:29:40] <_dougg> (because there isn't a primary yet)
[17:31:07] <moleWork> here let me try
[17:31:29] <moleWork> i'm gonna bring 3 node replicaset up from scratch see if it works
[17:31:36] <_dougg> kk...
[17:31:39] <tscanausa> that command will run the other 2. its really not difficult. you just need the master to run the add commands
[17:32:08] <_dougg> tscanausa: the edelight cookbook isn't written that way
[17:33:57] <moleWork> it's ugly code but i'm pretty sure that code in that cookbook i initially pasted works
[17:34:45] <_dougg> moleWork: it can't
[17:35:16] <tscanausa> this is the ruby code I use to manage replica set https://gist.github.com/anonymous/1910558ada1354d101c9
[17:35:42] <moleWork> https://github.com/netinlet/chef-mongodb_replicaset/blob/master/mongodb/libraries/mongodb.rb if you look at line 138 onwards in this file
[17:36:03] <moleWork> err line 185 on
[17:36:12] <moleWork> you can see it connecting to the old members
[17:36:13] <_dougg> i dont think it's a chef code issue. it's an orchestration issue
[17:36:14] <moleWork> to reconfigure the replicaset
[17:36:35] <_dougg> it cant configure the replica set. The second node is the primary, and the third node is not
[17:37:00] <moleWork> well it's not even connecting to a single node
[17:37:01] <_dougg> the third node can't do bupkiss until it's added on the primary
[17:37:09] <moleWork> it's making a connection to the replicaset and reconfiguring it
[17:37:15] <_dougg> ... and then when you run chef on the third node it's happy
[17:37:20] <moleWork> which will automatically give you the primary node
[17:37:26] <_dougg> moleWork: right, and that fails because it's not primary
[17:37:38] <moleWork> no but it connects to the primary via the ruby driver
[17:37:45] <_dougg> hm
[17:37:54] <moleWork> by connecting to the replicaset you automatically get the primary for writes
[17:38:00] <_dougg> ic
[17:38:04] <dhm_edx> NodeJs_ once again, afaik, position is a nonsensical notion wrt dbs; however, if you have the notion of an order, then just add a field to represent that order and give it the serial # you want.
[17:38:06] <_dougg> something else is up then
[17:38:39] <_dougg> moleWork: the code shows it connecting to localhost tho
[17:38:40] <dhm_edx> if you need to insert between things frequently, then use floats or leave enough room for your upperbound # of inserts
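A sketch of dhm_edx's suggestion with made-up names: keep an explicit position field, sort on it when reading, and insert "between" two neighbours by choosing a value halfway between their positions.

```javascript
db.items.insert({ title: "a", pos: 1.0 });
db.items.insert({ title: "c", pos: 2.0 });
db.items.insert({ title: "b", pos: 1.5 });   // slots in between "a" and "c"

db.items.find().sort({ pos: 1 });            // a, b, c
```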
[17:38:51] <moleWork> yeah it has to connect to localhost to configure it
[17:38:58] <moleWork> and also connect to the existing replicaset
[17:39:00] <moleWork> to configure that
[17:39:08] <_dougg> moleWork: does connecting to localhost get you the local or the primary?
[17:39:55] <moleWork> i think it depends on how you connect to it
[17:39:57] <moleWork> with what options
[17:40:09] <fschuindt> hey guys, there is a way to set a default mongo config file? I always need to type $ mongod --config-file ~/mongo/mongodb.conf I want just to type mongod then go
[17:40:11] <moleWork> and depends if it's a member or not
[17:40:43] <_dougg> moleWork: it's just calling connection = Mongo::Connection.new("localhost", node['mongodb']['config']['port'], :op_timeout => 5, :slave_ok => true, :ssl => true, :ssl_cert => "/etc/ssl/mongodb-#{node.chef_environment}.pem")
[17:40:55] <_dougg> (i had to add the ssl stuff)
[17:40:57] <moleWork> when you connect to it as a replicaset, even with partial host list i believe it seeds the connection so you get to the primary (i think)
[17:41:21] <_dougg> when I connect with the same options via the cli shell, I get local, not primary
[17:44:40] <moleWork> yeah this code is big and ugly with huge else branches but if you read the code starting with " # remove removed members from the replicaset and add the new ones" you can see how it reconfigures the replicaset starting with the old members that already exist
[17:45:51] <_dougg> moleWork: added some more debug. looks like it IS connecting to ... wait...
[17:46:18] <_dougg> oh shit. it's connecting to the first node, which is second, not the primary. WTF?
[17:48:12] <moleWork> if you bring up the nodes with explicit priorities you can then know exactly which one will come up as primary
[17:49:00] <_dougg> moleWork: looks like the cookbook lets you set a priority.
[17:49:20] <moleWork> yup
[17:49:34] <moleWork> i haven't played with that yet cause i ditched that cookbook too because it didn't do what i wanted :(
[17:49:41] <_dougg> but then i have to manage node attributes, which sucks. i wonder how that works if we have to promote a slave to primary too
[17:49:54] <moleWork> i think it's "best practice" to set priorities
[17:49:58] <moleWork> but i'm not 100% sure
[17:50:09] <moleWork> otherwise how do you know which one is primary?
[17:50:17] <dsirijus> thanks, kali
[17:50:31] <tscanausa> you only need priority if you are having replica sets outside of your normal "cluster"
[17:50:34] <moleWork> if you need to do any thing it basically has to be done from primary so setting priorities makes it so you don't have to guess at which one is primary
[17:50:49] <moleWork> there you go
[17:50:51] <fraterlaetus> Hi. Is it possible to use mongodump or a similar utility to dump only the last 24 hours of data? I'm trying to reduce the size of the dataset I use for testing.
[17:51:46] <moleWork> fraterlaetus, mongodump has a query parameter
[17:51:52] <_dougg> tscanausa: unless you're using a broken cookbook
[17:53:25] <fraterlaetus> thanks moleWork
[17:54:54] <tscanausa> kinda a problem with the system design and lack of quality effort as in there are a lot of broken cookbooks
[17:55:54] <moleWork> i think it's lack of chef documentation paired with nasty rubyness
[17:56:13] <tscanausa> what do you mean by chef documentation?
[17:56:31] <moleWork> well i've been on this for about 3 weeks ish
[17:56:37] <moleWork> and i'm having a major chef problem i can't figure out
[17:56:42] <tscanausa> what is that?
[17:56:45] <moleWork> and i don't know how to figure it out
[17:57:00] <tscanausa> that reminds me that I am not in the chef channel.
[17:57:05] <moleWork> https://github.com/opscode-cookbooks/aws/blob/master/resources/ebs_raid.rb this particular resource is causing me problems
[17:57:10] <_dougg> i'm not even sure this shitty cookbook takes the priority into consideration when selecting the primary
[17:57:11] <moleWork> it's like "not working"
[17:57:19] <_dougg> chef documentation is .... rather lacking
[17:57:24] <moleWork> _dougg, priority is a mongodb thing
[17:57:31] <moleWork> mongodb decides who becomes primary
[17:57:35] <moleWork> you can't set it
[17:57:50] <_dougg> moleWork: i know but the cookbook seems to do a chef search and then pick the first
[17:58:07] <_dougg> moleWork: docs imply you can
[17:58:09] <fraterlaetus> have you considered using a node attribute like ":is_master"?
[17:58:13] <_dougg> http://docs.mongodb.org/manual/tutorial/force-member-to-be-primary/
[17:58:37] <moleWork> yeah by setting priority or removing all other members and reconfiguring it so that it's the only member
[17:59:00] <fraterlaetus> I'd make a db_master role and a db_subordinate role
[17:59:14] <_dougg> fraterlaetus: edelight cookbook doesn't use is_master
[17:59:25] <fraterlaetus> so extend it a lil
[17:59:26] <moleWork> i don't think is_master is settable
[17:59:42] <fraterlaetus> anything is settable in a node
[17:59:44] <_dougg> fraterlaetus i'm trying to keep changes to a minimum. ruby makes my eyes bleed
[17:59:54] <moleWork> fraterlaetus, not in mongodb though.
[17:59:54] <fraterlaetus> fair enough
[18:00:23] <_dougg> fraterlaetus you can set it but the cookbook has to read it
[18:00:40] <fraterlaetus> you'd just need to add it to the config file
[18:01:02] <_dougg> fraterlaetus: and modify the cookbook
[18:01:29] <fraterlaetus> in the cookbook
[18:01:37] <fraterlaetus> you'd add a dynamic entry to the config
[18:01:51] <moleWork> pretty sure isMaster is just a read only attribute
[18:02:13] <moleWork> it's a result of the instance being voted into primary by the cluster
[18:02:20] <fraterlaetus> I'm saying make an attibute
[18:02:22] <fraterlaetus> a new one
[18:02:36] <fraterlaetus> and define it in your .erb template
[18:02:39] <moleWork> i don't think mongodb allows you set the master other than implicitly through priorities
[18:03:11] <moleWork> all the nodes get together and vote who's gonna be primary
[18:04:04] <fraterlaetus> then, you'd use it like <%= node['is_master'] %>"
[18:04:10] <fraterlaetus> ahh
[18:04:50] <moleWork> _dougg, my 3 node replicaset came up no problem
[18:04:55] <_dougg> i thought the priority let you set the primary
[18:05:02] <moleWork> it does
[18:05:10] <_dougg> moleWork: *sigh*
[18:05:22] <_dougg> my third node fails every time
[18:05:39] <moleWork> i feel your pain
[18:05:46] <moleWork> i don't want to be working on this at all
[18:05:51] <moleWork> but can't get past all the issues
[18:05:57] <moleWork> chef is a pain in my behind
[18:06:04] <moleWork> and as much as i read about it
[18:06:06] <moleWork> it doesn't help
[18:06:15] <_dougg> even if it works, this makes me nervous. I don't want to manage a prod db cluster with a flaky chef cookbook
[18:06:19] <moleWork> documention is awful
[18:06:44] <moleWork> yeah that's why i'm writing mine from scratch, there is too much of a mess of horrible ruby code in this stuff
[18:07:08] <moleWork> but then you need to understand chef... which is a feat in itself
[18:07:10] <_dougg> python would be nice
[18:07:24] <_dougg> i understand chef... I've been knee deep in it for months. it's ruby that I hate
[18:08:11] <moleWork> i don't hate ruby i just wish it didn't exist and that this automation stuff wasn't such a mess
[18:08:25] <moleWork> i'd rather do it in ruby than bash
[18:08:36] <moleWork> chef is my main problem
[18:10:45] <tscanausa> I think your problem with chef is you are using features that were added months to a year ago that no one really uses
[18:11:12] <_dougg> tscanausa: such as?
[18:11:46] <tscanausa> moleWork: said he was using the ebs raid resource. i have never heard of anyone else using it
[18:12:11] <_dougg> nor i
[18:12:23] <moleWork> tscanausa, well i have this raid stuff working on raw ec2
[18:12:35] <moleWork> but when i slightly modified the code it broke
[18:12:45] <moleWork> to work on opsworks
[18:12:52] <moleWork> it ruined my life i guess
[18:13:08] <moleWork> my problem is "i can't figure out why it doesn't work"
[18:13:10] <yutong> do aggregations block?
[18:13:19] <tscanausa> ya. that could be another thing. opscode isn't exactly chef so the code resources may or may not work there.
[18:13:26] <moleWork> right
[18:13:32] <tscanausa> yutong: I dont think so if they do I am screwed
[18:13:41] <yutong> lol
[18:13:51] <yutong> i'm doing an aggregation on a collection with a 10 million documents
[18:13:58] <yutong> if they block i am also royally screwed
[18:13:59] <tscanausa> moleWork: so that is not a chef problem that is an amazon / opscode problem.
[18:14:20] <tscanausa> opswork not opscode.
[18:14:34] <moleWork> well that's what i think it is
[18:15:00] <moleWork> trying to figure out why this might not work.. or if it can work
[18:15:02] <moleWork> https://github.com/opscode-cookbooks/aws/blob/master/providers/ebs_volume.rb
[18:15:54] <yutong> are aggregations mostly IO bound or compute bound?
[18:16:19] <moleWork> see tscanausa it calls node.set
[18:16:34] <moleWork> which should work cause i only need persistence as long as the recipe is running
[18:16:49] <moleWork> https://github.com/opscode-cookbooks/aws/blob/master/providers/ebs_raid.rb
[18:17:07] <moleWork> but there must be something in here that doesn't work with chef-solo
[18:17:21] <tscanausa> yutong: um depends on dataset size. it basically does a table scan.
[18:18:12] <moleWork> and it doesn't look like this is reusable https://github.com/aws/opsworks-cookbooks/blob/release-chef-11.4/ebs/recipes/raids.rb
[18:18:42] <moleWork> it's only for the built in recipes
[18:18:54] <tscanausa> moleWork: i think set is not chef solo capable as that should update the server.
[18:19:08] <moleWork> well, it still works within that "run"
[18:19:20] <moleWork> there's something else in here that doesn't work (i think)
[18:20:13] <moleWork> but you are pretty much right on this issue... "this cookbook does not work on opsworks"
[18:20:19] <moleWork> but for the life of me i can't figure out why
[18:20:38] <tscanausa> you can ask chef. or open a ticket in jura
[18:20:40] <tscanausa> jira*
[18:20:50] <moleWork> could
[18:21:07] <moleWork> except i really don't understand enough to probably ask the right questions
[18:21:45] <tscanausa> "Does aws/recipes/ebs_raid" work on opsworks?
[18:22:37] <moleWork> where are you looking for that?
[18:22:57] <tscanausa> what do you mean?
[18:23:08] <moleWork> https://github.com/opscode-cookbooks/aws/tree/master/recipes
[18:23:17] <moleWork> i don't see the file you referenced
[18:23:33] <tscanausa> sorry swap recipe for provider
[18:23:39] <moleWork> right
[18:23:55] <moleWork> no, that's what i'm doing
[18:24:00] <moleWork> but it doesn't work
[18:24:01] <moleWork> keep getting
[18:24:25] <moleWork> [2014-06-06T16:17:19+00:00] ERROR: aws_ebs_volume[sdi1] (/var/lib/aws/opsworks/cache/cookbooks/aws/providers/ebs_raid.rb line 339) had an error: RuntimeError: Volume no longer exists
[18:24:25] <tscanausa> I make the statement that a lot of the mongo cookbooks out there are crazy complicated because of mongodb's system architecture.
[18:24:36] <moleWork> yeah this has nothing to do with mongodb yet
[18:24:51] <moleWork> just trying to bring up a raid on opsworks through chef config and not through the web ui
[18:25:00] <tscanausa> moleWork: I am saying go into the #chef irc and ask "Does aws/provider/ebs_raid work on opsworks?"
[18:25:07] <moleWork> oh ic
[18:25:25] <tscanausa> there might even be an opscode channel
[18:25:34] <tscanausa> opsworks channel
[18:25:41] <tscanausa> damn opscode vs opsworks
[18:26:31] <moleWork> lol
[18:26:33] <moleWork> yeah it's a mess
[18:26:47] <moleWork> looks so promising, but when you get into it... it's a nightmare
[18:27:22] <moleWork> ah well, thx
[18:27:35] <moleWork> i think i'm just gonna give up on automating the raid on opsworks with this recipe
[18:27:40] <moleWork> and reevaluate my place in life
[18:28:25] <moleWork> it'd be nice to understand why it doesn't work though cause, after staring at this for almost 3 days now i can't figure it out
[18:28:36] <tscanausa> I know that feeling right now aws php sdk is not working and I cant find out why
[18:28:53] <tscanausa> did you ask #chef?
[18:39:47] <moleWork> yeah it's not ontopic... random convos.. question didn't pique anyone's interest it looks like
[18:40:54] <moleWork> i'll figure it out eventually but i've already wasted too much time trying to save some time
[18:40:59] <moleWork> gonna have to drop it i guess
[18:49:39] <_dougg> moleWork: are you trying to use chef to configure local EBS volumes on boot?
[18:49:54] <moleWork> well on "setup"
[18:50:02] <moleWork> and i have it working with chef-server
[18:50:05] <moleWork> works great
[18:50:09] <moleWork> but not on opsworks
[18:51:56] <moleWork> something in those cookbooks must rely on some "passing of state" that simply can't work with chef-solo
[18:52:18] <moleWork> well, at least that's my hunch as tscanausa basically pointed out
[18:55:01] <moleWork> one thing i actually should have checked is whether the source for the aws cookbook in the cache matches that of the one on the opscode-cookbook repo
[18:56:00] <tscanausa> moleWork: ya that's one of the fun things: my infrastructure runs on 3 clouds so I have to stay away from the proprietary stuff.
[18:56:20] <_dougg> moleWork: i wouldn't do it that way. unless you're careful, chef could blow away your disks. I know from experience. I'd write a script that runs on boot and never runs again
[18:56:38] <moleWork> yeah, i mean i'm fully willing to roll custom if my boss gives me the go-ahead but i don't want to support something i don't have to if something exists that "just works"
[18:57:16] <moleWork> _dougg, well, i only think "setup" runs once and the cookbook checks to see if the volume already exists
[18:57:39] <moleWork> and technically nodes should be ephemeral so if it blows out my disks... it'd be a good test to see how i handle it
[18:57:50] <_dougg> moleWork: you sure setup won't execute every time chef runs?
[18:57:56] <_dougg> kk
[18:57:57] <moleWork> yes
[18:58:17] <moleWork> http://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-events.html
[18:58:32] <moleWork> it basically does what you say... runs once on first boot
[18:59:11] <moleWork> the configure event is the one that bugs me cause i'm worried it'll force the mongodb primary to drop all connections on a reconfigure in some situations
[18:59:14] <moleWork> but not sure yet
[19:18:55] <Viesti> yay, got the dictionary key replacing up to speed with python
[19:23:45] <Viesti> maybe in an odd way, using reduce-kv for just iteration, and doing mutation while traversing the transient map: http://pastebin.com/EA2yhtcD
[19:30:55] <_dougg> if I can't get mongo automated soon i should start looking for a new job. :(
[19:32:14] <cheeser> luckily a job search *can* be automated.
[19:33:32] <_dougg> cheeser: gee thanks
[19:33:36] <cheeser> :D
[19:33:38] <_dougg> :(
[19:33:48] <cheeser> trying to find that silver lining...
[19:40:14] <modcure> I created a sharded cluster and a database with some collections. i restarted the server and started the sharded cluster, and the data is gone.. any thoughts?
[19:49:54] <_dougg> This edelight mongo cookbook is such a POS. If I run chef twice on the first shard node, it becomes the primary.
[20:04:24] <Viesti> oops, wrong channel..
[20:06:53] <q85> Upgrading to 2.4 here. We have our config servers running on the same host as our replica sets. The upgrade instructions state you should upgrade the config servers first then the mongod instances. My question is can I replace the mongod binary while mongod is still running?
[20:08:31] <cheeser> on anything but windows, i'd expect that answer to be yes.
[20:09:05] <q85> cheeser: thanks for the reply. We are running linux though. I should have mentioned that.
[20:14:12] <cheeser> should be fine, then.
[20:22:44] <q85> cheeser: Do you know if yum remove will stop the mongod process?
[20:25:48] <cheeser> i don't
[20:28:41] <q85> cheeser: thank you for your help. I really appreciate it.
[20:43:01] <ehershey> shoot I missed that
[20:43:10] <ehershey> I'm pretty sure yum remove will not stop the mongod process
[20:55:12] <_dougg> Might as well ask again... is there a way to deploy an entire mongo cluster in an automated fashion?
[20:59:31] <_dougg> Is there a way to deploy an entire mongo cluster in an automated fashion?
[21:11:13] <ehershey> _dougg: https://mongodb-documentation.readthedocs.org/en/latest/ecosystem/tutorial/automate-deployment-with-cloudformation.html
[21:11:35] <ehershey> I'm not sure how up to date or complete that is
[21:11:47] <ehershey> it may just be a good starting point
[21:12:53] <moleWork> _dougg, how's it goin
[21:13:51] <NodeJS___> Could anybody help resolve the problem I'm facing in MongoDB? http://stackoverflow.com/questions/24089031/mongodb-how-to-make-ordering-of-documents-but-not-array
[21:23:54] <fraterlaetus> what's wrong with this query?
[21:23:56] <fraterlaetus> mongodump -d databasename -c collection -q "{ created_at: { \$gte: new Date('06/01/2014') }}"
[21:27:44] <ehershey> fraterlaetus: I don't think you can create a date object in your query like that
[21:31:58] <_dougg> ehershey: looking
[21:33:01] <_dougg> ehershey: thanks.... before I read all that, how does it find nodes?
[21:35:01] <_dougg> ehershey: that doc talks about installing a single node I think.... I'm trying to deploy an entire cluster
[21:37:08] <_dougg> and the links to the templates are broken!
[21:37:19] <amcrn> _dougg: http://docs.mongodb.org/ecosystem/_images/MongoDB_ReplicaSetStack.template
[21:38:15] <fraterlaetus> ehershey: thanks!
[21:41:04] <_dougg> amcrn: thanks... that also links to s3 locations I can't get to
[21:42:31] <moleWork> _dougg, this is totally working for me... except i slightly modified it to pass in rs_nodes instead of querying it.. and i'm not doing any of the sharding... just replicaset but i can bring up 1,2,3 doesn't matter... so far it has "just worked" https://github.com/netinlet/chef-mongodb_replicaset/blob/master/mongodb/definitions/mongodb.rb <---
[21:43:14] <fraterlaetus> hrm. so assuming "created_at" is an ISODate, why would -q "{ created_at: { \$gte: ISODate('2013-09-10 21:50:05.844Z') }}" fail to parse?
[21:43:31] <moleWork> missing the T
[21:43:36] <moleWork> between the date and time
[21:43:45] <moleWork> i think
[21:43:51] <fraterlaetus> I tried that already.
[21:43:55] <fraterlaetus> :(
[21:44:11] <_dougg> moleWork: worth a try...
[21:44:21] <fraterlaetus> mongodump -d db -c collection -q "{ created_at: { \$gte: ISODate('2013-09-10T21:50:05.844Z') }}"
[21:44:22] <_dougg> moleWork: the cloudformation approach is somewhat appealing
[21:44:26] <moleWork> i've been banging on this all day and i only slightly modified it
[21:44:37] <fraterlaetus> assertion: 16619 code FailedToParse: FailedToParse: Bad characters in value: offset:21
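The parse failure is the -q parser rather than the query itself: the pre-3.0 dump tools don't understand shell helpers like ISODate() or new Date('06/01/2014'). A hedged workaround sketch (the accepted syntax varies by tool version, so test against your mongodump): compute the cutoff as epoch milliseconds in the shell, then pass it as Extended JSON.

```javascript
// In the mongo shell, get the cutoff as epoch milliseconds:
new Date("2013-09-10T21:50:05.844Z").getTime();   // 1378849805844

// Then, roughly, on the command line (exact form depends on the tool version):
//   mongodump -d databasename -c collection \
//     -q '{ "created_at": { "$gte": { "$date": 1378849805844 } } }'
// If that form is rejected too, verify the cutoff itself in the shell first with
// db.collection.find({ created_at: { $gte: new Date(1378849805844) } }).count()
```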
[21:44:38] <moleWork> yeah cloudformation will bring up a mongodb cluster no problem
[21:44:48] <moleWork> once
[21:45:08] <moleWork> but in opsworks i can just click and add nodes to layers
[21:45:11] <_dougg> yah, i was thinking that
[21:45:13] <moleWork> and they become secondaries magically
[21:45:54] <moleWork> https://github.com/netinlet/chef-mongodb_replicaset/blob/master/mongodb/recipes/opsworks_replicaset.rb
[21:45:59] <_dougg> moleWork: I'll give your code a try... stay tuned... thans
[21:46:05] <moleWork> i also modified this cause i didn't like that he was using the replicaset name as the layer name
[21:46:08] <moleWork> this isn't my code
[21:46:18] <moleWork> i've just been working on your exact problem for 2 weeks now
[21:46:24] <moleWork> this is where i'm at
[21:46:29] <_dougg> (you can update stacks tho...)
[21:46:41] <moleWork> i don't know what you mean
[21:47:38] <_dougg> moleWork: if you use cloud formation, you can change the template, and call update() and it .... should.... work....
[21:47:51] <moleWork> ah /me shrugs
[21:48:00] <moleWork> haven't gotten into cloud formation yet
[21:48:08] <moleWork> i started at elastic beanstalk + hosted mongo
[21:48:23] <moleWork> which is what i'm currently using, but now researching how i could run my own clusters
[21:49:14] <moleWork> opsworks seemed appealing but it's killed me for over a week now because of small gotchas all over the place
[21:49:19] <moleWork> but "it works"
[21:51:36] <moleWork> one thing that's cool is that i can add a node to my mongodb replicaset and then the app can magically be reconfigured and reseeded with the new hosts of the replicaset
[21:52:04] <moleWork> i wasn't too sure how i was gonna have my app "discover" the new database servers before
[21:52:12] <moleWork> and this solves that problem
[21:53:05] <_dougg> moleWork: your code... epic fail ... "uninitialized constant Chef::ResourceDefinitionList::OpsWorksHelper"
[21:53:23] <moleWork> which line?
[21:53:58] <_dougg> like... chef ever makes that easy?
[21:55:30] <moleWork> okay... i don't know much about chef but .... https://github.com/netinlet/chef-mongodb_replicaset/blob/master/mongodb/libraries/opsworks_helper.rb i changed "class Chef::ResourceDefinitionList::OpsWorksHelper" to "module OpsworksHelper" then the https://github.com/netinlet/chef-mongodb_replicaset/blob/master/mongodb/recipes/opsworks_replicaset.rb line ::Chef::Recipe.send(:include, MongoDB::OpsWorksHelper) i changed to "::Chef::Recipe.send(:include, Opsworks)"
[21:56:19] <moleWork> not sure why people define their libraries as ResourceDefinitionLists.... but i'm not a ruby expert... it looks like they are sorta doing a HWRP as a library but i don't know what a ResourceDefinitionList is
[21:56:35] <moleWork> trying to figure it out i think requires reading chef source code
[21:56:43] <moleWork> or some how knowing more about ruby than i do
[21:57:43] <moleWork> the key is... the actual mongodb resource definition logic in this seemed to work for me... once i got past all of the nuances of the rest of the recipe
[23:09:34] <_dougg> hm