[01:25:26] <dman777_alter> hi, anyone use mongoose in here?
[01:25:31] <dman777_alter> http://bpaste.net/show/4I7LDFQa3kKxbcP5kzrn/ I'm new to mongoose and mongodb. I have a static field metrics : [ { duration : "15m"... When writing the function that creates the document, should I also include it as I have in the paste?
[02:17:07] <dman777_alter> http://bpaste.net/show/kAcBu9SK1kDZDxM1jku3/ ok, my first attempt to save a document. but it's not saving :(
[02:24:49] <dman777_alter> joannac: no, because cpu.save() is not running. I also went into the mongo shell and db.cpu.find() didn't return anything
[02:25:19] <dman777_alter> I don't know why cpu.save() will not run
[02:28:35] <dman777_alter> it's cool...going to take a break. thanks though
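(A common reason save() looks like it "will not run" is a swallowed error: mongoose reports validation failures only through the callback, and with no callback check nothing is written and nothing is printed. A minimal sketch of the pattern, using a stub in place of a real mongoose document so it is self-contained; the field name in the error is made up:)

```javascript
// Sketch: always check save()'s error argument; validation failures are
// otherwise silent and nothing reaches the collection.
function saveWithLogging(doc, done) {
  doc.save(function (err, saved) {
    if (err) {
      console.error('save failed:', err);
      return done(err);
    }
    done(null, saved);
  });
}

// Stub standing in for a mongoose document, just to show the flow:
var stubDoc = {
  save: function (cb) { cb(new Error('ValidationError: duration required')); }
};
saveWithLogging(stubDoc, function (err) { /* err was logged above */ });
```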
[04:08:57] <Cache_Money> I'm on a MBP. I usually start the Mongo daemon using $ mongod. But now I'm getting this error: ERROR: dbpath (/data/db) does not exist.
[04:09:17] <Cache_Money> how would I find out where my mongodb data directory lives?
[04:11:44] <cheeser> did you install using homebrew?
[04:12:05] <joannac> if you know a database name, you could try look for "dbname.ns"?
[04:25:23] <Cache_Money> I ended up just creating the directory
[04:25:44] <Cache_Money> I know I used to have other collections in my mongodb before but oh well...
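(For the record: mongod defaults its dbpath to /data/db and refuses to start when that directory is missing, so the options are to create it or point mongod elsewhere via a config file. Homebrew installs typically keep data under /usr/local/var/mongodb with a config at /usr/local/etc/mongod.conf; the paths below are those typical locations, not guaranteed. A sketch of a pre-2.6 ini-style config:)

```ini
# mongod.conf (ini style used through 2.4) -- paths are assumptions, adjust
dbpath = /usr/local/var/mongodb
logpath = /usr/local/var/log/mongodb/mongo.log
fork = true
```

Start it with `mongod -f /usr/local/etc/mongod.conf` and the existing data directory is picked up, which would also have recovered the old collections.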
[07:26:01] <gancl> Hi! How do I update or insert a subdocument's array? http://stackoverflow.com/questions/24075910/mongoose-cant-update-or-insert-subdocuments-array
[07:27:22] <narutimateum> haha..same as my question
[07:51:50] <narutimateum> oh well.. i guess i have to use push and pull...
[07:52:27] <narutimateum> 2nd question... do i put the reference in a subdocument like this "_id" : ObjectId("538ef097576f1f7d505c2ebc"), or "_id" : "538ef097576f1f7d505c2ebc",
[08:06:24] <narutimateum> so i have a problem where i don't actually know how to get this objectId type registered into mongo
[08:06:36] <narutimateum> im using mongodb eloquent query builder kind of stuff
[09:21:53] <djlee> Would i be right assuming that if i wanted to scale mongodb horizontally on AWS EC2, i would have to have my application code write to the master mongo/ec2, and place the slaves behind a load balancer and have my application do reads from the load balancer
[09:29:54] <kali> djlee: this behaviour is more or less built in the mongodb clients
[09:35:07] <djlee> kali: Ah, just noticed primary and secondary replica set members in the PHP MongoClient class, must be what you are referring to. Seems to only relate to reads rather than writes, but i'll ask google, as that must be the stuff i need to look at
[09:35:55] <kali> djlee: writes always go to the primary.
[09:36:45] <kali> djlee: you should not have to interact with these classes anyway, but use the ReadPreferences system instead: you can specify at client, database, collection or even query scope where you want reads to go
[09:37:17] <kali> djlee: and DON'T try putting a replica set behind a tcp load balancer. it will not work
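(kali's point can be pictured with a toy routing function: the driver itself decides which replica set member serves each read based on the read preference, which is why a TCP load balancer in front of the set only gets in the way. This is illustrative logic with invented hostnames, not real driver internals:)

```javascript
// Toy model of driver-side read routing (grossly simplified vs a real client).
function routeRead(pref, members) {
  var primary = members.filter(function (m) { return m.primary; })[0];
  var secondaries = members.filter(function (m) { return !m.primary; });
  if (pref === 'primary' || secondaries.length === 0) return primary;
  if (pref === 'secondary' || pref === 'secondaryPreferred') return secondaries[0];
  return members[0]; // 'nearest' and friends, ignored here
}

var rs = [
  { host: 'db1:27017', primary: true },
  { host: 'db2:27017', primary: false }
];
routeRead('secondaryPreferred', rs).host; // 'db2:27017'
```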
[09:38:32] <_boot> if I start updating shards/replica set members to 2.6 from 2.4.9, are there likely to be any problems?
[09:40:18] <djlee> kali: ah right, load balancer idea is out the window then. No doubt i need some sort of sentinel/manager, which monitors slaves and i can use to query and return a suitable slave IP (dont worry, i'll happily google it, clearly got a lot to learn quickly :P)
[09:41:40] <djlee> kali: i'll read the mongodb docs on replication etc. clearly they've implemented most of the stuff i thought i'd have to manually manage!
[09:41:47] <kali> djlee: the only thing you need is some kind of out-of-band monitoring to allow you to know that your replica is degraded
[09:42:04] <kali> djlee: mongodb is a 21st century database :)
[09:43:18] <djlee> kali: yeah im still new to the nosql world. Spent my entire career on single MySQL hosts. Now i've got an api, mongo and redis instance that all need to scale well, in the cloud, by next friday, and be load tested (god i love client deadlines)
[09:43:41] <djlee> kali: i'll grab a coffee and get reading :D thanks for your help!
[09:50:59] <Nodex> pretty cool, aside from the java requirement
[09:51:33] <kali> Nodex: you mean "java" or "java 8" ?
[09:51:46] <Nodex> strangely I employ a similar tactic for Mongo to solr mappings and de-normalisation
[09:52:29] <Nodex> all my mappings are simply json objects that are run through a parser that cleans / maps / parses things to build an eventual object for solr/mongo
[09:53:48] <Nodex> This work is free. You can redistribute it and/or modify it under the terms of the Do What The Fuck You Want To Public License, Version 2, as published by Sam Hocevar. See the COPYING file for more details.
[10:58:30] <rainerfrey> I get errors on the mms support form, so I ask here: what's up with MMS UI requesting to set up two-factor auth? I used to be able to access my MMS account without, but all links are redirecting to the setup form ...
[12:27:08] <gancl> Hi! How do I get mongoose to update or insert a subdocument's array? http://stackoverflow.com/questions/24075910/mongoose-cant-update-or-insert-subdocuments-array
[12:47:20] <Nodex> sub documents can be accessed with dot notation
[13:07:44] <gancl> nemux: I know it. But it's in an array; it returns all the items in the array, and I just want to modify one item
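(The positional `$` operator is the usual answer here: match the array element in the query, then address it as `field.$` in the update. A mongo-shell-style sketch; the collection and field names are invented, not gancl's schema:)

```javascript
// Match the element inside the array in the query, then $set through the
// positional $ placeholder -- only the matched element is modified.
var query  = { _id: 1, 'comments.author': 'alice' };
var update = { $set: { 'comments.$.text': 'edited' } };
// In the shell, against a live db:  db.posts.update(query, update)
```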
[13:26:46] <dhm_edx> gridfs and I have been fighting for 2 days. I issue queries which take hours but show almost no activity other than locks.
[13:27:11] <dhm_edx> example query: var extra = db.fs.chunks.find({n: {$gt: 0}}); extra.length();
[13:28:26] <dhm_edx> what I'm trying to accomplish is that we changed the key order in our subdocs we use for files_id and I'm trying to find the chunks with the intermittently screwed up order and fix their files_id
[13:29:14] <kali> dhm_edx: an index on "n" should help this query. also try using count({..}) instead of find({ ...}).length
[13:29:59] <dhm_edx> i'm going to iterate over extra. I just wanted to see how many records I'm dealing w/ as they tell my users how many images are broken in current courses
[13:30:01] <kali> dhm_edx: but don't expect a miracle. gridfs schema was not designed for that
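(kali's two suggestions spelled out; fs.chunks is the standard GridFS chunk collection, and the commented lines need a live mongod to run:)

```javascript
// Build an index on n so the $gt scan doesn't walk every chunk, and let
// the server count matches instead of shipping whole documents back.
var query = { n: { $gt: 0 } };
// db.fs.chunks.ensureIndex({ n: 1 });   // 2.4/2.6-era index syntax
// db.fs.chunks.count(query);            // server-side count, no cursor iteration
```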
[13:35:58] <dhm_edx> my mongo monitoring console shows ~ 100% lock ratio but only about 3 command/sec and about 1 write/sec (I have another query updating 284 of the entries)
[13:36:36] <dhm_edx> that query is : for (var i = 0; i < found.length; i++) {
[16:19:20] <_dougg> i'm still waiting on finding out if there's a way to easily automate deployment of a mongo cluster.
[16:26:45] <federated_life1> _dougg: it's pretty easy already , there should be an announcement at mongo world with some automation tools similar to MMS
[16:40:51] <achiang> hello, i have successfully configured a collection to have a 2dsphere index, and geospatial queries such as geoNear already work. question is -- is there a "normal" search to see if the collection already contains a coordinate pair?
[16:41:31] <achiang> use case is preventing duplicate data from being inserted, so search collection for the coordinates first, if they exist, then do not insert
[16:47:56] <heewa> another approach is to insert, check for dupe & remove yours. easier in some cases, if you don’t mind having dupes for a short period. (like in complex, multi-document, multi-collection scenarios)
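(For achiang's use case, an exact-match lookup is the "normal" search: query on the stored coordinate array itself rather than with a geo operator. Sketch below; the `loc` field name and GeoJSON shape are assumptions about the schema. Note that find-then-insert races under concurrent writers, which is what heewa's insert-then-dedupe approach sidesteps:)

```javascript
// Look for the exact coordinate pair before inserting.
var point = { type: 'Point', coordinates: [-73.97, 40.77] };
var query = { 'loc.coordinates': point.coordinates };  // exact array equality
// In the shell, against a live db:
//   if (db.places.findOne(query) === null) db.places.insert({ loc: point });
```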
[16:52:21] <NodeJS_> dhm_edx: Let's say I have the following documents in collection: {_id: ObjectId(1), title: ''}, {_id: ObjectId(2), title: ''}, {_id: ObjectId(3), title: ''}, how can I get it in specific order?
[16:52:58] <_dougg> Is there a good way to deploy an entire cluster? The community chef cookbook designed for the task is pretty flaky. Enterprise app... no way to automagically deploy?
[16:53:29] <dhm_edx> ah, by definition, order of persistence for separate objects does not guarantee order of retrieval; however, since ObjectId encodes the time stamp, you can sort by the implied time stamp I believe
[16:54:48] <NodeJS_> dhm_edx: in my case I need to insert new documents in specific position
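(On dhm_edx's point about the embedded timestamp: the first 4 bytes of an ObjectId are seconds since the epoch, so sorting on `_id` approximates insertion order. A self-contained sketch of decoding it from the hex string, using the ObjectId pasted earlier in the channel:)

```javascript
// The leading 8 hex chars of an ObjectId encode seconds since the epoch.
function objectIdTimestamp(hexId) {
  return new Date(parseInt(hexId.substring(0, 8), 16) * 1000);
}

// The ObjectId from narutimateum's question decodes to a June 2014 date:
objectIdTimestamp('538ef097576f1f7d505c2ebc').toISOString();
```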
[17:01:50] <moleWork> _dougg, i'm currently working on this problem myself
[17:02:32] <moleWork> this cookbook is based on the old community cookbook i think
[17:02:57] <moleWork> but i'm spinning up mongodb replicasets with it on opsworks http://netinlet.com/blog/2014/01/18/setting-up-a-mongodb-replicaset-with-aws-opsworks/
[17:05:34] <moleWork> i also have a similar vagrant setup so i can spin up dev clusters that closely match production cluster on aws
[17:09:20] <og01_> Hi, i've come across something that i don't know how to do, i've pastebined an example: http://pastebin.com/BTQPgBvB
[17:09:20] <_dougg> moleWork: you aren't having problems with it? I can reproducibly bring up 2 replicas in a shard, but bringing up the third... chef fails
[17:09:47] <_dougg> moleWork: i have to add the third node manually in the primary replica, and then go back and run chef
[17:09:57] <moleWork> i don't have any shards sorry.... just replicaset.... i do have a 3 node replicaset no problem though
[17:10:02] <moleWork> and i can just add a node and it will join
[17:10:04] <og01_> I want to aggregate on the max time of one field while pulling out another field from that max
[17:10:37] <_dougg> yah *sigh* sharding makes life complicated
[17:10:48] <moleWork> well i'd guess it's a timing thing
[17:10:53] <og01_> please take a look at my paste, I might not be explaining my question well, i think the paste makes it clearer
[17:13:10] <tscanausa> _dougg: I actually rebuilt the chef cookbooks for my needs for both replica sets and shards.
[17:13:12] <_dougg> wait, I haven't enabled auth yet
[17:13:31] <_dougg> tscanausa: don't suppose you contributed that back?
[17:14:04] <moleWork> there's a million forks of these mongodb cookbooks
[17:14:20] <_dougg> moleWork: none contributed back that I've seen
[17:14:21] <tscanausa> _dougg: unfortunately no. i just started from scratch.
[17:14:25] <moleWork> and chef seems pretty complicated, so many subtleties to timing and how it works
[17:15:00] <_dougg> well I'm open to alternatives. Mongo is supposed to be an enterprise app. An enterprise app should have a way to automate deployment of a cluster
[17:15:16] <moleWork> well there's the main edelight one... then the parsops one https://github.com/ParsePlatform/Ops/tree/master/chef/cookbooks/mongodb
[17:25:16] <_dougg> I think the problem with the Edelight cookbook is that it works with 2 nodes because the first one has nothing to do, the second one becomes the primary, and the third one... well I just think they never tested that
[17:27:35] <og01_> Im sure what im trying to do must be possible, can someone take a look at my pastebin example? http://pastebin.com/BTQPgBvB
[17:27:50] <moleWork> https://github.com/edelight/chef-mongodb/blob/master/definitions/mongodb.rb i would try to debug whether or not your query on lines 204-210 brings you back all three nodes
[17:28:12] <og01_> im trying to think of ways to project the data, or use multiple stages in the aggregation pipeline, but i can't work it out
[17:28:25] <_dougg> moleWork: reading the docs on mongodb. even that says you have to add nodes via the primary. That's never going to work with chef.
[17:28:49] <moleWork> yeah the primary has to be up for sure
[17:29:05] <_dougg> moleWork: you add the third node, and bring it up. Chef fails because it's trying to run rs.add on a non primary node. You have to go back to the primary and run the add by hand and then chef will run ok on the third node.
[17:29:24] <_dougg> moleWork: it works on the second node because it makes that the primary
[17:29:40] <_dougg> (because there isn't a primary yet)
[17:35:16] <tscanausa> this is the ruby code I use to manage replica set https://gist.github.com/anonymous/1910558ada1354d101c9
[17:35:42] <moleWork> https://github.com/netinlet/chef-mongodb_replicaset/blob/master/mongodb/libraries/mongodb.rb if you look at line 138 onwards in this file
[17:38:04] <dhm_edx> NodeJS_ once again, afaik, position is a nonsensical notion wrt dbs; however, if you have the notion of an order, then just add a field to represent that order and give it the serial # you want.
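(The "add a field to represent that order" idea, as a pure-JS sketch of the renumbering you would do when inserting at a position; persisting the shifted values back to mongo, e.g. with `$inc` on the later documents, is left out:)

```javascript
// Keep an explicit pos field and sort on it; inserting at position k
// means bumping every pos >= k first.
function insertAt(docs, newDoc, k) {
  docs.forEach(function (d) { if (d.pos >= k) d.pos += 1; });
  newDoc.pos = k;
  docs.push(newDoc);
  return docs.sort(function (a, b) { return a.pos - b.pos; });
}

var list = [{ title: 'a', pos: 0 }, { title: 'c', pos: 1 }];
insertAt(list, { title: 'b' }, 1); // order becomes a, b, c
```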
[17:40:09] <fschuindt> hey guys, is there a way to set a default mongo config file? I always need to type $ mongod --config-file ~/mongo/mongodb.conf I want to just type mongod then go
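(mongod reads no personal config file automatically on a manual invocation, and the flag is `--config`/`-f` rather than `--config-file`. A shell alias is one low-tech way to get the "just type mongod" behaviour; the path below is fschuindt's, assumed to exist:)

```shell
# Make a bare `mongod` pick up a personal config file.
alias mongod='mongod --config ~/mongo/mongodb.conf'
```

Put the alias in `~/.bashrc` (or your shell's equivalent) to make it stick across sessions.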
[17:40:11] <moleWork> and depends if it's a member or not
[17:40:57] <moleWork> when you connect to it as a replicaset, even with a partial host list, i believe it seeds the connection so you get to the primary (i think)
[17:41:21] <_dougg> when I connect with the same options via the cli shell, I get local, not primary
[17:44:40] <moleWork> yeah this code is big and ugly with huge else branches but if you read the code starting with " # remove removed members from the replicaset and add the new ones" you can see how it reconfigures the replicaset starting with the old members that already exist
[17:45:51] <_dougg> moleWork: added some more debug. looks like it IS connecting to ... wait...
[17:46:18] <_dougg> oh shit. it's connecting to the first node, which is a secondary, not the primary. WTF?
[17:48:12] <moleWork> if you bring up the nodes with explicit priorities you can then know exactly which one will come up as primary
[17:49:00] <_dougg> moleWork: looks like the cookbook lets you set a priority.
[17:50:31] <tscanausa> you only need priority if you are having replica sets outside of your normal "cluster"
[17:50:34] <moleWork> if you need to do any thing it basically has to be done from primary so setting priorities makes it so you don't have to guess at which one is primary
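(Priorities live in the replica set config document; the healthy member with the highest priority wins elections, which is what makes moleWork's "know which one will come up as primary" work. A mongo-shell sketch with invented hostnames:)

```javascript
// Replica set config with a pinned primary; feed it to rs.initiate(), or
// to rs.reconfig() on the current primary of an existing set.
var cfg = {
  _id: 'rs0',
  members: [
    { _id: 0, host: 'db1:27017', priority: 2 },  // preferred primary
    { _id: 1, host: 'db2:27017', priority: 1 },
    { _id: 2, host: 'db3:27017', priority: 1 }
  ]
};
// In the shell, against a live replica set:  rs.initiate(cfg)
```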
[17:50:51] <fraterlaetus> Hi. Is it possible to use mongodump or a similar utility to dump only the last 24 hours of data? I'm trying to reduce the size of the dataset I use for testing.
[17:51:46] <moleWork> fraterlaetus, mongodump has a query parameter
[17:51:52] <_dougg> tscanausa: unless you're using a broken cookbook
[18:24:25] <moleWork> [2014-06-06T16:17:19+00:00] ERROR: aws_ebs_volume[sdi1] (/var/lib/aws/opsworks/cache/cookbooks/aws/providers/ebs_raid.rb line 339) had an error: RuntimeError: Volume no longer exists
[18:24:25] <tscanausa> I maintain that a lot of the mongo cookbooks out there are crazy complicated because of mongodb's system architecture.
[18:24:36] <moleWork> yeah this has nothing to do with mongodb yet
[18:24:51] <moleWork> just trying to bring up a raid on opsworks through chef config and not through the web ui
[18:25:00] <tscanausa> moleWork: I am saying go into the #chef irc and ask "Does aws/provider/ebs_raid work on opsworks?"
[18:51:56] <moleWork> something in those cookbooks must rely on some "passing of state" that simply can't work with chef-solo
[18:52:18] <moleWork> well, at least that's my hunch as tscanausa basically pointed out
[18:55:01] <moleWork> one thing i actually should have checked is whether the source for the aws cookbook in the cache matches the one in the opscode-cookbook repo
[18:56:00] <tscanausa> moleWork: ya that's one of the fun things: my infrastructure runs on 3 clouds so I have to stay away from the proprietary stuff.
[18:56:20] <_dougg> moleWork: i wouldn't do it that way. unless you're careful, chef could blow away your disks. I know from experience. I'd write a script that runs on boot and never runs again
[18:56:38] <moleWork> yeah, i mean i'm fully willing to roll custom if my boss gives me the goahead but i don't want to support something i don't have to if something exists that "just works"
[18:57:16] <moleWork> _dougg, well, i only think "setup" runs once and the cookbook checks to see if the volume already exists
[18:57:39] <moleWork> and technically nodes should be ephemeral so if it blows out my disks... it'd be a good test to see how i handle it
[18:57:50] <_dougg> moleWork: you sure setup won't execute every time chef runs?
[18:58:32] <moleWork> it basically does what you say... runs once on first boot
[18:59:11] <moleWork> the configure event is the one that bugs me cause i'm worried it'll force the mongodb primary to drop all connections on a reconfigure in some situations
[19:18:55] <Viesti> yay, got the dictionary key replacing up to speed with python
[19:23:45] <Viesti> maybe in an odd way, using reduce-kv for just iteration, and doing mutation while traversing the transient map: http://pastebin.com/EA2yhtcD
[19:30:55] <_dougg> if I can't get mongo automated soon i should start looking for a new job. :(
[19:32:14] <cheeser> luckily a job search *can* be automated.
[19:33:48] <cheeser> trying to find that silver lining...
[19:40:14] <modcure> I created a sharded cluster and a database with some collections. i restarted the server and started the sharded cluster the data is gone.. any thoughts?
[19:49:54] <_dougg> This edelight mongo cookbook is such a POS. If I run chef twice on the first shard node, it becomes the primary.
[20:06:53] <q85> Upgrading to 2.4 here. We have our config servers running on the same host as our replica sets. The upgrade instructions state you should upgrade the config servers first then the mongod instances. My question is can I replace the mongod binary while mongod is still running?
[20:08:31] <cheeser> on anything but windows, i'd expect that answer to be yes.
[20:09:05] <q85> cheeser: thanks for the reply. We are running linux though. I should have mentioned that.
[21:13:51] <NodeJS___> Could anybody help resolve this problem I'm facing in MongoDB? http://stackoverflow.com/questions/24089031/mongodb-how-to-make-ordering-of-documents-but-not-array
[21:23:54] <fraterlaetus> what's wrong with this query?
[21:41:04] <_dougg> amcrn: thanks... that also links to s3 locations I can't get to
[21:42:31] <moleWork> _dougg, this is totally working for me... except i slightly modified it to pass in rs_nodes instead of querying it.. and i'm not doing any of the sharding... just replicaset but i can bring up 1,2,3 doesn't matter... so far it has "just worked" https://github.com/netinlet/chef-mongodb_replicaset/blob/master/mongodb/definitions/mongodb.rb <---
[21:43:14] <fraterlaetus> hrm. so assuming "created_at" is an ISODate, why would -q "{ created_at: { \$gte: ISODate('2013-09-10 21:50:05.844Z') }}" fail to parse?
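(The parse failure is expected: mongodump's `-q` takes plain JSON, and `ISODate(...)` is mongo-shell syntax, not JSON. The extended-JSON `$date` form works instead; depending on the tool version you may need epoch milliseconds rather than an ISO string. A sketch of building the argument:)

```javascript
// ISODate() only exists inside the mongo shell. For mongodump -q,
// express the date in extended JSON:
var query = { created_at: { $gte: { $date: '2013-09-10T21:50:05.844Z' } } };
var arg = JSON.stringify(query);
// then:  mongodump -d mydb -c mycoll -q '<that JSON string>'
arg; // '{"created_at":{"$gte":{"$date":"2013-09-10T21:50:05.844Z"}}}'
```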
[21:51:36] <moleWork> one thing that's cool is that i can add a node to my mongodb replicaset and then the app can magically be reconfigured and reseeded with the new hosts of the replicaset
[21:52:04] <moleWork> i wasn't too sure how i was gonna have my app "discover" the new database servers before
[21:52:12] <moleWork> and this solves that problem
[21:53:58] <_dougg> like... chef ever makes that easy?
[21:55:30] <moleWork> okay... i don't know much about chef but .... https://github.com/netinlet/chef-mongodb_replicaset/blob/master/mongodb/libraries/opsworks_helper.rb i changed "class Chef::ResourceDefinitionList::OpsWorksHelper" to "module OpsworksHelper" then the https://github.com/netinlet/chef-mongodb_replicaset/blob/master/mongodb/recipes/opsworks_replicaset.rb line ::Chef::Recipe.send(:include, MongoDB::OpsWorksHelper) i changed to "::Chef::Recipe.send(:include, Opswo
[21:56:19] <moleWork> not sure why people define their libraries as ResourceDefinitionLists.... but i'm not a ruby expert... it looks like they are sorta doing an HWRP as a library but i don't know what a ResourceDefinitionList is
[21:56:35] <moleWork> trying to figure it out i think requires reading chef source code
[21:56:43] <moleWork> or some how knowing more about ruby than i do
[21:57:43] <moleWork> the key is... the actual mongodb resource definition logic in this seemed to work for me... once i got past all of the nuances of the rest of the recipe