[05:57:33] <aps> Is Ops Manager a paid service or can I install it on my infrastructure?
[06:16:36] <Boomtime> aps: the download of ops manager is for 'evaluation' only and asks for contact info to talk to a sales rep, so no, it's a paid product
[06:18:03] <mbuf> when I restore a replica set into a new node, I still see the old members when I do rs.conf(), even though I start the new instance with a different replica set name in /etc/mongod.conf. How can I forcibly remove the old members?
[06:24:00] <joannac> mbuf: starting a new replica set?
[06:35:49] <mbuf> joannac: you have saved my day, twice
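The actual fix isn't visible in the log; for reference, one standard way to clear a restored node's old replica set configuration is sketched below. The old member list lives in the restored node's local database, so it can be dropped while the node runs without --replSet, after which the new set name takes effect cleanly (host and set names here are placeholders).

    // Start the restored mongod as a standalone (no replSet setting), then in the shell:
    use local
    db.dropDatabase()   // discards the old replica set config (and the old oplog)

    // Restart mongod with the new replSet name from /etc/mongod.conf, then:
    rs.initiate({ _id: "newReplSet", members: [ { _id: 0, host: "node1.example.com:27017" } ] })

On an already-running set, rs.reconfig(cfg, { force: true }) is the forced way to overwrite the member list instead.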
[07:46:25] <aps> How much does ops manager cost? My main use-case is taking backups
[07:48:27] <aps> How much does ops manager cost? My main use-case is taking backups.
[07:48:48] <joannac> aps: I can almost guarantee it's not cheaper than Cloud Manager backups
[07:49:12] <joannac> aps: (if I'm remembering the right conversation we had)
[08:31:18] <aps> sorry for repeated messages, there was a network delay
[08:32:19] <aps> joannac: Even if I host it on my own infrastructure?
[10:43:55] <NoReflex> hello! I have some slow queries so I added a new index to make them faster. Up until now the query used a large index (1.2 GB) so I added a new sparse index (47 MB).
[10:44:36] <NoReflex> I can construct my query so that it uses one index or the other, but the query runs just as slow either way
[10:45:06] <NoReflex> I'm suspecting the new index is not being used so I checked the logs
[10:45:36] <NoReflex> so in the log I can see planSummary: IXSCAN { xti: 1.0, xrt: 1.0 } - this is the new index
[10:45:58] <NoReflex> and planSummary: IXSCAN { ti: 1, mo: 1 } (this is the old index)
[10:52:26] <NoReflex> I'm using MongoDB 3.0.5 on Arch, WT with snappy on a notebook with i7 3517U CPU, 10 GB RAM, HDD 750 GB and the database size reported by show dbs is 10 GB
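The planner's choice (and whether it actually helps) can be checked per query rather than from the log; a sketch below, where the collection name and filter values are placeholders and only the two key patterns come from the log lines above.

    // Ask the planner directly (MongoDB 3.0 shell):
    db.coll.find({ xti: 1, xrt: 1 }).explain("executionStats")

    // Force each index in turn and compare executionTimeMillis,
    // totalKeysExamined and totalDocsExamined against nReturned:
    db.coll.find({ xti: 1, xrt: 1 }).hint({ xti: 1, xrt: 1 }).explain("executionStats")
    db.coll.find({ ti: 1, mo: 1 }).hint({ ti: 1, mo: 1 }).explain("executionStats")

    // Note: hinting a sparse index makes the query skip documents that are
    // missing the indexed fields, so results can differ from the unhinted query.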
[14:29:28] <xcesariox> cheeser: how do i fix this?
[14:38:19] <xcesariox> cheeser: okay i added the function but now it throws this error https://eval.in/416381
[14:39:04] <hypfer> hello, I have a simple aggregate group query and I'd like to multiply the average value by 8760 and then divide it by 1000. here's the current query: http://paste.debian.net/291966/
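Without the paste, the arithmetic itself would normally go in a stage after the $group; a sketch with made-up collection and field names ("readings", "value", "deviceId"), where only the multiply-by-8760-then-divide-by-1000 step comes from the question.

    db.readings.aggregate([
      { $group: { _id: "$deviceId", avgValue: { $avg: "$value" } } },
      // scale the average: (avg * 8760) / 1000
      { $project: { scaled: { $divide: [ { $multiply: [ "$avgValue", 8760 ] }, 1000 ] } } }
    ])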
[14:39:25] <cheeser> xcesariox: yeah, that's a bit beyond my js knowledge...
[14:40:04] <cheeser> xcesariox: i'm pretty sure you can't define a function as a function declaration parameter. you can *invoke* a function like that, though.
[14:50:55] <xcesariox> cheeser: i copied the whole function so that you know what i was trying to do.
[14:51:02] <xcesariox> cheeser: i did define that stuff elsewhere.
[15:11:06] <bogen-work> I'm trying to use amid-rest. I'm able to query and get. When I POST, a new document is created, but the supplied JSON string, while not rejected by the request, is ignored. amid-rest returns the location (an empty document apart from the assigned _id) and waits for the next request.
[15:11:17] <bogen-work> I'll provide a link shortly to a paste of what I'm sending
[17:31:03] <gcfhvjbkn> this is my take, which doesn't work too well because i have to "case" every json type manually, and even then you need to serialize Arrays and Documents in a special manner
[17:31:36] <gcfhvjbkn> you'd think the api would provide a method that builds the whole document tree for you
[17:37:35] <lufi> hi, is there something wrong with my query? http://pastebin.com/dybHbT8Z . I am getting an "invalid operator $dateToString" error
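$dateToString is an aggregation expression (added in 3.0), so it is only valid inside expression contexts such as $project; using it in a find() filter or in the wrong stage is a common cause of an "invalid operator" error. A sketch of valid usage, with "orders" and "createdAt" as placeholder names since the real query is behind the pastebin link:

    db.orders.aggregate([
      { $project: { day: { $dateToString: { format: "%Y-%m-%d", date: "$createdAt" } } } }
    ])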
[18:51:54] <lucha> Is there a way to generate a many-to-many relationship between two collections in mongoose?
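The usual mongoose pattern (a sketch, not necessarily a fit for lucha's schema) is an array of ObjectId refs on each side, resolved with populate(); the model names below are placeholders.

    var mongoose = require('mongoose');
    mongoose.connect('mongodb://localhost/test');

    var StudentSchema = new mongoose.Schema({
      name: String,
      courses: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Course' }]
    });
    var CourseSchema = new mongoose.Schema({
      title: String,
      students: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Student' }]
    });

    var Student = mongoose.model('Student', StudentSchema);
    var Course = mongoose.model('Course', CourseSchema);

    // Resolve the refs on read:
    Student.findOne({ name: 'Ada' }).populate('courses').exec(function (err, student) {
      // student.courses is now an array of full Course documents
    });

Note that the two arrays have to be kept in sync by application code; mongoose does not maintain the inverse side automatically.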
[19:17:03] <Pinkamena_D> I am trying to store a field in a document which contains a mapping from filenames to file hashes. The majority of queries will give the filename of one file and want back the mapping of all filenames to file hashes in that document. Because I cannot use filenames as keys directly, what is the best way to store this data?
[19:17:47] <Pinkamena_D> I can think of many possibilities, such as using an array of documents, or using two parallel arrays.
[19:18:08] <Pinkamena_D> But any suggestions from experienced people?
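One of the options already listed, an array of subdocuments, sketched with placeholder names; filenames cannot be field names (dots are not allowed in keys) but are fine as values, and the embedded name field can be indexed.

    // Each document carries its own filename -> hash mapping:
    db.bundles.insert({
      _id: "bundle-1",
      files: [
        { name: "lib/foo.js", hash: "9a0364b9e99bb480dd25e1f0284c8555" },
        { name: "src/bar.js", hash: "c157a79031e1c40f85931829bc5fc552" }
      ]
    })

    // Index the embedded name so the lookup by a single filename is indexed,
    // then return the whole mapping from the matching document:
    db.bundles.createIndex({ "files.name": 1 })
    db.bundles.findOne({ "files.name": "lib/foo.js" }, { files: 1, _id: 0 })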
[19:40:23] <gcfhvjbkn> ok, another try: how do you represent this structure with java-mongo-driver?
[20:43:11] <jpfarias> is there a way to speed up $all queries on a field that is already indexed?
[20:43:17] <jpfarias> the field is an array of strings
[21:02:13] <happiness_color> keep another (indexed) field with concatenated values?.. :/ nothing better comes to mind
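Before adding a workaround field, it is worth confirming how much of the work the multikey index already does for the $all query; a sketch with placeholder names ("docs", "tags").

    db.docs.createIndex({ tags: 1 })

    // A totalDocsExamined much larger than nReturned suggests the index is only
    // narrowing on part of the $all and the rest is filtered per document:
    db.docs.find({ tags: { $all: ["red", "blue"] } }).explain("executionStats")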
[21:44:33] <Petazz> This is somewhat offtopic to this channel but
[21:44:41] <Petazz> So I just set mongoengine to use replica sets by setting the keyword arg replicaSet="rs0", so far so good. Then I tried going into the primary and calling rs.stepDown() so a new primary was elected
[21:44:46] <Petazz> This caused all to go haywire :/
[21:44:49] <Petazz> First it gave me a pymongo.errors.AutoReconnect: connection closed