[00:44:58] <tystr> with map/reduce, emit isn't "grouping" like it's supposed to
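A minimal map/reduce sketch of how emitted values get grouped per key (collection and field names below are made up): reduce is only invoked for keys that were emitted more than once, and it has to return the same shape as the emitted values, since it can be re-run on its own output -- both are common reasons the output looks "ungrouped".

    var map = function () {
        // one emit per document, keyed by the field we want to group on
        emit(this.category, 1);
    };
    var reduce = function (key, values) {
        // must return the same shape as the emitted values;
        // keys emitted only once skip reduce entirely
        return Array.sum(values);
    };
    db.items.mapReduce(map, reduce, { out: { inline: 1 } });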
[02:27:05] <nemosupremo> okay I have a problem. I have 1 machine, in a replica set, that's a secondary. The other machine went down, but unfortunately it was given a bad hostname so it can no longer connect to the first machine. I naively took the arbiter down before I realized I had this problem, because I wanted to change its configuration. How can I reissue commands to rsconf on a secondary machine?
[02:32:20] <nemosupremo> figured it out: {force: true}
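For reference, a sketch of a forced reconfig from the surviving secondary (the hostname below is made up): rs.reconfig() normally has to run on the primary, but { force: true } lets a secondary apply a new config when a majority is unreachable.

    cfg = rs.conf();
    // drop or fix the unreachable member before re-applying, e.g.:
    cfg.members = cfg.members.filter(function (m) { return m.host != "bad-host:27017"; });
    rs.reconfig(cfg, { force: true });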
[07:22:10] <EricL> Anyone have any experience upgrading from one to three config servers in a sharded environment?
[07:22:33] <EricL> Keep getting this error and Google isn't much help: "$err" : "could not initialize sharding on connection ip-10-5-25-231.ec2.internal:27016 :: caused by :: specified a different configdb!",
[07:38:03] <balboah> I got it once when trying to move a config server, I just re-copied the init script and it got fixed
[07:38:33] <EricL> We have like 40 mongos servers.
[07:45:51] <balboah> EricL: make sure --configdb specifies the correct hostname as seen by the config server. I get that error if, for example, I change the hostname to localhost instead of the hostname specified on the server
[07:46:25] <balboah> (both mongos and mongoconfig run on the same machine)
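A sketch of what balboah is describing (hostnames made up): with mirrored config servers, every mongos should be started with an identical --configdb string, same hosts in the same order; mismatched strings commonly produce errors like the one above.

    mongos --port 27017 \
        --configdb cfg1.ec2.internal:27016,cfg2.ec2.internal:27016,cfg3.ec2.internal:27016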
[08:07:00] <hazard_> does mongo support $in queries on shard keys, and if so, does it do so by querying only the necessary shards in parallel?
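For what it's worth, mongos does target $in on the shard key to just the shards that own those values; running explain() on the query shows which shards were consulted (collection and key names below are made up).

    db.users.find({ userId: { $in: [101, 202, 303] } }).explain();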
[11:30:40] <PDani> i have a collection which has documents like {a: 3, b: 4, payload: "foo"} where a and b are integers. i'd like to query, with a single bson (one network roundtrip), every distinct a with its greatest b value. is it possible?
[11:48:01] <remonvv> PDani, if you're on 2.1+ you can do it in a single roundtrip with the aggregation framework. All other options available to you are suboptimal (not a single roundtrip, JS-context based and thus slow, etc.)
[14:00:25] <PDani> could somebody give me an example of the usage of $first operator in aggregation framework?
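A sketch that answers both questions, using the {a, b, payload} documents from above: sort descending on b, then $group on a and take the $first value seen per group.

    db.testc.aggregate(
        { $sort: { b: -1 } },
        { $group: { _id: "$a", b: { $first: "$b" }, payload: { $first: "$payload" } } }
    );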
[14:01:52] <FerchoDB> if I have a collection of documents which have an array of integers, is it possible to search for documents in which any of the numbers in the array is greater than X?
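Yes -- comparisons match an array field if any element satisfies them. A sketch with a made-up field name:

    // matches documents where at least one element of "scores" is greater than 10
    db.coll.find({ scores: { $gt: 10 } });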
[14:13:14] <PDani> db.testc.aggregate({$sort: {b:1}, $group: {_id: "$a", b: {$last: "$b"}}}): why do I get the same results as with db.testc.aggregate({$sort: {b:-1}, $group: {_id: "$a", b: {$last: "$b"}}})? I reversed the sort order, so the last-seen element shouldn't be the same...
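Hard to say without the server version, but one likely cause is that both stages are inside a single document there, so they aren't applied as a two-stage pipeline; each stage should be its own document:

    db.testc.aggregate(
        { $sort: { b: 1 } },
        { $group: { _id: "$a", b: { $last: "$b" } } }
    );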
[14:20:12] <hpg301> Hi... I am looking for a nosql db to replace one now implemented in bdbxml. I will be storing documents having a tree structure, like an xml doc. I'm new to MongoDB and have read some intro material but did not see how I can query a MongoDB db to get anything like an XPath query. I look forward to reading comments on how MongoDB could be used for my application.
[14:24:30] <c0815> MongoDB is not an XML database... if you want to store trees, look into a graph db or something similar. MongoDB is not very suitable here
[14:25:28] <balboah> hpg301: there are probably many things you can do in XPath that are not possible. An object can contain other objects in a tree-like structure, but it's not easy to query different "paths"
[14:26:18] <neil__g> @hpg301 you should store your tree to fit your data structure to the query and not the other way around
[14:26:32] <neil__g> that wasn't very good english
[14:27:03] <NodeX> you should always store data to make the query the most efficient
[14:27:11] <NodeX> everything else can be done app side
[14:28:57] <c0815> data should be stored in a reasonably consistent and coherent way....
[14:29:16] <c0815> adjusting your data model for ultimate speed is unlikely to be desirable
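One way to get path-style queries out of MongoDB is the materialized-paths pattern: each node stores its ancestor path as a string, and subtree lookups become anchored regex queries (collection and values below are made up).

    db.nodes.insert({ _id: "book", path: null });
    db.nodes.insert({ _id: "chapter1", path: ",book," });
    db.nodes.insert({ _id: "section1.1", path: ",book,chapter1," });
    // all descendants of chapter1:
    db.nodes.find({ path: /^,book,chapter1,/ });
    // index the path field; prefix-anchored regexes like the one above can use it
    db.nodes.ensureIndex({ path: 1 });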
[15:18:48] <tazle__> has anyone experienced a replication failure where a newly elected primary (formerly a secondary) kept attempting to replicate from another node, and a secondary that formed the new cluster together with that primary would not attempt to replicate from it?
[17:24:38] <coopsh> what happens if a mongod --repair is started without --fork and the process is interrupted (e.g. ctrl-c)?
[17:24:52] <coopsh> is there a potential data loss? corruption?
[17:27:10] <c0815> unlikely since the original data should remain untouched
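If you want the original files left alone explicitly, a sketch assuming the MMAPv1-era --repairpath option (paths below are made up): it writes the rebuilt files to a separate directory rather than rewriting the dbpath in place.

    mongod --dbpath /data/db --repair --repairpath /data/db-repair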