[00:45:59] <bobinator60_> is there a way to $order_by a result set when the sort key is in an array like this: https://gist.github.com/rbpasker/68cd72645f7a51c7d0e9 ?
[01:07:17] <bobinator60_> no order_by love for bobinator60?
[02:19:07] <stephenmac7> Assuming the data in a document is not sensitive, is it secure to output the entirety of a document in an API (including ids and id references to other, possibly sensitive documents)?
[02:37:23] <mrapple> so, i have a collection that contains user stats
[02:37:39] <mrapple> these stats are calculated via map/reduce and dumped into a collection
[02:37:42] <mrapple> here's a sample http://pastebin.com/MMRtY4NK
[02:37:59] <mrapple> i want to allow sorting by different time period/public families/stats
[02:38:05] <mrapple> however, mongo apparently has a cap of 64 indexes per collection
[02:38:14] <mrapple> what's the best thing to do in this case?
[03:18:23] <timetocode> i've got a little matchmaking server with a table of players waiting for an opponent. when a new player tries to find a game, they take an opponent from the list of waiting players (i then delete this player from the list of waiting players). but because reading is nonblocking and writing is blocking, does this imply that the 'next available player' could be read multiple times before I successfully delete it from the table?
[03:20:15] <timetocode> i was thinking that if i performed an arbitrary change on the player before deleting it, this would lock that particular record, but I'm not sure if that's the mongo-ish way to get things done
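The usual mongo-ish answer to this pattern is findAndModify, which matches, returns, and removes a document in a single atomic operation, so two concurrent readers can never claim the same opponent. A minimal sketch in the mongo shell, with illustrative collection and field names:

    // Atomically claim one waiting player: the document is removed by the
    // same operation that returns it, so no other caller can read it first.
    var opponent = db.waiting_players.findAndModify({
        query: { status: "waiting" },
        sort: { joined_at: 1 },   // take the longest-waiting player
        remove: true
    });
    // opponent is null if nobody was waiting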
[07:59:10] <foofoobar> so I have a "jobs" table. The fields it's most queried by are: location (2dsphere), author._id, category. I plan to create an index for them by using: db.jobs.ensureIndex({location: "2dsphere", category: 1})
[07:59:19] <foofoobar> Is that correct? and how do I index author._id?
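Assuming all three fields usually appear in the same queries, one plausible shape is a single compound index (a 2dsphere key can be combined with ordinary ascending keys); whether author._id belongs in the compound index or in its own index depends on whether it is ever queried alone:

    // compound index covering the geo field plus the other common filters
    db.jobs.ensureIndex({ location: "2dsphere", category: 1, "author._id": 1 })
    // or, if author._id is often queried on its own, index it separately
    db.jobs.ensureIndex({ "author._id": 1 })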
[08:52:11] <Sverre> is there any way to remove a document using mongoimport? Like, if I import a.csv, remove one line, and then import it again, will the document be deleted?
[09:03:17] <aladdinwang> hi all, i am developing a django webapp.
[09:03:34] <aladdinwang> should django integrate with mongodb through the basic python driver, or use the django-mongodb-engine? which is better?
[14:27:27] <harenson> I mean, in the first query you get the document, then analyze it with Python or whatever you are using, then make the second query
[14:27:29] <revoohc> In a 3 member replica set, is there a way (without doing multiple stepDowns) to fail over to a specific host?
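One common way to fail over to a specific member without a chain of stepDowns, sketched on the assumption of one primary and two secondaries: freeze the secondary you don't want elected, then step down the primary so the remaining member wins the election.

    // on the secondary that should NOT become primary:
    rs.freeze(120)      // member won't seek election for 120 seconds
    // then on the current primary:
    rs.stepDown(60)     // the unfrozen secondary should now be elected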
[16:54:05] <mrapple> so, i have a collection that contains user stats, calculated via map/reduce and dumped into a collection
[16:54:09] <mrapple> here's a sample document from the collection: http://pastebin.com/MMRtY4NK
[16:54:18] <mrapple> i want to allow sorting by different time period/public families/stats, however, mongo apparently has a cap of 64 indexes per collection.
[16:54:21] <mrapple> what's the best thing to do in this case?
[16:55:06] <millun> DefaultDBEncoder can't seem to encode my DBRef. (java)
[16:55:16] <starfly> mrapple: you can sort without indexes, just takes longer… :)
[16:55:40] <mrapple> starfly: i know, but with 500k documents, indexes are required
[17:07:59] <starfly> mrapple: yes, on map-reduce, try aggregation to see if that can substitute (best I can offer)
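A sketch of what that substitution might look like, with hypothetical raw-event field names standing in for whatever the map/reduce currently consumes:

    // group the raw events per user, the way the map/reduce does,
    // using the aggregation framework instead
    db.events.aggregate([
        { $match: { period: "30d" } },
        { $group: { _id: "$userId",
                    kills:  { $sum: "$kills" },
                    deaths: { $sum: "$deaths" } } },
        { $sort: { kills: -1 } }
    ])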
[17:10:07] <mrapple> starfly: why would nonAtomic be write locking?
[17:11:07] <starfly> mrapple: I haven't been in the guts of the map-reduce, can't tell ya...
[17:55:46] <n06> does anyone know of a chef cookbook out there that I can use to bake a mongodb cluster?
[18:07:31] <millun> morning. i got a query in Java, specifically: Query query = new Query(Criteria.where("active").is(criteria.getActive()).andOperator(Criteria.where("animal").in(animals))); where animals is @DBRef
[18:08:02] <millun> however, i can't seem to execute 'find' with that query, because the Animal class can't be serialized.
[18:09:09] <millun> i'm running out of ideas. i am using spring-data-mongo. I don't have @Indexed on that @DBRef, and that @DBRef contains another @DBRef
[18:18:17] <Kzim> i have a problem: i'm missing an admin user on my mongos. everything is good on each mongod, but i can't log in on the mongos. is there a way to fix this without stopping the service, please?
[18:53:10] <duncancmt> Hi! How can I print "ugly" json from the mongo shell? I've seen this http://stackoverflow.com/questions/14752450/mongodb-print-json-without-whitespace-i-e-unpretty-json but I really don't want to try removing whitespace with a regex
[19:02:13] <SpNg> I'm working on setting up MMS on an AWS VPC. My database server is not accessible outside of my VPC. What is the best way to setup MMS from one of the other nodes on the network?
[19:02:27] <duncancmt> nevermind, I figured it out
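duncancmt never shared the fix, but one approach that works in the mongo shell (collection name illustrative) is to serialize the document yourself instead of regex-stripping pretty-printed output:

    // JSON.stringify emits one-line JSON (shell types such as ObjectId
    // may not come out as strict extended JSON)
    JSON.stringify(db.things.findOne())
    // the legacy shell also ships a helper for exactly this:
    printjsononeline(db.things.findOne())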
[19:03:16] <tystr> I don't think MMS needs ingress
[19:03:39] <tystr> just needs to be able to connect to 10gen
[19:05:42] <SpNg> tystr: I have the database server set up to only communicate on my VPC; I have a fileserver and a web server that can communicate with the outside world. if I'm not mistaken, MMS needs access to the mongo log files. Is there a way to use MMS by connecting to the mongo daemon in my network?
[19:06:43] <starfly> Wish 10gen would allow local installations of the MMS stack, as they originally indicated they would
[19:06:43] <tystr> I *think* you can run the agent anywhere, as long as it can talk to your mongo instances and 10gen's url
[19:08:47] <SpNg> tystr: that would be great. I have not found the documentation for how to set that up. I have only seen the line nohup python agent.py > /[LOG-DIRECTORY]/agent.log 2>&1 &
[19:09:00] <SpNg> and in my case the log directory would be over the network
[19:09:26] <tystr> that log dir is just where the agent logs its output, not the mongo oplog files
[19:09:46] <SpNg> it says: Replace "[LOG-DIRECTORY]" with the path to your MongoDB logs.
[19:10:07] <SpNg> maybe I misinterpreted that piece. I read that as MMS is parsing the log files
[19:11:47] <tystr> I believe the agent actually connects to your mongod instances
[19:13:55] <SpNg> tystr: I feel like I would need to point the agent to the location on my network to connect to mongo. I will give this a run real quick and see what happens
[19:14:49] <tystr> "You can run the agent on any system that can connect to the MongoDB instances you want to monitor. As long as it can connect to each instance, you can use a single agent to do all the monitoring. Do be sure that the agent can make outgoing connections via HTTPS on port 443."
[19:19:19] <SpNg> tystr: I see it now. in MMS I can set up the hostname and port
[20:02:11] <tystr> how can I determine if my cpu load is coming from reads or writes?
[20:15:30] <SethKniceley> hi, everyone! I'm a Mongo n00b, and was wondering if someone could help me out with a question on date storage.
[20:16:00] <SethKniceley> I'm working in PHP, and the app I'm working on will require searching by dates and times for events that happen in the system
[20:16:34] <SethKniceley> e.g. created_on, last_updated_on, registered_on, deleted_on, etc
[20:17:14] <SethKniceley> I was planning on storing them using Date, but someone was telling me that that was a mistake and that I should only store dates in seconds since epoch.
[20:17:24] <SethKniceley> Is there a preferred best practice? pros and cons?
[20:23:11] <likarish> I like to use Date objects because they're human readable. Makes debugging easier but there may be reasons to use seconds since epoch too.
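For what it's worth, a BSON Date is itself stored as a 64-bit millisecond count since the epoch, so Dates index, sort, and range-query exactly like numbers while staying readable. A quick shell sketch with made-up collection and field names:

    // store a real Date...
    db.events.insert({ type: "registered", registered_on: new Date() })
    // ...and range-query it like any other ordered value
    db.events.find({ registered_on: { $gte: ISODate("2013-01-01T00:00:00Z") } })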
[21:11:15] <Skunkwaffle> If anyone has a second, I'm trying to figure out if it's possible to convert the output of an aggregation pipeline, which currently looks like: [{'value': 1, 'key': 'one'}, {'value':2, 'key': 'two'}, ...], into an object so it looks like: {one: 1, two: 2, ...}. Can anyone help explain to me if/how this can be done?
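The pipeline itself emits documents, so the reshaping is normally done client-side. A minimal shell sketch, assuming the pipeline already yields {key, value} documents (in older shells where aggregate() returns {result: [...]} rather than a cursor, iterate .result instead):

    var obj = {};
    db.stats.aggregate(pipeline).forEach(function (doc) {
        obj[doc.key] = doc.value;   // -> {one: 1, two: 2, ...}
    });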
[22:26:14] <tystr> how would I go about figuring out if cpu is mainly reads or writes?
[22:26:43] <tystr> primary is chugging along at ~30-40% cpu usage, and we're expecting an influx of traffic in a few days
[22:27:13] <tystr> trying to track down if we need to just cache some heavy (yet indexed) queries,
[22:27:46] <tystr> our application is quite write heavy, so I suspect that's the cause, but I'd like to know for sure
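One way to answer this without guessing, sketched in the shell: serverStatus() keeps per-operation counters since startup, and mongostat prints the same counters live from the command line:

    // server-wide operation counters since mongod startup
    var oc = db.serverStatus().opcounters
    printjson({
        reads:  oc.query + oc.getmore,
        writes: oc.insert + oc.update + oc['delete']   // 'delete' is reserved in JS
    })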