[00:41:44] <johnisbest> Hey, if I use one giant mongo document, is there an easy way to only return a value two arrays deep? For example: http://pastebin.com/3KugYku9 Can I use a mongo query to find a pet object by the pet's ID?
[00:43:21] <johnisbest> I thought I could use the aggregation pipeline and use a redact but I do not know if that would work here
[00:44:10] <johnisbest> Also I want to only return the pet object and not most of the other data. I am hoping I don't need to waste time looping through arrays to find the object
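A sketch of the pattern being asked about, assuming a document shape like `{ owners: [ { pets: [ { id, ... } ] } ] }` (the field names `owners`, `pets`, and `id` are assumptions, since the pastebin is no longer available). Server-side this maps to an aggregation with two `$unwind` stages rather than `$redact`; the plain-JS equivalent below shows the same logic:

```javascript
// In MongoDB this corresponds roughly to:
//   db.coll.aggregate([
//     { $unwind: "$owners" },
//     { $unwind: "$owners.pets" },
//     { $match: { "owners.pets.id": petId } },
//     { $project: { _id: 0, pet: "$owners.pets" } }
//   ])
// which returns only the matching pet object. Plain-JS equivalent:
function findPet(doc, petId) {
  return doc.owners
    .flatMap(owner => owner.pets)   // the two $unwind stages
    .find(pet => pet.id === petId); // the $match
}

const doc = {
  _id: 1,
  owners: [
    { name: "a", pets: [{ id: 10, species: "cat" }] },
    { name: "b", pets: [{ id: 20, species: "dog" }] }
  ]
};
console.log(findPet(doc, 20)); // { id: 20, species: 'dog' }
```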
[03:13:45] <aldwinaldwin> goodmorning, goodevening, ... I have a question about a shard key that I try to use
[03:14:12] <aldwinaldwin> more specifically, something that is not really advised, but just trying to use an ISODate as the shard key
[03:14:26] <aldwinaldwin> the documents are already created, the index {datetime:1} is created, all documents have a value for 'datetime'
[03:14:40] <aldwinaldwin> sh.shardCollection("testing.test",{datetime:1}) => "errmsg" : "found missing value in key { : null }
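That error means the server found at least one document where the shard-key field is missing or explicitly null, despite the index existing. A sketch of the check `sh.shardCollection()` is effectively doing, over an in-memory sample:

```javascript
// In the shell, the offending documents can be found with:
//   db.test.find({ $or: [ { datetime: { $exists: false } },
//                         { datetime: null } ] })
// Plain-JS version of that predicate:
function missingShardKey(docs, field) {
  return docs.filter(d => !(field in d) || d[field] === null);
}

const docs = [
  { _id: 1, datetime: new Date("2015-07-01") },
  { _id: 2, datetime: null },
  { _id: 3 }
];
console.log(missingShardKey(docs, "datetime").map(d => d._id)); // [ 2, 3 ]
```

Separately, a monotonically increasing ISODate shard key sends all new inserts to the same chunk, which is why it is "not really advised".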
[06:51:34] <johnflux_> to fix the few records that are wrong
[06:51:43] <johnflux_> but it's been 8 hours now and it's still running
[06:52:06] <johnflux_> but when I do 'db.currentOP()' it doesn't show anything
[06:52:16] <johnflux_> task manager shows mongo using 10% cpu
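One thing worth checking: the shell is case-sensitive, and the helper is `db.currentOp()`, not `db.currentOP()`, which may be why it shows nothing. Its output is `{ inprog: [...] }`; a sketch of filtering that for long-running operations, over sample data:

```javascript
// db.currentOp() returns { inprog: [...] }; each entry for a running op
// has a secs_running field. Filtering for ops running longer than a cutoff:
function longRunning(inprog, minSecs) {
  return inprog.filter(op => (op.secs_running || 0) >= minSecs);
}

const inprog = [
  { opid: 1, op: "update", secs_running: 28800 },
  { opid: 2, op: "query", secs_running: 0 }
];
console.log(longRunning(inprog, 60).map(op => op.opid)); // [ 1 ]
```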
[08:24:14] <vagelis> Hello, I have 2 fields for Date: created and modified. If a document has created it doesn't have modified, and if it has modified it doesn't have created. How can I sort these documents by date? Do I just use multiple keys in the sort? modified: 1, created: 1 ?
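A compound sort on `{ modified: 1, created: 1 }` won't interleave the two fields. One approach (a sketch): coalesce them into a single field with `$ifNull` in an aggregation and sort on that; the plain-JS equivalent below shows the logic:

```javascript
// Server-side this corresponds roughly to:
//   db.coll.aggregate([
//     { $project: { doc: "$$ROOT",
//                   sortDate: { $ifNull: ["$modified", "$created"] } } },
//     { $sort: { sortDate: 1 } }
//   ])
// Plain-JS equivalent: sort on whichever date field is present.
function sortByDate(docs) {
  return [...docs].sort(
    (a, b) => (a.modified ?? a.created) - (b.modified ?? b.created)
  );
}

const docs = [
  { _id: 1, created: new Date("2015-07-03") },
  { _id: 2, modified: new Date("2015-07-01") },
  { _id: 3, created: new Date("2015-07-02") }
];
console.log(sortByDate(docs).map(d => d._id)); // [ 2, 3, 1 ]
```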
[10:10:13] <penthief> Hi, I have a CodecConfigurationException pasted here: http://pastie.org/10298155 Could someone tell me what I am doing wrong?
[10:23:33] <penthief> It would be nice if Filters.all supported streams.
[10:52:56] <m4k> I have this http://pastebin.com/548CiKEj document How do I filter by property_type I tried this db.meta.find({"specifications.$.property_type":"commercial"})
[11:54:14] <Soapie> Hi. I'm fairly new to MongoDB (I'm from a SQL background). I'm using the c# .net driver and having trouble working out some of the queries. Am I in the right place to get help with the C# .NET MongoDB driver?
[11:58:23] <cheeser> kind of. not many of us are c# users from what I can tell. i work with the c# driver guys so I have a passing familiarity so might be able to help. the users list is probably your best bet but we'll do what we can here.
[12:01:03] <m4k> jelle: There is one document field, specifications, and it has many subdocuments as a dict. So it's not working with $in
[12:01:55] <dijack> one question...what's the difference between Priority 0 member not able to TRIGGER an election and being ABLE TO VOTE in an election?
[12:02:57] <m4k> jelle: I tried this db.meta.find({"specifications.property_type": { $in: ["residential"]}}) and db.meta.find({"specifications.$.property_type": { $in: ["residential"]}})
[12:05:05] <jelle> m4k: I can't insert your document
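The positional `$` belongs in projections and updates, not in a `find()` query path. If `specifications` were an array of subdocuments, the plain dotted path `db.meta.find({"specifications.property_type": "commercial"})` would match. But if it is a dict (an object with arbitrary keys, as m4k describes), the server can't reach into unknown keys with a dotted path; the equivalent check has to iterate the values. A sketch of that logic, with an assumed document shape since the paste is gone:

```javascript
// Client-side equivalent of "match any subdocument value in a dict-shaped
// field whose property_type equals the wanted value":
function hasPropertyType(doc, wanted) {
  return Object.values(doc.specifications)
    .some(spec => spec.property_type === wanted);
}

const doc = {
  specifications: {
    spec1: { property_type: "commercial" },
    spec2: { property_type: "residential" }
  }
};
console.log(hasPropertyType(doc, "commercial"));  // true
console.log(hasPropertyType(doc, "industrial"));  // false
```

Restructuring `specifications` as an array of subdocuments would let the plain dotted-path query work server-side.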
[12:12:00] <Soapie> Thanks @cheeser. At the moment, I'm struggling to do what, in SQL terms, would be a datediff, within an aggregation. Basically I'm trying to get the number of days elapsed since the last event that a user attended. So you'd see a list of user records with the number of days since they last attended an event.
[12:13:22] <Soapie> I can't seem to subtract 2 dates in a projection
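`$subtract` applied to two dates in an aggregation yields the difference in milliseconds, so a SQL-style DATEDIFF-in-days is that result divided by 86400000. A sketch (the field name `lastEventDate` is an assumption):

```javascript
// Server-side, roughly:
//   { $project: { daysSince: { $divide: [
//       { $subtract: [new Date(), "$lastEventDate"] }, 86400000 ] } } }
// The same arithmetic in JS:
function daysBetween(later, earlier) {
  return Math.floor((later - earlier) / 86400000); // 86400000 ms per day
}

console.log(daysBetween(new Date("2015-07-10"), new Date("2015-07-03"))); // 7
```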
[14:19:39] <amitprakash> Hi, http://docs.mongodb.org/master//release-notes/2.6-compatibility/#index-key-length-incompatibility tells me that repairDatabase will fail on encountering the index too large issues
[14:19:50] <amitprakash> How can I repair an old database then?
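One escape hatch worth knowing about (a sketch; check the release notes for the exact version in use): the `failIndexKeyTooLong` server parameter restores the pre-2.6 behavior of skipping over-long index keys instead of erroring, which can let `repairDatabase` complete. The skipped documents are then absent from the index, so the long-term fix is still trimming or hashing the oversized values:

```shell
# Start mongod with the old skip-long-keys behavior:
mongod --setParameter failIndexKeyTooLong=false
# or set it at runtime:
# db.adminCommand({ setParameter: 1, failIndexKeyTooLong: false })
```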
[15:30:21] <ciwolsey> is there an easy way to check the size of a doc?
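In the mongo shell there is a built-in for this: `Object.bsonsize(db.coll.findOne({...}))` returns the document's BSON size in bytes. Outside the shell, a rough approximation (JSON rather than BSON, so the number differs) is the serialized length:

```javascript
// Approximate a document's size by its serialized JSON byte length.
// This is NOT the BSON size -- use Object.bsonsize() in the shell for that.
function approxSize(doc) {
  return Buffer.byteLength(JSON.stringify(doc), "utf8");
}

console.log(approxSize({ a: 1, b: "hello" })); // 19
```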
[15:50:45] <amitprakash> any ideas on the repairdatabase issue?
[16:58:20] <zwarag> Very noob question here. I'm filling a collection with the value of my CPU temperature every second. How do I keep an average of the last 60 entries stored?
[17:40:36] <daidoji1> zwarag: document dbs aren't very good for aggregating
[17:41:07] <daidoji1> zwarag: however 60 items is small, just make a capped collection, select all the data and average in a script?
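A sketch of the capped-collection suggestion in plain JS: keep only the last N readings and average them. With MongoDB, the collection would be created with something like `db.createCollection("temps", { capped: true, size: 4096, max: 60 })` and the script would average the result of `find()`:

```javascript
// Fixed-size ring buffer: the in-memory analogue of a capped collection
// with max: 60, plus the averaging script daidoji1 describes.
class RollingAverage {
  constructor(size) {
    this.size = size;
    this.values = [];
  }
  push(v) {
    this.values.push(v);
    if (this.values.length > this.size) this.values.shift(); // drop oldest
  }
  average() {
    return this.values.reduce((a, b) => a + b, 0) / this.values.length;
  }
}

const avg = new RollingAverage(60);
for (let t = 1; t <= 120; t++) avg.push(t); // readings 61..120 survive
console.log(avg.average()); // 90.5
```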
[17:42:35] <daidoji1> can anyone help me improve this query? It looks like its taking a while to create my export file
[17:43:03] <daidoji1> a better way than this date comparison is what I'm asking for basically
[17:50:39] <amitprakash> How can I repair an old database that throws up too large to index errors on repairDatabase?
[18:33:41] <daidoji1> this fails to return all records (only returning about 20k). Would someone mind helping me with what I may be doing incorrectly?
[18:34:51] <blizzow> I upgraded my sharded cluster to 3.0.4 yesterday. After the upgrade, the load average on my secondaries for one shard is regularly going ballistic. I fired up mongotop and see a few collections returning really high read times (30000ms-110000ms). I see some query against one db.collection that's taking a long time...is there a way to figure out what the source of that query is?
[18:34:55] <blizzow> I have 4 CPUs on my mongod instances and both secondaries are pegged at 400%CPU.
[18:35:00] <blizzow> This was NOT happening when we were using 2.6.x
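One way to trace the source of a slow query (a sketch): enable the profiler with `db.setProfilingLevel(1, 100)` on the affected member, then inspect `db.system.profile`; each slow entry records `millis` and a `client` field with the originating address. The filtering step, over sample profile entries:

```javascript
// Filter profiler output for slow operations and report where they came from.
function slowOps(profile, minMillis) {
  return profile.filter(p => p.millis >= minMillis);
}

const profile = [
  { op: "query", ns: "app.events", millis: 45000, client: "10.0.0.5:52113" },
  { op: "query", ns: "app.users", millis: 12, client: "10.0.0.9:40211" }
];
console.log(slowOps(profile, 1000).map(p => p.client)); // [ '10.0.0.5:52113' ]
```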
[18:35:02] <daidoji1> that's weird, when I run it in console with explain I get this error https://gist.github.com/daidoji/59c5fd1b3a3cac4f2fc6
[18:36:43] <daidoji1> google says to delete my mongod.lock??
[18:44:19] <deathanchor> you can use older client with newer server, but not vice-versa
[18:44:44] <daidoji1> isn't it usually the other way around?
[18:45:13] <deathanchor> the server is backward compatible to older clients, but clients aren't backwards compatible to older servers
[18:45:31] <deathanchor> I believe it is documented somewhere that way
[18:46:26] <daidoji1> deathanchor: okay I'll take your word on it but it seems weird they would make that decision
[18:47:29] <deathanchor> it makes it safe to upgrade your servers then your clients
[18:48:42] <daidoji1> and that's why it's weird. There's nothing I want to do more with an enterprise grade database server full of my business's most important information than upgrade it all the time.
[18:48:49] <daidoji1> nobody's ever had a problem doing that before
[18:49:04] <daidoji1> whereas clients are lightweight and can be replaced easily in the typical scenario
[18:51:48] <daidoji1> deathanchor: anyways thanks for your help, that probably is the issue come to think of it
[18:52:01] <deathanchor> I'm not sure, but if I have issues, I usually verify that I'm using like-for-like server/client versions.
[18:52:23] <deathanchor> if I can replicate it with that, then I know my setup or command is wrong.
[18:52:39] <deathanchor> daidoji1: also it is quiet here during the summer months.
[22:29:38] <rusty78> Sorry someone was at the door
[22:29:54] <rusty78> it's actually to store socket connections - basically when a user enters the website they connect to a socket
[22:30:09] <rusty78> I store their socket.id and username as a document in the collection, and when they leave I delete it
[22:30:17] <rusty78> However I reference these sockets rather often in the site
[22:30:37] <rusty78> but at the same time they get created/removed very fast, so would indexing cause a large performance drop if I used one?
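For a collection that is read often, an index on the lookup field generally pays for itself even with fast churn: without one, every lookup is a full collection scan, while maintaining the index on each insert/delete is comparatively cheap. In the shell that would be something like `db.sockets.createIndex({ socketId: 1 }, { unique: true })` (`socketId` is an assumed field name). The trade-off is the same as a Map versus an array scan:

```javascript
// The Map plays the role of the index: lookups stop being O(n) scans,
// and keeping it current on connect/disconnect is a cheap per-op cost.
const byId = new Map();

function connect(socketId, username) {
  byId.set(socketId, { socketId, username }); // index maintenance on insert
}
function disconnect(socketId) {
  byId.delete(socketId); // index maintenance on delete
}
function lookup(socketId) {
  return byId.get(socketId); // O(1) here; O(log n) for a B-tree index
}

connect("s1", "alice");
connect("s2", "bob");
disconnect("s1");
console.log(lookup("s2").username); // bob
console.log(lookup("s1")); // undefined
```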
[22:46:59] <falieson> I want to build my own analytics, reporting, and logging features for my app but I'm trying to decide if I should keep it in the same db? AnalyticsPages AnalyticsUsers AnalyticsEvents AnalyticsLogs or should I just have 1 collection Analytics? and do something like record_type: "pages, users, logs, events,"
[22:48:41] <falieson> i realize there are multiple questions here: should the app and the analytics share the db, and should I split parts of the analytics module into their own collections