PMXBOT Log file Viewer


#mongodb logs for Friday the 17th of July, 2015

[00:41:44] <johnisbest> Hey, if I use one giant mongo document, is there an easy way to only return a value two arrays deep? For example: http://pastebin.com/3KugYku9 Can I use a mongo query to find a pet object by the pet's ID?
[00:43:21] <johnisbest> I thought I could use the aggregation pipeline and use a redact but I do not know if that would work here
[00:44:10] <johnisbest> Also I want to only return the pet object and not most of the other data. I am hoping I don't need to waste time with looping through arrays to find the object
[03:13:45] <aldwinaldwin> goodmorning, goodevening, ... I have a question about a shard key that I try to use
[03:14:12] <aldwinaldwin> more specifically, what is not really advised, but just trying to use an ISODate as shard key
[03:14:26] <aldwinaldwin> the documents are already created, the index {datetime:1} is created, all documents have a value for 'datetime'
[03:14:40] <aldwinaldwin> sh.shardCollection("testing.test",{datetime:1}) => "errmsg" : "found missing value in key { : null }
[03:14:52] <aldwinaldwin> mongos> db.test.find({ _id: ObjectId('55a86b5e8d79543ae1bf5989')})
[03:14:59] <aldwinaldwin> { "_id" : ObjectId("55a86b5e8d79543ae1bf5989"), "info" : "blablabla", "unixtime" : "2015-07-17 02:41:34.797672", "datetime " : ISODate("2015-07-17T02:41:34.797Z") }
[03:16:13] <aldwinaldwin> so... I don't really understand why the shardCollection function says there is a null value in the datetime
[03:51:25] <joannac> aldwinaldwin: "datetime " with a space != "datetime" with no space
[03:53:14] <aldwinaldwin> joannac: let me check
[03:53:46] <aldwinaldwin> joannac: oh, I'll try again, thx
[04:01:43] <aldwinaldwin> joannac: thank you soooo much, pfff, hours lost on this. oh well, lesson learned
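The bug joannac spotted is easy to reproduce outside MongoDB: a field name with a trailing space is a distinct key, so an index or shard key on the trimmed name finds only nulls. A minimal plain-JavaScript sketch of the same mismatch:

```javascript
// "datetime " (trailing space) is a different key than "datetime",
// which is why the shard key on {datetime: 1} saw a null value.
const doc = {
  _id: "55a86b5e8d79543ae1bf5989",
  info: "blablabla",
  "datetime ": new Date("2015-07-17T02:41:34.797Z"), // note the trailing space
};

// Looking up the intended field name finds nothing:
const missing = doc["datetime"] === undefined; // true

// Listing the keys exposes the stray space:
const suspectKeys = Object.keys(doc).filter((k) => k !== k.trim());
console.log(missing, suspectKeys); // true [ 'datetime ' ]
```

The same `Object.keys`-style inspection in the mongo shell (printing the keys of one sample document) would have surfaced the problem quickly.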
[06:51:17] <johnflux_> I did this command:
[06:51:21] <johnflux_> > db.actionlaunch.find({"payload.rank" : {$exists : true}}).forEach( function(obj) { obj.payload.rank = new NumberInt( obj.payload.rank ); db.actionlaunch.save(obj); } );
[06:51:34] <johnflux_> to fix the few records that are wrong
[06:51:43] <johnflux_> but it's been 8 hours now and it's still running
[06:52:06] <johnflux_> but when I do 'db.currentOp()' it doesn't show anything
[06:52:16] <johnflux_> task manager shows mongo using 10% cpu
[08:24:14] <vagelis> Hello, I have 2 fields for Date: created and modified. If a document has created it doesn't have modified, and if it has modified it doesn't have created. How can I sort these documents by date? Do I just use multiple keys in the sort? modified: 1, created: 1 ?
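A sort on two keys won't interleave the two dates the way vagelis wants; one approach is to coalesce them first (in an aggregation, e.g. an `$ifNull` projection feeding a `$sort`). The same idea in a plain-JavaScript sketch, with hypothetical documents:

```javascript
// Sort documents by whichever of modified/created is present,
// i.e. ORDER BY COALESCE(modified, created).
const docs = [
  { _id: 1, created: new Date("2015-07-01") },
  { _id: 2, modified: new Date("2015-06-15") },
  { _id: 3, created: new Date("2015-07-10") },
];

const sortDate = (d) => d.modified || d.created;
docs.sort((a, b) => sortDate(a) - sortDate(b));

console.log(docs.map((d) => d._id)); // [ 2, 1, 3 ]
```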
[10:10:13] <penthief> Hi, I have a CodecConfigurationException pasted here: http://pastie.org/10298155 Could someone tell me what I am doing wrong?
[10:23:33] <penthief> It would be nice if Filters.all supported streams.
[10:52:56] <m4k> I have this http://pastebin.com/548CiKEj document How do I filter by property_type I tried this db.meta.find({"specifications.$.property_type":"commercial"})
[11:37:54] <m4k> I have this http://pastebin.com/548CiKEj document How do I filter by property_type I tried this db.meta.find({"specifications.$.property_type":"commercial"})
[11:52:10] <jelle> m4k: you have to use $in
[11:52:22] <dijack> .
[11:52:28] <dijack> what's a shard guys?
[11:52:33] <jelle> m4k: http://docs.mongodb.org/manual/reference/operator/query/in/
[11:53:00] <dijack> I understand that a shard can be disjoint data on separate collections
[11:53:04] <dijack> is that true?
[11:53:38] <dijack> so shard is like relational data in RDBMS?
[11:53:57] <jelle> no?
[11:54:04] <jelle> dijack: http://docs.mongodb.org/manual/sharding/
[11:54:14] <Soapie> Hi. I'm fairly new to MongoDB (I'm from a SQL background). I'm using the c# .net driver and having trouble working out some of the queries. Am I in the right place to get help with the C# .NET MongoDB driver?
[11:58:23] <cheeser> kind of. not many of us are c# users from what I can tell. i work with the c# driver guys so I have a passing familiarity so might be able to help. the users list is probably your best bet but we'll do what we can here.
[12:01:03] <m4k> jelle: There is one subdocument, specifications, and it has many subdocuments as a dict. So it's not working with $in
[12:01:21] <jelle> I don't see why
[12:01:32] <jelle> oh sigh
[12:01:55] <dijack> one question...what's the difference between Priority 0 member not able to TRIGGER an election and being ABLE TO VOTE in an election?
[12:02:19] <dijack> I guess I know what voting is
[12:02:23] <dijack> but, not sure what trigger is
[12:02:27] <dijack> can anyone explain?
[12:02:57] <m4k> jelle: I tried this db.meta.find({"specifications.property_type": { $in: ["residential"]}}) and db.meta.find({"specifications.$.property_type": { $in: ["residential"]}})
[12:05:05] <jelle> m4k: I can't insert your document
[12:06:05] <m4k> jelle: just add closing bracket
[12:06:09] <m4k> at the end
[12:08:26] <jelle> m4k: looks like that .$. does not work?
[12:09:18] <m4k> Yes and I'm not sure why
[12:09:59] <m4k> Does $ only work in projections?
[12:10:09] <jelle> oh
[12:10:25] <jelle> then you have to use aggregation
[12:11:14] <m4k> Ok thanks
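The pastebin is gone, but per the discussion `specifications` holds subdocuments keyed as a dict rather than an array, which is why neither dot-notation with `$` nor `$in` matches. A plain-JavaScript sketch (hypothetical shape) of the filtering an aggregation would have to express:

```javascript
// Hypothetical documents: specifications is an object of subdocuments,
// not an array, so "specifications.$.property_type" cannot match.
const meta = [
  { _id: 1, specifications: { s1: { property_type: "commercial" } } },
  { _id: 2, specifications: { s1: { property_type: "residential" } } },
];

// Keep documents where any subdocument has the wanted property_type:
const matches = meta.filter((doc) =>
  Object.values(doc.specifications).some(
    (spec) => spec.property_type === "commercial"
  )
);
console.log(matches.map((d) => d._id)); // [ 1 ]
```

Reshaping `specifications` into an array of subdocuments would let a plain `{"specifications.property_type": "commercial"}` query (and an index on it) work instead.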
[12:12:00] <Soapie> Thanks @cheeser. At the moment, I'm struggling to do what, in SQL terms, would be a datediff, within an aggregation. Basically I'm trying to get the number of days elapsed since the last event that a user attended. So you'd see a list of user records with the number of days since they last attended an event.
[12:13:22] <Soapie> I can't seem to subtract 2 dates in a projection
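For what it's worth, `$subtract` applied to two date values in a `$project` stage does yield the difference in milliseconds, so getting days means also dividing by 86,400,000. The arithmetic in plain JavaScript, with made-up dates:

```javascript
// Days elapsed between two dates: subtracting Date objects yields
// milliseconds (as $subtract does on dates in an aggregation),
// and there are 86,400,000 ms per day.
const MS_PER_DAY = 24 * 60 * 60 * 1000;
const lastAttended = new Date("2015-07-10T00:00:00Z");
const now = new Date("2015-07-17T00:00:00Z");
const daysSince = Math.floor((now - lastAttended) / MS_PER_DAY);
console.log(daysSince); // 7
```

In pipeline terms that is a `$subtract` wrapped in a `$divide` by 86400000 inside the projection.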
[14:19:39] <amitprakash> Hi, http://docs.mongodb.org/master//release-notes/2.6-compatibility/#index-key-length-incompatibility tells me that repairDatabase will fail on encountering the index too large issues
[14:19:50] <amitprakash> How can I repair an old database then?
[15:30:21] <ciwolsey> is there an easy way to check the size of a doc?
[15:34:48] <ciwolsey> nevermind, figured it out
[15:50:45] <amitprakash> any ideas on the repairdatabase issue?
[16:58:20] <zwarag> Very noob question here. Filling a collection with the value of my CPU temperature every second. How do I keep an average of the last 60 entries stored?
[17:40:36] <daidoji1> zwarag: document dbs aren't very good for aggregating
[17:41:07] <daidoji1> zwarag: however 60 items is small, just make a capped collection, select all the data and average in a script?
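daidoji1's suggestion (a capped collection plus a small script) can be sketched in plain JavaScript; the buffer here stands in for the capped collection, which in MongoDB would discard the oldest documents automatically:

```javascript
// Rolling average over the most recent 60 samples.
const CAP = 60;
const samples = [];

function record(temp) {
  samples.push(temp);
  if (samples.length > CAP) samples.shift(); // emulate a capped collection
}

function average() {
  return samples.reduce((sum, t) => sum + t, 0) / samples.length;
}

// Feed 100 one-per-second readings; only the last 60 are kept.
for (let i = 1; i <= 100; i++) record(i);
console.log(samples.length, average()); // 60 70.5
```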
[17:42:35] <daidoji1> can anyone help me improve this query? It looks like its taking a while to create my export file
[17:42:37] <daidoji1> nohup mongoexport --host mongotest.data.revup.com -d revup -c st --query '{date: {$gte: Date(1420070400000)}}' 2> st_export.log | xz > /mnt/st_export_$(date "+%Y%m%d").xz &
[17:43:03] <daidoji1> a better way than this date comparison is what I'm asking for basically
[17:50:15] <amitprakash> Hi, http://docs.mongodb.org/master//release-notes/2.6-compatibility/#index-key-length-incompatibility tells me that repairDatabase will fail on encountering the index too large issues
[17:50:39] <amitprakash> How can I repair an old database that throws up too large to index errors on repairDatabase?
[18:00:22] <diegoaguilar> Hello Im trying mongo --quiet production-season-lf-199-2015 --eval 'printjson(db.groupg18.find({_id: ObjectId("55a550103711fed46621fd97")}, {calendar: true}).pretty())' > c.js
[18:00:32] <diegoaguilar> but c.js instead of results contains a lot of mongo js code
[18:00:35] <diegoaguilar> what am I doing wrong?
[18:18:50] <daidoji1> diegoaguilar oh yeah not that way
[18:19:08] <daidoji1> try db.groupg18.find({_id: ObjectId("55a550103711fed46621fd97")}, {calendar: true}).forEach(printjson)
[18:19:36] <daidoji1> the way you're doing it is calling printjson on the cursor, which just prints out the cursor definition probably
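daidoji1's diagnosis is right: cursors are lazy, so `printjson` on the cursor prints the cursor object rather than the documents. A rough plain-JavaScript analogue using a generator:

```javascript
// A cursor is lazy: stringifying it describes the cursor,
// not the documents it would return.
function* cursor(docs) {
  yield* docs;
}

const c = cursor([{ calendar: [1, 2] }]);
const printed = String(c); // describes the generator, not the data

// Iterating the cursor (forEach(printjson) in the shell) reaches the docs:
const results = [];
for (const doc of c) results.push(doc);
console.log(printed, results.length);
```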
[18:33:18] <daidoji1> nohup mongoexport --host mongotest.data.revup.com -d revup -c st --query '{date: {$gte: new Date(1420070400000)}}' 2> st_export.log | xz > /mnt/st_export_$(date "+%Y%m%d").xz &
[18:33:41] <daidoji1> this fails to return all records (only returning about 20k). Would someone mind helping me with what I may be doing incorrectly?
[18:34:51] <blizzow> I upgraded my sharded cluster to 3.0.4 yesterday. After the upgrade, the load average on my secondaries for one shard is regularly going ballistic. I fired up mongotop and see a few collections returning really high read times (30000ms-110000ms). I see some query against one db.collection that's taking a long time...is there a way to figure out what the source of that query is?
[18:34:55] <blizzow> I have 4 CPUs on my mongod instances and both secondaries are pegged at 400%CPU.
[18:35:00] <blizzow> This was NOT happening when we were using 2.6.x
[18:35:02] <daidoji1> thats weird, when I run it in console with explain I get this error https://gist.github.com/daidoji/59c5fd1b3a3cac4f2fc6
[18:36:43] <daidoji1> google says to delete my mongod.lock??
[18:38:48] <deathanchor> google doesn't say that
[18:38:57] <daidoji1> is there a problem using the date constructor in a query like that?
[18:39:01] <daidoji1> deathanchor: yeah it does
[18:39:16] <daidoji1> http://stackoverflow.com/questions/7958228/mongodb-running-but-cant-connect-using-shell
[18:39:23] <daidoji1> but I didn't do it
[18:39:27] <daidoji1> cause that seems incorrect
[18:40:19] <daidoji1> deathanchor: but do you know what the issue is?
[18:42:32] <deathanchor> from the error you never got a connection successfully
[18:42:53] <deathanchor> perhaps you are running different versions of mongod and mongo?
[18:42:55] <daidoji1> deathanchor: no I got an error
[18:43:27] <daidoji1> deathanchor: oh yeah probably
[18:43:40] <daidoji1> mongod is 2.6 and my mongo-client is probably 3
[18:43:45] <daidoji1> but most queries work
[18:43:54] <daidoji1> just not that particular explain or mongoexport for some reason
[18:43:55] <deathanchor> keyword... most
[18:44:12] <daidoji1> deathanchor: why is that?
[18:44:19] <deathanchor> you can use older client with newer server, but not vice-versa
[18:44:44] <daidoji1> isn't it usually the other way around?
[18:45:13] <deathanchor> the server is backward compatible to older clients, but clients aren't backwards compatible to older servers
[18:45:31] <deathanchor> I believe it is documented somewhere that way
[18:46:26] <daidoji1> deathanchor: okay I'll take your word on it but it seems weird they would make that decision
[18:47:29] <deathanchor> it makes it safe to upgrade your servers then your clients
[18:48:42] <daidoji1> and thats why its weird. There's nothing I want to do more with an enterprise grade database server full of my business's most important information than upgrade it all the time.
[18:48:49] <daidoji1> nobody's ever had a problem doing that before
[18:49:04] <daidoji1> whereas clients are lightweight and can be replaced easily in the typical scenario
[18:51:48] <daidoji1> deathanchor: anyways thanks for your help, that probably is the issue come to think of it
[18:52:01] <deathanchor> I'm not sure, but if I have issues, I usually verify that I'm using like-for-like versions of server/client.
[18:52:23] <deathanchor> if I can replicate it with that, then I know my setup or command is wrong.
[18:52:39] <deathanchor> daidoji1: also it is quiet here during the summer months.
[18:52:41] <deathanchor> on fridays
[18:52:59] <daidoji1> yeye
[18:53:14] <daidoji1> deathanchor: oh do you work for Mongo? Down in the Palo Alto?
[18:53:38] <daidoji1> or did you mean this channel?
[18:53:51] <deathanchor> this channel. I don't work for mongo
[18:54:28] <daidoji1> ahhh yeye
[18:54:35] <daidoji1> Fox News eh?
[18:54:39] <daidoji1> what are they using Mongo for?
[18:55:19] <deathanchor> statistical and tracking info.
[18:55:59] <daidoji1> hh
[18:59:45] <deathanchor> for example you are _id : ObjectId("5594c7141fde007c80c6b84f") in our people.ToBrainWash db/collection.
[21:29:26] <rusty78> Hey guys
[21:30:20] <rusty78> If I want to find the top 10 highest numbers in a specific field in a collection of documents is there a built-in query for this?
[21:30:40] <rusty78> like I have 100 documents in a collection all with the field prices, and I want to get the 10 highest prices
[21:59:37] <cheeser> rusty78: sort on that field, descending. limit 10.
[22:00:03] <rusty78> Oh DUH, got it
[22:00:05] <rusty78> Thank you :)
[22:00:41] <cheeser> np
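cheeser's answer, sketched in plain JavaScript over made-up documents; the shell equivalent would be along the lines of `db.coll.find().sort({price: -1}).limit(10)`:

```javascript
// Top 10 prices: sort descending, take the first 10.
const docs = Array.from({ length: 100 }, (_, i) => ({ price: i + 1 }));

const top10 = [...docs]
  .sort((a, b) => b.price - a.price) // descending by price
  .slice(0, 10)                      // limit 10
  .map((d) => d.price);

console.log(top10); // [ 100, 99, 98, 97, 96, 95, 94, 93, 92, 91 ]
```

With an index on the sorted field, the server can satisfy the sort and limit without scanning the whole collection.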
[22:16:13] <rusty78> One more question if anyone is here
[22:16:41] <rusty78> is it a bad idea to index a collection of sessions, since documents get added and removed so quickly?
[22:20:15] <cheeser> to store session state?
[22:29:38] <rusty78> Sorry someone was at the door
[22:29:54] <rusty78> it's actually to store socket connections - basically when a user enters the website they connect to a socket
[22:30:09] <rusty78> I store their socket.id and username as a document in the collection, and when they leave I delete it
[22:30:17] <rusty78> However I reference these sockets rather often in the site
[22:30:37] <rusty78> but at the same time they get created/removed very fast, so would indexing cause a large performance drop if I used one?
[22:46:59] <falieson> I want to build my own analytics, reporting, and logging features for my app but I'm trying to decide if I should keep it in the same db? AnalyticsPages AnalyticsUsers AnalyticsEvents AnalyticsLogs or should I just have 1 collection Analytics? and do something like record_type: "pages, users, logs, events,"
[22:48:41] <falieson> i realize there's multiple questions about should I have the app and the analytics share the db, and if i should split parts of the analytics module into their own collections
[23:10:30] <Torkable> working on aggregate query
[23:10:55] <Torkable> I want to get the sum of a field on a list of docs
[23:11:19] <Torkable> and the avg of a field that is in an array of sub docs on each doc
[23:11:27] <Torkable> so I unwind the array
[23:11:43] <Torkable> but this makes it difficult to to do the sum on the other fields
[23:11:57] <Torkable> should I just perform two separate queries?
[23:41:48] <daidoji1> Torkable: in my experience probably
[23:42:12] <Torkable> k, that’s what I ended up doing
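Why the `$unwind` breaks the sum: after unwinding, each document's top-level field repeats once per subdocument, inflating any `$sum` over it. A plain-JavaScript sketch (hypothetical field names) of the two passes Torkable ended up running:

```javascript
// Sum a top-level field, average a field inside an array of subdocs.
const docs = [
  { total: 10, items: [{ score: 1 }, { score: 3 }] },
  { total: 20, items: [{ score: 5 }] },
];

// Pass 1: sum of the top-level field (no unwind needed).
const sumTotal = docs.reduce((s, d) => s + d.total, 0); // 30

// Pass 2: average over the "unwound" subdocuments.
const scores = docs.flatMap((d) => d.items.map((i) => i.score));
const avgScore = scores.reduce((s, v) => s + v, 0) / scores.length; // 3

console.log(sumTotal, avgScore);
```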
[23:42:17] <daidoji1> falieson: mongo works best when schema is denormalized
[23:42:34] <daidoji1> falieson: however, it depends on how you're querying all that data
[23:42:52] <daidoji1> and what your CRUD footprint looks like on that data
[23:44:10] <falieson> daidoji1: it being analytics-type data, it's mostly write once and view aggregates