#mongodb logs for Wednesday the 21st of September, 2016

[00:54:31] <jayjo> I'm trying to convert all of my epoch timestamps to ISODate in my mongodb database... I have a script like this: https://bpaste.net/show/acb2b8b53adf
[00:54:54] <jayjo> Can I just paste these into the mongo shell, or is there another way to launch a javascript script?
[00:55:32] <jayjo> oops there's an error in there... betterDateStr is now epoch... other than that I think it should work
[01:29:12] <jayjo> Something is wrong in this script, because it just finished executing and not a single document has been altered
[01:30:01] <jayjo> any ideas? It did run for a while, though. Just no output and querying the database now is the same
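
A minimal sketch of the kind of conversion jayjo describes, runnable in the mongo shell. The collection and field names (siteEvents, _t) are borrowed from his later messages, and the script assumes _t holds epoch seconds:

    // hypothetical names; Date() expects milliseconds, _t holds seconds
    db.siteEvents.find({ _t: { $type: 1 } }).forEach(function (doc) {
      db.siteEvents.update(
        { _id: doc._id },
        { $set: { _t: new Date(doc._t * 1000) } }
      );
    });
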
[02:51:32] <j-robert> hi all, I've been at this for a couple of hours now. finally decided I'm going to throw in the towel and ask for a bit of direction on this.
[02:51:41] <j-robert> is there any place that I can find some help?
[02:51:42] <j-robert> :)
[02:54:44] <joannac> j-robert: might help if you explained what you need help with
[02:55:20] <j-robert> joannac: haha, but of course.
[02:55:33] <j-robert> sorry been up since 3 am, and it's 10pm now, lol.
[02:55:41] <j-robert> joannac: http://snippi.com/s/pueip8a
[02:56:21] <j-robert> so, I can't get that to work. :(
[02:56:34] <j-robert> sorry for my lack of verbosity. It's been a long day for me.
[02:57:01] <joannac> I'm not reading 50 lines of code for you
[02:57:08] <joannac> which part doesn't work? the connect part?
[02:57:13] <j-robert> yep!
[02:57:22] <joannac> can you connect using the mongo shell?
[02:57:40] <joannac> can you connect using telnet?
[02:57:45] <j-robert> yep.
[02:58:05] <joannac> from the same server?
[02:58:24] <joannac> i.e. mongo shell to 192.168.175.128:5000 works
[02:58:30] <j-robert> well, hmm no.
[02:58:41] <joannac> ...
[02:58:56] <joannac> so which is it?
[02:59:10] <j-robert> ok well so I'm able to connect to the server via putty.
[02:59:32] <joannac> putty is a program
[02:59:37] <joannac> which port are you connecting to?
[02:59:44] <joannac> are you using ssh or telnet?
[03:00:09] <j-robert> ssh
[03:00:49] <j-robert> I'm able to get mongo db running via ssh.
[03:01:11] <j-robert> i created 2 databases, and everything.
[03:01:28] <j-robert> but once I try to run my node app it fails on me.
[03:01:32] <joannac> okay
[03:01:46] <joannac> ssh and connecting to the mongodb port are not at all the same thing
[03:02:01] <joannac> from your laptop, or wherever you are running your app from
[03:02:17] <j-robert> ok.
[03:02:21] <joannac> using putty, choose telnet, put in your server name 192.168.175.128 and port 5000
[03:02:25] <joannac> what do you get?
[03:02:36] <j-robert> one sec
[03:03:04] <j-robert> ping?
[03:03:22] <j-robert> do you want me to ping my server?
[03:03:58] <j-robert> I get "?Invalid command"
[03:04:12] <joannac> ...
[03:04:14] <j-robert> when I put my server name in telnet
[03:04:15] <joannac> take a screenshot
[03:04:18] <j-robert> ok
[03:05:01] <j-robert> http://i.imgur.com/88MgEEp.png
[03:05:42] <j-robert> maybe I should run my server first?
[03:05:43] <j-robert> lol.
[03:05:53] <joannac> ...
[03:06:10] <joannac> yes, if your mongod is not running you can't connect to it, can you?
[03:06:17] <joannac> because it doesn't exist
[03:06:53] <j-robert> if I try to run my server (node http.js), mongodb throws an error and my server stops running.
[03:07:24] <j-robert> rightfully so.
[03:07:32] <joannac> okay
[03:07:36] <joannac> start your mongod
[03:07:49] <j-robert> ok, it's running.
[03:07:52] <joannac> that should not require any node commands so I have no idea what you mean
[03:08:04] <joannac> which server is it running on? 192.168.175.128?
[03:08:30] <j-robert> how can I know?
[03:08:39] <joannac> you don't know where you started the mongod?
[03:09:28] <j-robert> well, it's running from my vmware linux system started via putty.
[03:10:02] <joannac> how do you connect to the vmware linux system?
[03:10:25] <j-robert> with vmware workstation pro
[03:10:39] <j-robert> I'm on windows 7
[03:11:34] <joannac> so the vmware linux system is running on vmware on your laptop?
[03:11:46] <j-robert> desktop, yep.
[03:12:12] <joannac> can you connect to it, for example via ssh?
[03:12:32] <j-robert> I am right now through putty ssh
[03:12:56] <joannac> so what do you type in putty to connect to it?
[03:13:15] <j-robert> http://i.imgur.com/KRuYYDh.png
[03:13:21] <j-robert> yep.
[03:13:45] <joannac> see how it says jr@192.168.etc?
[03:13:50] <joannac> that is how you connect to it
[03:13:55] <j-robert> ahhh, ok.
[03:14:12] <j-robert> sorry for my noobness, lol.
[03:14:13] <joannac> sorry, you seem to have a basic lack of understanding about how everything works
[03:14:27] <j-robert> yeah, i totally agree.
[03:14:28] <j-robert> T_T
[03:14:35] <joannac> and I don't have time right now to help you through this
[03:14:47] <j-robert> ok, that's understood.
[03:15:06] <j-robert> thanks anyway!
[06:04:11] <laroi> Hey, anybody here had painful experience with findOneAndUpdate method in mongoose?
[06:04:54] <laroi> even if the document exists in the collection, it goes and creates a new document, thereby causing duplication in the database
[07:29:29] <fatninja> hey guys
[07:29:52] <fatninja> let's take this example: I have users and posts collections. Post documents have a "createdAt"
[07:30:11] <fatninja> is there a way to retrieve 5 latest posts for each user in one query ? :D
[07:30:37] <fatninja> meanwhile, I'm studying $group and $aggregate to see if there's a way, but if someone can point me in the right direction it would be great.
[07:54:14] <fatninja> question now is, can I group data and apply sorting not for the aggregation results but actually for the elements I'm planning to group
[08:58:45] <fatninja> solution found to the q above:
[08:58:46] <fatninja> http://pastebin.com/QVXFNXKE
[09:02:01] <laroi> (y)
[09:27:31] <fatninja> the solution found above doesn't actually work, it was a coincidence
[09:27:40] <fatninja> http://pastebin.com/8UD8uYKb - i want to take the last 5 posts for each ownerId in a single query
[09:28:44] <fatninja> ^ better paste: http://pastebin.com/BiLaN7Zd
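
One way to express what fatninja is after, sketched under the assumption of MongoDB 3.2+ (for the $slice expression); the posts collection and the ownerId/createdAt fields follow his description:

    // sort newest-first, collect each owner's posts, keep the first 5
    db.posts.aggregate([
      { $sort: { createdAt: -1 } },
      { $group: { _id: "$ownerId", posts: { $push: "$$ROOT" } } },
      { $project: { posts: { $slice: ["$posts", 5] } } }
    ])
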
[09:35:08] <jokke> hello
[09:36:08] <jokke> is it possible to "floor" an ISODate to a specific portion like minute, hour, day, etc. in an aggregation stage?
[09:37:53] <jokke> i'm trying to aggregate some samples to a lower resolution. I'd need this functionality to make a proper group stage. I know i can extract the individual components of the date and that would let me group the documents. The resulting documents still should have the date field as an ISODate
[09:38:28] <jokke> i'm not sure how/if this is possible
[09:39:30] <jokke> oh wow
[09:39:36] <jokke> i just found the answer
[09:39:38] <jokke> http://stackoverflow.com/questions/22698265/in-mongodb-project-clause-how-can-i-convert-date-from-milliseconds-to-isodate#27828951
[09:39:45] <jokke> quite a hack but fine :D
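
The trick from the linked answer, sketched for an hourly floor (the samples collection, ts field, and bucket size are assumptions): subtracting Date(0) turns an ISODate into milliseconds, $mod drops the sub-hour remainder, and adding the result back onto Date(0) yields an ISODate again.

    db.samples.aggregate([
      { $group: {
          _id: { $add: [ new Date(0), { $subtract: [
              { $subtract: ["$ts", new Date(0)] },
              { $mod: [{ $subtract: ["$ts", new Date(0)] }, 1000 * 60 * 60] }
          ] } ] },
          avg: { $avg: "$value" }   // "$value" is a placeholder sample field
      } }
    ])
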
[11:36:11] <jayjo> I have this script: https://bpaste.net/show/acb2b8b53adf that I ran trying to convert my epoch timestamp (a number) into an ISODate() in the database. It didn't work... it ran for a while but nothing happened. Any idea what else I need to do?
[11:48:21] <jayjo> If I'm using initializeUnorderedBulkOp(), how do I reference the object itself in the $set? Is that even possible?
[11:51:09] <jayjo> I'm looking at this example in the docs: https://docs.mongodb.com/manual/reference/method/Bulk.find.update/#example
[11:51:21] <jayjo> Do I do a .foreach() on the bulk.find()?
[11:51:33] <jayjo> bulk.find().foreach() ?
[12:23:10] <jayjo> I asked a SO question here about the same question in more detail : http://stackoverflow.com/questions/39616232/large-bulk-update-mongodb-foreach-document
[12:24:59] <StephenLynx> tl;dr
[12:25:13] <StephenLynx> whats the use case?
[12:33:36] <jayjo> StephenLynx: sorry, I had referenced the question before you got here... I'm trying to figure out how to reference the document when doing an initializeUnorderedBulkOp()
[12:34:32] <jayjo> now I have var bulk = db.siteEvents.initializeUnorderedBulkOp(); bulk.find( { "_t": { $type : 1 } } ).update( { $set: { <UNSURE HOW TO REFERENCE DOC HERE> } } ); bulk.execute();
[12:34:59] <jayjo> do I need to do a .forEach() after the find and before the update?
[12:35:03] <StephenLynx> no, no
[12:35:06] <StephenLynx> what is the use case?
[12:35:11] <StephenLynx> thats the solution you want to use.
[12:35:22] <StephenLynx> not the problem.
[12:35:43] <jayjo> Oh, I'm trying to take a field that holds seconds since the epoch, stored as a number... and convert it to an ISODate() in the database
[12:36:07] <jayjo> currently I have 110M documents
[12:36:23] <StephenLynx> hm
[12:37:05] <StephenLynx> what I would do is
[12:37:18] <StephenLynx> to not use the shell directly.
[12:37:27] <StephenLynx> and use an asynchronous recursion to iterate them
[12:37:36] <StephenLynx> without risking blowing up the timeout or RAM
[12:38:37] <StephenLynx> cursor timeout*
[12:38:46] <StephenLynx> this is a one time thing, right?
[12:38:54] <jayjo> yes it is
[12:39:03] <StephenLynx> so you are ok if it takes a little longer.
[12:39:11] <StephenLynx> did you get what I mentioned by asynchronous recursion?
[12:39:55] <jayjo> yea if it takes a long time it is not a big deal to me. I understand the concept, but do you mean just write a standalone program in whatever language to do this?
[12:40:02] <StephenLynx> yes.
[12:40:17] <StephenLynx> in my system, I have a db version system.
[12:40:45] <StephenLynx> where the system checks on boot if the db version changed and runs a migration function for each version it has to upgrade to
[12:43:30] <StephenLynx> so if the db is in version 5 and the new version of the software is 7, it will run the functions to upgrade from 5 to 6 and then from 6 to 7
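
A minimal sketch of the batched, asynchronously recursive migration StephenLynx is describing, using the Node.js driver; the collection/field names come from jayjo's snippets and the batch size is arbitrary. Since each converted _t becomes a Date, the { $type: 1 } filter stops matching documents that are already done:

    function migrateBatch(collection, done) {
      collection.find({ _t: { $type: 1 } }).limit(1000).toArray(function (err, docs) {
        if (err || !docs.length) return done(err);   // error, or nothing left to convert
        var bulk = collection.initializeUnorderedBulkOp();
        docs.forEach(function (doc) {
          bulk.find({ _id: doc._id })
              .updateOne({ $set: { _t: new Date(doc._t * 1000) } });
        });
        bulk.execute(function (err) {
          if (err) return done(err);
          migrateBatch(collection, done);            // recurse into the next batch
        });
      });
    }
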
[12:46:05] <zylo4747> I am receiving the following error in the log file and it crashed the MongoDB service (2.6.x) - Fatal DBException in logOp(): 16328 document is larger than capped size
[12:46:13] <zylo4747> how do i troubleshoot this?
[12:46:31] <StephenLynx> its already troubleshooted.
[12:46:39] <StephenLynx> the document is larger than capped size.
[12:47:00] <StephenLynx> if your limit is 16mb, you don't try to store a 16mb document.
[12:47:02] <zylo4747> so there's a capped collection with a max size and they tried to put a single document greater than that size into it?
[12:47:14] <StephenLynx> no.
[12:47:32] <zylo4747> how do i find my capped size?
[12:47:45] <StephenLynx> how large is your document?
[12:48:04] <zylo4747> sorry the end got cut off says 12868248 > 5242880
[12:48:18] <zylo4747> so i guess a ~12 MB document and the capped size is 5 MB?
[12:48:54] <StephenLynx> hm
[12:48:55] <StephenLynx> weird.
[12:49:04] <zylo4747> I see in the config there's an oplogsizemb setting with 5
[12:49:12] <zylo4747> under replication
[12:49:15] <zylo4747> i guess that's the limit?
[12:49:20] <StephenLynx> dunno.
[12:49:25] <StephenLynx> I don't have experience with that.
[12:49:39] <zylo4747> ok
[12:49:44] <zylo4747> well i think you got me in the right direction
[12:49:46] <zylo4747> thanks
[12:51:31] <zylo4747> I guess I need to change that size, but the docs say that once it's set it can't be changed through that setting...
[12:52:36] <StephenLynx> you could upgrade from 2.6
[12:52:40] <StephenLynx> thats quite old.
[12:53:26] <zylo4747> yeah we have it on the project list but haven't quite gotten there yet
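
For reference, two shell commands that confirm what the 5242880 in that error refers to (oplog.rs is the replica-set oplog; legacy master/slave setups use oplog.$main instead):

    use local
    db.printReplicationInfo()   // prints "configured oplog size: 5MB"
    db.oplog.rs.stats()         // capped-collection stats for the oplog
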
[13:02:25] <warlock> I have been trying to wrap my head around this for hours and still haven't figured it out. I would really like if someone could help me with this. I'm trying to aggregate two collections
[13:02:46] <warlock> basically I have "companies" and "subscribers" - so, people can subscribe to companies and a company can have multiple subscribers to it
[13:03:12] <warlock> https://paste.mrfriday.com/awelubowas.coffee <- this is what my documents look like in each collection. How can I get all subscribers for all companies?
[13:03:42] <warlock> basically want to select all companies and list each subscriber in the result (meaning, _id and email)
[13:06:33] <warlock> the goal is to get all subscribers for each company that I can later loop through.
[13:06:46] <StephenLynx> eh
[13:06:55] <StephenLynx> you can't join with mongo.
[13:07:00] <StephenLynx> I mean
[13:07:06] <StephenLynx> there is a new weird feature that I think does that.
[13:07:10] <StephenLynx> but I heard its slow.
[13:07:13] <warlock> it's called aggregate afaik.
[13:07:17] <StephenLynx> nope.
[13:07:21] <StephenLynx> nothing to do with that.
[13:07:36] <StephenLynx> aggregation allows you to manipulate data in various ways before giving it back to you.
[13:07:41] <StephenLynx> a join is a completely different deal.
[13:08:28] <warlock> well, how would the document structure look like in my case? I could probably put all the subscribers in the companies document
[13:09:05] <warlock> but that feels ridiculous and there must be a better way of dealing with this; referencing between collections should be do-able
[13:09:21] <warlock> I see all the "movie" <-> "actors" examples or "book" and "authors"
[13:09:42] <StephenLynx> the structure is fine.
[13:09:45] <warlock> none of them make any sense and I'm tired of seeing all the same, nonsense examples :)
[13:10:03] <StephenLynx> and no, mongo is not designed to reference collections.
[13:10:16] <StephenLynx> that's what relational databases are for.
[13:10:18] <warlock> ok StephenLynx, what is your recommendation on this? I think I am simply thinking wrong
[13:10:39] <StephenLynx> you could not get all the companies' clients at the same time.
[13:10:48] <StephenLynx> most of the times it boils down to how you query the data.
[13:11:07] <StephenLynx> get ONE company, get its clients, do the join in application code.
[13:11:25] <warlock> gotcha, then I am thinking wrong after all.
[13:11:39] <StephenLynx> then when you have to get another company, do it again.
[13:11:52] <warlock> just to verify: with the structure I have, I suspect it would be fully possible to do a count on subscribers for all companies? eg. a top-10 list of companies with the most subscribers
[13:12:00] <warlock> this should be do-able with aggregate, perhaps?
[13:12:00] <StephenLynx> yes.
[13:12:31] <warlock> perfect, then I'm following slowly.
[13:12:36] <StephenLynx> I think you can get a collection's length
[13:12:38] <StephenLynx> let me check it
[13:12:43] <warlock> yeah, you can
[13:12:51] <warlock> (reminds me that I tried that before and it worked fine)
[13:13:16] <StephenLynx> the problem is
[13:13:22] <StephenLynx> that can't be indexed, I guess.
[13:13:34] <StephenLynx> so if you have too many companies, it would take a while to sort it.
[13:13:39] <StephenLynx> and might even blow up available RAM.
[13:13:52] <warlock> could sort it systematically, or on the front-end though, so not a big deal.
[13:14:26] <StephenLynx> or
[13:14:30] <StephenLynx> do a pre-aggregation of that.
[13:14:33] <StephenLynx> and index that field.
[13:14:45] <warlock> gotcha.
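
A sketch of the top-10 query from this exchange, counting the embedded subscribers array with $size; as StephenLynx notes, the computed count can't use an index, which is why a pre-aggregated, indexed counter field scales better (field names assumed from warlock's paste):

    db.companies.aggregate([
      { $project: { name: 1, subscriberCount: { $size: "$subscribers" } } },
      { $sort: { subscriberCount: -1 } },
      { $limit: 10 }
    ])
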
[13:15:07] <warlock> just to get me completely on the right track as I am still not completely following...
[13:15:17] <warlock> so, the structure I have where I have put all my subscribers in an array.
[13:15:38] <warlock> say that I have the company name (name), how would I go about getting _all_ subscribers and their information for that given company?
[13:16:25] <StephenLynx> users.find({_id: {$in: [the company's array of users that you queried before]}})
[13:20:58] <warlock> StephenLynx: Ok, now I got it.
[13:21:01] <warlock> db.subscribers.find( { _id: { $in: [ ObjectId("57dbe1a43901efa873fda907")] } } )
[13:21:06] <warlock> would do it for me
[13:21:09] <StephenLynx> aye
[13:21:30] <warlock> fantastic, it's just the mindset I had to get into. The simple explanation that I had to do two queries made much sense
[13:21:39] <warlock> tried to figure it all out in one query which simply complicated things way too much.
[13:21:42] <warlock> Thanks
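
The full two-query pattern from this thread in one place, assuming the company document embeds an array of subscriber ObjectIds as in warlock's paste:

    var company = db.companies.findOne({ name: "some company" });
    db.subscribers.find({ _id: { $in: company.subscribers } })   // the "join" happens here
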
[13:32:02] <jayjo> So I put a console.log() statement in this script and nothing is logging to the console. I'm just trying to investigate what's happening here... why won't I see any output? https://bpaste.net/show/25f64ca5b121
[13:32:23] <jayjo> I thought I would see a statement logged for each element in the database, because of the forEach()
[13:42:34] <StephenLynx> if I had to take a guess
[13:42:41] <StephenLynx> it isn't finding any documents.
[13:42:44] <StephenLynx> do a count
[13:43:00] <StephenLynx> besides
[13:43:04] <StephenLynx> your code is completely broken.
[13:43:34] <StephenLynx> the bulkwrite will run without anything in it because it is being run in the same cycle as the find.
[13:44:11] <jayjo> oh. well maybe that's what's going on then. How do I only write the bulkwrite once the find is completed?
[13:44:12] <StephenLynx> while the forEach documents will be returned in later cycles.
[13:44:24] <StephenLynx> instead of forEach, use a toArray
[13:44:32] <StephenLynx> and iterate on the array.
[13:44:50] <StephenLynx> and no, thats a different issue.
[13:45:04] <StephenLynx> unless
[13:45:20] <StephenLynx> you are getting an exception on the bulkwrite because the array is empty
[13:45:51] <StephenLynx> in which case everything will die before foreach even has a chance to run.
[13:46:14] <jayjo> well it takes a while for the script to finish running, there is just no output and it doesn't alter any of the data
[13:52:33] <pyios> what is the best way to set up mongodb sharding?
[13:52:54] <cheeser> que?
[13:55:31] <StephenLynx> mongos pyios
[13:55:45] <pyios> ?
[13:55:51] <pyios> what?
[13:56:27] <StephenLynx> https://docs.mongodb.com/manual/reference/program/mongos/
[13:58:50] <pyios> that's just a command-line reference, but it doesn't tell me 'how'
[13:59:03] <StephenLynx> ffs
[13:59:05] <StephenLynx> rtfm
[14:02:47] <jayjo> maybe having so many writes at once (110M) is why this doesn't work, so I tried to push the writes through after 999 iterations, but I'm still not getting the print statement to show anything
[14:03:25] <StephenLynx> show me the new code.
[14:05:19] <jayjo> https://bpaste.net/show/335a63480753
[14:06:22] <StephenLynx> still utterly broken
[14:06:33] <StephenLynx> it will keep trying to run the bulk after it hits 1k
[14:07:12] <StephenLynx> actually, you are emptying the array.
[14:07:29] <StephenLynx> is it printing at least?
[14:07:34] <jayjo> no, nothing
[14:07:45] <StephenLynx> did you do that count I told you to?
[14:08:18] <jayjo> just db.siteEvents.count()?
[14:09:10] <StephenLynx> no
[14:09:16] <StephenLynx> use the query you are using there.
[14:09:24] <StephenLynx> to see if the query is returning anything to begin with.
[14:09:50] <jayjo> just drop the forEach() and replace with count()? and print outside of that?
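
For the record, the count StephenLynx is asking for reuses the bulk op's filter verbatim:

    db.siteEvents.count({ "_t": { $type: 1 } })   // 0 here would explain the silence
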
[14:41:07] <StephenLynx> rtfm
[14:52:32] <pyios> how do I get the slow query history?
[14:56:03] <cheeser> https://docs.mongodb.com/manual/tutorial/manage-the-database-profiler/
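
A minimal version of what the linked tutorial walks through (the 100 ms threshold is arbitrary):

    db.setProfilingLevel(1, 100)                          // profile ops slower than 100 ms
    db.system.profile.find().sort({ ts: -1 }).limit(5)    // most recent slow operations
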
[15:08:26] <spleen> Hello All
[15:09:26] <spleen> How can I execute these 2 mongo commands from the command line: use db; db.dropDatabase()?
[15:10:02] <cheeser> come again?
[15:10:05] <spleen> i tried ===> echo 'use uniques ; db.dropDatabase()' | mongo, but it doesn't work
[15:10:15] <StephenLynx> remove the '
[15:10:40] <cheeser> echo 'db.dropDatabase()' | mongo uniques
[15:18:59] <spleen> cheeser, thank you! Could you give me an example with the --eval option?
[15:19:57] <cheeser> a what now?
[15:20:08] <StephenLynx> kek
[15:55:21] <spleen> cheeser, i just would like an example using the --eval arg
[15:55:49] <cheeser> an example of?
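
Presumably spleen wants the --eval form of the earlier dropDatabase one-liner, which would look like this:

    mongo uniques --eval 'db.dropDatabase()'
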
[15:58:03] <chris613> silly question, but when adding auth to a mongo URI it's mongodb://user:pass@1.2.3.4:27017,9.8.7.6:27017/ and I don't need to add the user:pass@ to each replica's address, right?
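
That is correct: the credentials appear once, before the host list, and apply to the whole seed list. A sketch (the database and replicaSet names are placeholders):

    mongodb://user:pass@1.2.3.4:27017,9.8.7.6:27017/mydb?replicaSet=rs0
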
[16:30:00] <zylo4747> do you know if it's necessary to insert the last oplog record if you resize the oplog using these instructions or can i just skip that part? https://docs.mongodb.com/v2.6/tutorial/change-oplog-size/
[17:46:24] <kuku1g> hi, does someone know how to change the log level of the mongodb java driver? i've followed several stackoverflow answers and the like but nothing seems to work for me. Every insert creates 2 log outputs (debug level). I want it to stop printing these messages:
[17:46:30] <kuku1g> https://ybin.me/p/31ea4f7a63483a86#RRJ4Y4JR3YxZWtidOGszopF+6vmVHW49NVxKzHGycbg=
[18:11:19] <lauriereeves> The mongodb-org-server rpm for Redhat 7 crashes Anaconda because the package calls groupadd in its %pre scriptlet, which fails if groupadd hasn't been installed yet.
[18:11:30] <lauriereeves> A simple workaround would be to add "Requires(pre): /usr/sbin/groupadd" to the spec file for that rpm
[18:11:46] <lauriereeves> How could I get that change made?
[18:32:59] <Doyle> Hey. These entries show up in a config server log. Rsync the /data dir from one of the others? [conn54] Assertion: 17322:write to oplog failed: DocTooLargeForCapped document doesn't fit in capped collection. size: 144 storageSize:5242880 @ 28575 and STORAGE [conn69] couldn't make room for record len: 156 in capped ns local.oplog.$main
[19:03:24] <n1colas> Hello
[19:05:38] <Doyle> I'm seeing this behaviour in the few seconds the service starts before it dies.
[19:06:19] <Doyle> db.oplog.$main.count() shows a number, as expected, but find() returns nothing, and findOne() issues 'null'
[19:06:31] <Doyle> hi n1colas
[19:07:42] <n1colas> hi Doyle
[19:37:55] <AlmightyOatmeal> if i want to search for documents that have a field with a list of acceptable values, would i use something like: {"field": {$in: ["val1", "val2", "val3"]}} or is there a better way?
[19:42:35] <Doyle> AlmightyOatmeal, looks good
[19:42:53] <Doyle> Anyone have any ideas about these logs on a config server? [conn54] Assertion: 17322:write to oplog failed: DocTooLargeForCapped document doesn't fit in capped collection. size: 144 storageSize:5242880 @ 28575 and STORAGE [conn69] couldn't make room for record len: 156 in capped ns local.oplog.$main
[19:43:28] <Doyle> The mongod service starts for a few seconds. In that time db.oplog.$main.count() shows a number, as expected, but find() returns nothing, and findOne() issues 'null'
[19:52:55] <speedio> i have two schemas, user and roles.. when i create a reference to the roles schema, do i need to create a new document for each reference, or can i just have two documents (client, admin) in the roles schema and then reference their ids?
[19:57:51] <AlmightyOatmeal> Doyle: thanks :)
[20:00:05] <synthmeat> speedio: what does a "role" contain? you could possibly consider just setting the role directly in user, with no reference to another document
[20:02:40] <nbettenh> We've gone from 3 config servers to 1, the other two are reported as being out of sync. What is the method for getting all three back in sync?
[20:02:47] <speedio> synthmeat: i have role directly in user now, but i need to have it separated, so i can create new roles with descriptions..
[20:03:52] <synthmeat> speedio: yeah, you don't need to have unique role per role referenced in user
[20:07:36] <Doyle> nbettenh, ew. Get on them and run: use config; db.runCommand("dbhash"). Compare the three results. Hopefully two match.
[20:07:50] <speedio> synthmeat: ok, im not sure how to reference it when i create a user, so now i have this in my schema (mongoose): "roles: [{ type: Schema.Types.ObjectId, ref: 'Roles' }]". can i just insert the object id into the roles value when creating a user, and then do a populate query or whatever it's called?
[20:08:17] <Doyle> nbettenh, before doing anything beyond that, backup your dbpath from all 3 config servers.
[20:10:05] <Doyle> Then look at this doc. https://docs.mongodb.com/v3.0/tutorial/replace-config-server/ (for whatever version you're on.)
[20:10:57] <synthmeat> speedio: to be perfectly honest, i regret using mongoose in the first place, and i'd just use it as [{type: String, _id: false}]
[20:13:49] <speedio> synthmeat: ok, so then i can reference by string instead of id?
[20:14:08] <synthmeat> yes
[20:16:10] <synthmeat> speedio: possibly you'd still want [{type:ObjectId, _id: false}], if you don't want to end up needing to convert ObjectId from roles to .toString() so comparison doesn't fail with string
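
A minimal sketch of the reference-and-populate flow speedio is asking about, assuming mongoose; the model names are made up and adminRoleId stands in for an existing role's _id:

    var mongoose = require('mongoose');
    var Schema = mongoose.Schema;

    var User = mongoose.model('User', new Schema({
      name: String,
      roles: [{ type: Schema.Types.ObjectId, ref: 'Roles' }]
    }));

    // store the role's ObjectId at creation time, then populate to read it back
    User.create({ name: 'bob', roles: [adminRoleId] }, function (err, user) {
      User.findById(user._id).populate('roles').exec(function (err, populated) {
        console.log(populated.roles); // full role documents, not just ids
      });
    });
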
[21:17:14] <surge_> If I ctrl+c a compound index build, what is the behavior of mongo supposed to be?
[21:17:27] <surge_> I see the compound index when i run `getIndexes()` but I'm not sure if it ran to completion or not.
[21:17:41] <surge_> I did a `currentOp()` and saw that it got to at least 44% but didn't check on it until later and then I didn't see the process anymore, so i'm not sure if it finished or was finally killed.
[21:35:12] <cheeser> i believe the index build will continue. ctrl-c just kills the shell, which is a separate process that's simply waiting on a response from the server
[21:40:40] <surge_> cheeser: okay thanks
[21:55:58] <Auz> does anyone have experience migrating replica sets up from very old versions (2.2) to the 3 branch? The docs say that it should be compatible and also to upgrade to 2.4, then 2.6 then 3.0 then 3.2...
[21:58:47] <cheeser> Auz: that's correct.
[22:00:38] <ankurk> How can I check the number of files opened by Mongo while indexing?
[22:03:31] <Auz> cheeser: which is correct, that its compatible or that I have to upgrade via the other versions?
[22:04:59] <Auz> if I join a 3.2 mongo node to a replica set running older versions, what is the failure condition? Will the new secondary give errors? Will it break anything else?
[22:30:29] <cheeser> Auz: you could probably add a new replica set member at 3.2 and just do a rolling upgrade.
[22:30:49] <cheeser> once that new node is done sync'ing, decommission one of the old ones
[22:30:55] <cheeser> repeat until you're all done.
[22:33:40] <cheeser> you have some options for speeding that up
[22:34:18] <cheeser> of course, you could upgrade a secondary through the different versions. that'd dramatically reduce your down time.