[00:32:13] <oky> kurushiyama: thanks for the investigation!
[02:52:17] <zsoc> Hey so.. there's some magical method that lets me access document._doc._id as document._id ?
[04:24:14] <zsoc> Mongoose will typecast right? Like if I give '80.00' to a "Number" field?
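Mongoose documents expose getters that proxy to the raw _doc object, which is why document._id and document._doc._id read the same value. Schema paths are also cast on assignment, so a string like '80.00' handed to a Number path comes out as a number. A minimal sketch, with a made-up model name:

    var mongoose = require('mongoose');

    // Hypothetical schema: 'price' is declared as a Number
    var itemSchema = new mongoose.Schema({ price: Number });
    var Item = mongoose.model('Item', itemSchema);

    var doc = new Item({ price: '80.00' });     // string goes in
    console.log(typeof doc.price, doc.price);   // 'number' 80, cast by the SchemaType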
[06:02:37] <zsoc> ok, I'm getting an 11000 on a field that is marked unique in ANOTHER COLLECTION. How does that work? The field has the same name in this collection but it's not modeled as unique in this one
[06:05:07] <zsoc> I have a collection called Customer where 'Email' is a unique (no-dupes) field. I also have a collection called Order. I am setting Email: c.Email when I am creating my Order, because I have my Customer doc already found at that point. When I go to create that Order it throws an 11000 because the Email field is a duplicate. In other words it's telling me I can only have 1 "order" per "customer"
[06:05:16] <zsoc> 11000 error code, it's the err code for duplicate entry
[06:07:04] <zsoc> I mean.. mongo isn't relational by nature.. why would it see that the field in a different collection is set unique?
[06:07:22] <zsoc> When I set Email as c.Email (c being the customer document) is it setting a reference instead of the raw data?
[06:07:46] <zsoc> do i need to use c._doc.Email for just the value?
[06:08:31] <zsoc> holy hell... it's setting them all as references isn't it
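For reference: reading c.Email from a Mongoose document goes through a getter and returns the plain value, not a reference, so an E11000 has to come from a unique index that actually exists on the collection being written to. A common culprit is a stale unique index on Order.Email left over from an earlier schema (Mongoose creates indexes declared in the schema but never drops old ones). A way to check from the mongo shell, assuming the collection is named orders:

    // List the indexes that actually exist on the Order collection
    db.orders.getIndexes()

    // If a leftover unique index on Email shows up, dropping it stops the E11000
    db.orders.dropIndex({ Email: 1 })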
[07:36:48] <mroman> We have two systems (test/production) where we insert files into hdfs and create an index for the files in a mongodb.
[07:37:11] <mroman> Insertion on the production system is 10x slower so we thought it's due to the hdfs and we spent days debugging why hdfs is so slow.
[07:37:25] <mroman> but it turned out that it's the mongodb that is horribly slow on the production system.
[07:39:06] <mroman> (we basically hold a log of files inserted into the mongo)
[07:39:36] <mroman> so creating 100 files on the hdfs without logging those to the mongodb is 2.5s, with logging to the mongodb it's 48s
[07:40:04] <mroman> which means that a single log insert to our db on the production system takes about 0.4s
[07:41:36] <mroman> how do I reasonably time this?
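One common way to time individual operations is the per-database profiler; a small sketch, with an arbitrary slowms threshold:

    // In the mongo shell: record every operation slower than 50ms (level 1)
    db.setProfilingLevel(1, 50)

    // ...run the workload, then inspect the slowest captured operations
    db.system.profile.find().sort({ millis: -1 }).limit(10).pretty()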
[07:52:13] <mroman> hm profiling shows that a find takes about 100 to 150ms
[07:56:42] <mroman> yeah we're doing a find to check if an entry in the log already exists, then we insert an entry, and after the file is completed we update the entry
[07:56:58] <mroman> where each find takes about 150ms, so that's 300ms per file, meaning to insert 100 files we're spending 30s on just doing find queries
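If the pre-insert find only guards against duplicate log entries, an upsert removes that extra round trip, and an index on the lookup field keeps the implicit match fast. A sketch with made-up collection and field names:

    // Index the field the existence check uses
    db.filelog.createIndex({ path: 1 }, { unique: true })

    // Insert-or-update in one round trip instead of find-then-insert
    db.filelog.update(
        { path: "/data/some/file" },
        { $set: { status: "started", startedAt: new Date() } },
        { upsert: true }
    )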
[08:07:33] <mroman> Can you do hashed indexes on compound keys?
[09:32:34] <arussel> is there a way to deep clone an object inside the mongo shell ?
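One low-tech option, since the shell is plain JavaScript underneath, is a JSON round trip; note that BSON types such as ObjectId and Date do not survive intact, so it only suits simple documents:

    var original = db.things.findOne();   // collection name is an example

    // Deep copy: mutating 'copy' leaves 'original' untouched
    var copy = JSON.parse(JSON.stringify(original));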
[09:53:03] <Keksike> Hey. I need to log all mongodb operations of a specific DB to a log file. Could someone walk me through this? I tried reading https://docs.mongodb.com/v3.0/reference/log-messages/ but tbh I didn't quite understand it all.
[09:53:26] <Keksike> so I set the db logLevel to 1
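If the goal is capturing every operation against one specific database, the per-database profiler is often simpler than raising the global log verbosity; a sketch, with the database name as a placeholder:

    // In the mongo shell, on the database you want to capture:
    use mydb
    db.setProfilingLevel(2)        // 2 = record every operation in mydb.system.profile

    // Read (or export) the captured operations from the capped collection
    db.system.profile.find().sort({ ts: -1 }).limit(20).pretty()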
[14:03:59] <mroman> there are also prebuilt binaries available, if nothing else works
[14:05:23] <AxD79> what do u mean by prebuilt binaries?
[14:07:53] <mroman> exactly that :). If there are no packages for your distro, or they don't work for your distro you can download the binaries and install them without a package manager.
[14:21:27] <mroman> no, the binary install doesn't use any repository/package manager
[14:21:44] <AxD79> so maybe in my situation that might be better?
[14:22:50] <mroman> Well I personally would probably try https://repo.mongodb.org/zypper/suse/12/mongodb-org/3.2/x86_64/ first but I don't really know suse :D
[14:23:05] <mroman> I also have no idea what tumbleweed is :)
[14:23:45] <mroman> but installing by binary should generally work pretty easily.
[14:24:05] <mroman> but you will have to set up init scripts etc. yourself if you want mongo to start at linux startup
[14:24:13] <mroman> or whatever your distro uses to start daemons on startup
[14:25:22] <mroman> (and distros will usually ship a default config to get things going and create folders for the mongo logs and db, which you have to set up manually when installing the binaries from the tarball)
[14:25:30] <mroman> (but creating folders isn't really hard so ... )
[21:01:43] <JWpapi> Is it worth using mongoose? I don't find the standard methods too complicated, and you're not as bound to the schemas
[21:07:52] <kurushiyama> JWpapi: Avoid, in that case
[21:08:42] <JWpapi> oh ok. But JS is by far my best language, so I'll probably stick with it
[21:09:31] <kurushiyama> JWpapi: Do not get me wrong: to each his own, it is just a matter of personal requirements. I use Go, which is probably diametrical to node ;)
[21:12:21] <JWpapi> man you really kill me. I have to google diametrical now
[21:17:42] <kurushiyama> JWpapi: The opposing sides of one possibility.
[21:19:01] <varunkvv> Hi, is there a way to restrict certain operations on mongo. I'd like to restrict accidental dropping or deletion of data somehow.
[21:23:20] <kurushiyama> varunkvv: Yes. In your application.
[21:24:19] <kurushiyama> varunkvv: Of course, you can use Client Authentication
[21:25:13] <kurushiyama> varunkvv: But even the most basic access allows dropping documents, iirc.
[21:26:50] <kurushiyama> varunkvv: Nope, it is more granular than I thought. Have a look at https://docs.mongodb.com/v3.0/reference/system-roles-collection/#a-user-defined-role-specifies-privileges
[21:27:42] <kurushiyama> varunkvv: So you can define roles that lack the "remove" permissions.
[21:32:53] <varunkvv> beautiful kurushiyama, will read more into that.
[21:33:58] <kurushiyama> varunkvv: Just remember that you need to enable auth and still need an admin ;)
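A sketch of such a role in the mongo shell; the database, role, and user names are examples, and this assumes auth is enabled and you are connected as a user allowed to create roles:

    use mydb
    db.createRole({
        role: "readWriteNoRemove",
        privileges: [{
            resource: { db: "mydb", collection: "" },
            actions: ["find", "insert", "update"]   // deliberately no "remove" or "dropCollection"
        }],
        roles: []
    })
    db.createUser({ user: "appUser", pwd: "secret", roles: ["readWriteNoRemove"] })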
[23:03:51] <fartface> Don't suppose this is the place to ask about Mongoose related q's, yeah?
[23:04:57] <Boomtime> you can ask all you like, but it's true the answers are less likely to be good for you
[23:05:30] <fartface> Basically I used MongoDB with Meteor, but I'm not huge on the direction of Meteor lately, and I'm trying to migrate my app from Meteor to Node, and so in doing that have had to migrate from meteor's mongo implementation to mongoose.
[23:06:18] <fartface> I'm trying to follow http://mongoosejs.com/docs/ <- that, but my object never gets the values.
[23:06:40] <fartface> In retrospect, this is really so much more of a rubber duck question than an actual question so I probably shouldn't waste anyone's time in here lol
[23:07:31] <Boomtime> well, you'll certainly do better if you can show what you've tried and what outcome you get - use a gist/pastebin etc, to show your code and what happens as a result, including mongo shell find() outputs etc
[23:12:47] <fartface> Boomtime: For brevity, https://gist.github.com/anonymous/ead6bfa74f371ed7657164330a8ddf6e
[23:18:06] <Boomtime> also, you don't do much logging - if you have a problem, how will you know where it happens?
[23:19:39] <fartface> I've removed most of my debugging commented out stuff
[23:19:55] <fartface> Basically if I log out anything from newAlert's properties, they come back undefined
[23:20:16] <fartface> It's like they're not set in the Watchdog({}) part of the code.
[23:20:28] <fartface> But they _are_ output properly from the serverAlert part of the code.
[23:21:03] <fartface> So I understand why the save is failing--like, if the values show up as undefined, then it's not going to save a bunk record.
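The gist itself isn't reproduced here, but one common cause of that exact symptom is passing fields to the model constructor that the schema does not declare; Mongoose drops unknown paths silently, so they read back as undefined. A sketch where only the Watchdog name comes from the conversation and the fields are invented:

    var mongoose = require('mongoose');

    // Only paths declared in the schema survive the constructor;
    // anything else is dropped and reads back as undefined
    var watchdogSchema = new mongoose.Schema({
        host: String,
        message: String,
        createdAt: Date
    });
    var Watchdog = mongoose.model('Watchdog', watchdogSchema);

    var newAlert = new Watchdog({ host: 'web-1', message: 'disk full', createdAt: new Date() });
    console.log(newAlert.host);   // 'web-1' once the path is declared

    newAlert.save(function (err, saved) {
        if (err) return console.error('save failed:', err);
        console.log('saved', saved._id);
    });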
[23:24:39] <jiffe> we've been having problems resyncing machines, so we've been rsyncing instead, but the machine is sitting around 95% disk usage because of orphan documents after adding new shards, so I need to get mongodb resyncing working
[23:25:26] <jiffe> right now I'm hitting $err: "BSONObj size: -709431389 (0xD5B6EFA3) is invalid. Size must be between 0 and 16793600(16MB)
[23:25:37] <jiffe> and then replication starts over
[23:27:53] <Boomtime> on which host do you see that error, the source or the destination for initial sync?
[23:28:56] <Boomtime> the error is extremely unlikely to be a fault at the destination, more likely the issue occurs on the source and the destination is merely a victim
[23:29:11] <Boomtime> how many members do you have in the replica-set?
[23:47:42] <jiffe> currently taking up 27TB of 28TB available on these machines
[23:48:06] <jiffe> the other replica set is around 17TB so that is probably where this replica set would be without all the orphan documents
[23:48:12] <Boomtime> write a shell script that enumerates the _id index - it has a smaller chance of being damaged - for each document, write to a file - some will fail, write their _id to another file
[23:48:42] <Boomtime> with the set of damaged _id that the first member couldn't read, dump just that set from another member
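A rough sketch of that script for the mongo shell; the collection name is a placeholder and output would be redirected to files from the command line:

    var bad = [];
    db.mycoll.find({}, { _id: 1 }).hint({ _id: 1 }).forEach(function (idDoc) {
        try {
            // forces a full read of the document body
            var doc = db.mycoll.findOne({ _id: idDoc._id });
            print(tojson(doc));            // the "good" dump
        } catch (e) {
            bad.push(idDoc._id);           // re-dump these from a healthy member
        }
    });
    print("unreadable _ids: " + tojson(bad));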
[23:54:19] <jiffe> so you're thinking I still need to find a way to dump this replica set's collection and start it over?
[23:54:59] <jiffe> so if I run through the list and I find that the number of _id's that I can't read is a small amount, can just those be cleaned up?
[23:55:27] <jiffe> or at least ignored in some manner during this member's resync
[23:56:10] <Boomtime> getting a secondary to ignore certain data from the primary would literally break the purpose of "sync"
[23:57:14] <jiffe> well, since it is breaking at corrupt points, if there are records it can read, that should be fine
[23:57:31] <jiffe> I'm not sure if I have the space to extract this
[23:57:58] <Boomtime> eh? then how can you possibly sync?
[23:59:13] <Boomtime> there is always the --repair start-up option - but you don't know precisely what it will produce, it uses a scorched earth policy to just "make it work at all costs"
[23:59:13] <jiffe> will the extracted data be the same size as the existing database? (no plaintext expansion?)
[23:59:34] <jiffe> plus --repair requires the space to rewrite the database no?