PMXBOT Log file Viewer


#mongodb logs for Monday the 7th of November, 2016

[00:08:25] <TehGrub> question
[00:08:34] <TehGrub> how to sort after a $project result
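A sketch of what TehGrub is asking about, using a hypothetical "scores" collection with numeric fields "a" and "b": sorting after a $project just means placing a $sort stage next in the pipeline, keyed on a field the $project stage emitted.

```javascript
// $sort can follow $project and sort on the projected field directly.
const pipeline = [
  { $project: { name: 1, total: { $add: ["$a", "$b"] } } },
  { $sort: { total: -1 } } // descending sort on the projected "total"
];
// In the mongo shell: db.scores.aggregate(pipeline)
```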
[01:25:52] <Cloudflare> Is it necessarily bad to use an associative table in MongoDB to model many-to-many relationships?
[01:26:01] <Cloudflare> I've been reading up on it and have been finding very mixed things
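The two common shapes being weighed here, sketched with hypothetical student/course data: embedding an array of references on one side, versus a separate associative collection closer to a relational join table.

```javascript
// 1) Embed an array of references on one side of the relationship:
const student = { _id: "s1", name: "Ada", courseIds: ["c1", "c2"] };

// 2) A separate "associative" collection, like a relational join table:
const enrollment = { studentId: "s1", courseId: "c1", enrolledAt: new Date() };
```

The embedded form saves a query on the common read path; the join collection tends to work better when the relationship itself carries data (like enrolledAt) or when the embedded arrays could grow without bound.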
[08:03:01] <deever> i can't find instruction on how to update a mongodb installation (from 2.6.x to 3.2.x). are there any resources?
[08:38:34] <kali> deever: there are, in the release notes, but for such a gap, you may need to go through one or more intermediate versions
[15:33:12] <rusey> users.update({"_id": user_id}, {$set: {"memorise_data": mem_data}}); does not work when mem_data is an object {"a": {...}} where there is already an object {"a":{...(different)}}
[15:33:20] <rusey> what can i do to make it work
[15:37:41] <rusey> helooo?
[15:39:54] <cheeser> $set will overwrite any value that's already there for the given key.
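A sketch of the usual workaround for rusey's case: since $set replaces the entire value at "memorise_data", merging instead means targeting nested paths with dot notation. Field and variable names below are taken from the question; the merge loop is an illustration, not the only way.

```javascript
// Setting "memorise_data" wholesale overwrites the existing object.
// Building dot-notation paths updates only the keys present in memData:
const memData = { a: { score: 1 } };
const update = { $set: {} };
for (const key of Object.keys(memData)) {
  update.$set["memorise_data." + key] = memData[key];
}
// users.update({ _id: user_id }, update)
// -> only "memorise_data.a" is replaced; other keys under memorise_data survive
```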
[15:56:11] <marianogg9> test
[15:56:25] <cornfeedhobo> hello. i am running 2.6 (i know, i know, the plan is to update "soon") and taking a look at repairDatabase() to try and reclaim some disk space. however, reading the docs has me a bit confused. https://docs.mongodb.com/v2.6/reference/command/repairDatabase/#behavior says --repair will fail if not run on the primary, and db.repairDatabase() says that it has the same effect as --repair, but the docs also say the command is now available for
[15:56:26] <cornfeedhobo> secondary and primary.. much confuse.
[15:57:00] <cornfeedhobo> does anyone know what's the truth here and what is the best strategy to minimize downtime and run repairDatabase to reclaim disk space?
[16:42:12] <Swahili> Q: I need to create a basic backend for an app; thinking about mongodb and expressjs; I'm looking at https://expressjs.com/en/guide/database-integration.html#mongo
[16:42:39] <Swahili> but if I run it I get MongoError: failed to connect to server [localhost:27017] on first connect
[16:42:58] <Swahili> Do I need to install mongodb globally ? let's say in osx brew install mongodb
[16:43:14] <Swahili> or any other dependencies I should know about ?
[16:43:15] <Swahili> Thanks
[16:43:57] <cheeser> you need a mongod running on 27017
[16:46:18] <Swahili> cheeser: thanks for looking!
[16:46:36] <Swahili> But are these instructions here enough to have mongodb running ?
[16:46:37] <Swahili> https://expressjs.com/en/guide/database-integration.html#mongo
[16:46:43] <Swahili> npm install mongodb ?
[16:47:24] <Swahili> or do I need to run mongodb in the background, as in install mongo db (in my case I'm in osx, with homebrew, to do brew install mongo & run the command mongod )
[16:47:27] <Swahili> ?
[16:48:06] <Derick> you need both the npm install, as well as the database itself
[16:48:52] <cheeser> the npm install is just the client libraries, i believe. you still need the server running.
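A minimal sketch of what "both" means here: the npm package is only the client library; a separate mongod server has to be listening before the connect call succeeds. The URL assumes the default local port from the error message above, and the driver calls are shown commented since they need that server running.

```javascript
// Client side: `npm install mongodb` gives you the driver.
// Server side: install and start mongod (on macOS: `brew install mongodb`,
// then run `mongod`). Only then will this connect succeed:
const url = "mongodb://localhost:27017/test";

// const { MongoClient } = require("mongodb");
// MongoClient.connect(url, (err, db) => {
//   if (err) throw err; // "failed to connect" means no mongod is up
//   console.log("connected");
//   db.close();
// });
```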
[16:49:11] <Swahili> Derick: cheeser ok awesome! thanks for looking : )
[16:49:48] <Swahili> Yeah, it's working now! Thanks
[17:00:59] <StephenLynx> Swahili, I recommend not using express.
[17:01:19] <StephenLynx> it eats up at least 20% of your performance.
[17:01:22] <Swahili> StephenLynx: Koajs ?
[17:01:36] <StephenLynx> I wouldn't use a web framework at all.
[17:01:53] <StephenLynx> they hog resources, add vulnerabilities, and steepen the learning curve.
[17:02:01] <Swahili> Ok
[17:02:15] <cheeser> https://raygun.com/blog/2016/06/node-performance/
[17:03:18] <Derick> that's... significant
[17:03:48] <StephenLynx> yup
[17:04:15] <StephenLynx> I didn't come to the conclusion that web frameworks are cancer on a whim.
[17:12:29] <cheeser> but then, if performance mattered that much you wouldn't be using javascript. :)
[17:27:26] <cornfeedhobo> anyone with repairDatabase experience?
[17:48:23] <blizzow> I've set up a new replica set, foo and have an old sharded cluster, bar. I'm trying to run mongo-connector to pull the data from the sharded cluster to the new replica set. I installed mongo-connector on a router and keep getting:
[17:48:25] <blizzow> OperationFailure: database error: not authorized for query on local.oplog.rs
[18:00:00] <blizzow> So I created a maintenance user that has roles to adminAny and readwriteAny in the admin database on both the replica set and the sharded cluster. Now when I try and run the mongo-connector, it fails during collection dump and says:
[18:01:01] <blizzow> OperationFailed: not authorized on statistics_production to execute command { update: "direct_statistics_by_account", ordered: true, updates: 1000 }
[18:02:01] <blizzow> Is there a reason the mongo-connector needs to update the source collection?
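A hedged sketch of a connector user for blizzow's situation: the "not authorized for query on local.oplog.rs" error suggests the user also needs read access to the "local" database, which the *AnyDatabase roles do not cover. The exact role set is an assumption to verify against the mongo-connector documentation, not a known-good recipe.

```javascript
// Candidate user document for the source cluster (shell syntax):
const connectorUser = {
  user: "connector",
  pwd: "changeme", // placeholder
  roles: [
    { role: "readWriteAnyDatabase", db: "admin" },
    { role: "read", db: "local" },          // for tailing local.oplog.rs
    { role: "clusterMonitor", db: "admin" } // for replica-set status checks
  ]
};
// In the mongo shell: db.getSiblingDB("admin").createUser(connectorUser)
```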
[18:25:10] <whirlibulf> what is there to stop an unauthorized user from calling addShard on one of my db servers?
[18:55:49] <cheeser> whirlibulf: you can define auth and roles to restrict such things.
[19:03:09] <qswz> is it possible to access the connection string of a DB after it was connected?
[19:03:58] <qswz> getName() it seems
[19:05:19] <qswz> damn, doesn't work on nodejs
[19:35:00] <marianogg9> hi guys, i'm facing a weird situation with node + mongo in aws (mongo 3.0.2, node 6.x): i've cloned a node + mongo stack to another instance(s) and run the same app against the same db content
[19:36:06] <marianogg9> the same simple db.collection.find(field: true) is taking around 600k ms in the new stack, and less than 100ms in the old one, same app, same db content
[19:37:25] <marianogg9> and any query against that particular collection takes around 600k ms to complete, any of the other collections respond just fine
[19:37:30] <marianogg9> any ideas why?
[19:38:05] <marianogg9> *same query run from local shell instantly retrieves results
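One plausible way to debug this, sketched with hypothetical collection and field names: compare the query plan on both stacks. A COLLSCAN on the new instance versus an IXSCAN on the old one would mean an index on that collection did not make it across in the clone, which would fit both the 600k ms latency and the load spike.

```javascript
// The slow query, as described above:
const query = { field: true };
// On each stack, inspect the plan (works on mongo 3.0):
// db.collection.find(query).explain("executionStats")  // check "stage"
// If the new stack reports COLLSCAN, recreate the missing index:
// db.collection.createIndex({ field: 1 })
```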
[19:38:38] <lukedaniel> hey all - anyone know a way to purge bad connections from the mongoid connection pool?
[19:39:58] <marianogg9> and db instance load reaches 8 or 9 while these queries are being run
[19:44:37] <cornfeedhobo> hello. i am running 2.6 (i know, the plan is to update "soon") and taking a look at repairDatabase() to try and reclaim some disk space. however, reading the docs has me a bit confused. The docs say --repair will fail if not run on the primary, and db.repairDatabase() says that it has the same effect as --repair, but the docs also say the command is now available for secondary and primary.. Does anyone know what's the truth here and what
[19:44:39] <cornfeedhobo> is the best strategy to minimize downtime and run repairDatabase to reclaim disk space?
[19:59:34] <qswz> 2.6? that's like 4 years old no?
[20:00:16] <qswz> I'd like my company to use 3.2 on prod
[20:00:33] <qswz> is it a decent request? it's available everywhere
[20:01:13] <cheeser> cornfeedhobo: is it an option to just do a rolling upgrade by adding new nodes to the replSet and decommissioning the old ones after the sync is complete?
[20:03:35] <qswz> you can run nodes with different versions?
[20:03:42] <cheeser> sure
[20:03:59] <cheeser> it's not recommended to run that way long term but that's how rolling upgrades work.
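The rolling upgrade cheeser describes, sketched as the shell commands involved (hostnames are hypothetical). Mixed versions coexisting temporarily is exactly what makes the procedure possible.

```javascript
// Run against the primary; repeat add/wait/remove until every member
// runs the new version. Listed as strings since each is a shell command:
const steps = [
  'rs.add("newbox1.example.com:27017")',    // add a new-version member
  "rs.status()",                            // wait for it to reach SECONDARY
  'rs.remove("oldbox1.example.com:27017")'  // then retire an old member
];
```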
[20:04:57] <qswz> ok, good to know
[20:24:12] <cornfeedhobo> cheeser: with regard to upgrading node version, or running repairDatabase() ?
[20:24:36] <cheeser> que?
[20:25:25] <cornfeedhobo> cheeser: my goal is to reduce the on-disk data size
[20:25:36] <cornfeedhobo> or, attempt to, and see what reduction can be made
[20:31:50] <cornfeedhobo> cheeser: we'll be doing a rolling upgrade in a matter of months, just pricing new boxes out. in the meantime, i'd like to better understand this disk usage issue.
[20:37:02] <cornfeedhobo> we have added new replica members and they use the same amount of space, but the dump is a lot smaller, which made me research repairDatabase(), but the docs are a bit fuzzy
[22:21:10] <takeshix> hi, I'm using mongo-connector to replicate a database to elasticsearch, the main application that uses the database is written with asyncio and uses the motor driver. How can I create a date field (currently using datetime.datetime.now()) that will properly be propagated to elasticsearch with mongo-connector? it always uses the default with millis, but it stores the unix timestamp in seconds. thanks
[22:21:16] <takeshix> in advance!
[23:30:28] <Absolome> I want to aggregate this dataset of tournament entries so that each identical name is grouped together (the easy part), and so that one field is an array representing each unique month that that name entered a tournament in. So if John Smith has entries in august, september, and november, it would output 'months_active': [8, 9, 11]
[23:30:50] <Absolome> anyone around that could help me figure that out? (if the question made any sense)
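A sketch of the aggregation Absolome describes, assuming each entry document has a "name" field and a BSON date field "date": $month extracts 1-12 from the date, and $addToSet collects each distinct month once per name.

```javascript
const pipeline = [
  {
    $group: {
      _id: "$name",
      months_active: { $addToSet: { $month: "$date" } }
    }
  }
];
// db.entries.aggregate(pipeline)
// Entries for John Smith in August, September and November would group to
// { _id: "John Smith", months_active: [8, 9, 11] } (set order not guaranteed).
```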