PMXBOT Log file Viewer


#mongodb logs for Tuesday the 16th of August, 2016

[00:38:00] <Ankhers> Does Mongo support returning the object on update?
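[The question above goes unanswered in the log. For reference, MongoDB does support this via findAndModify; a minimal mongo-shell sketch, with made-up collection and field names:]

```javascript
// With new: true, findAndModify returns the document *after* the
// update rather than the pre-update version.
db.users.findAndModify({
  query:  { _id: 1 },
  update: { $inc: { visits: 1 } },
  new:    true   // return the updated object
})
```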
[01:28:49] <virtualsex> bin/mongos --configdb 192.168.0.168:27019,192.168.0.154:27019,192.168.0.134:27019 --logpath /data/logs/mongos.log –fork
[01:28:54] <virtualsex> this command doesn't work
[01:29:08] <virtualsex> too many positional options have been specified on the command line
[01:34:41] <virtualsex> help?
[01:51:47] <virtualsex> why mongos need repl set name?
[01:51:58] <virtualsex> one mongos for one set?
[01:52:05] <virtualsex> doesn't make sense
[01:53:29] <cheeser> isn't --fork?
[03:18:40] <virtualsex> ERROR: child process failed, exited with error number 5
[03:18:44] <virtualsex> what's that mean
[03:18:56] <joannac> check the logs?
[03:21:37] <virtualsex> Error initializing sharding system: ConfigServersInconsistent: hash from 192.168.0.168:27019: {} vs hash from 192.168.0.154:27019: { chunks: "d41d8cd98f00b204e9800998ecf8427e", databases: "f546aa21b34ae5d59f9a10fb0f450ecf", shards: "e684d87735e66e9c79fbb7692eeb92e3", version: "4b5568f6a724d1b6b10d3477c4cf7f28" }
[03:22:08] <joannac> there you go. your config servers are inconsistent
[03:22:15] <virtualsex> what that mean
[03:22:19] <virtualsex> how to fix it
[03:22:26] <joannac> make them consistent
[03:22:30] <virtualsex> how
[03:22:31] <joannac> looks like one is empty
[03:22:54] <joannac> backup and restore from the non-empty one to the empty one?
[03:23:29] <virtualsex> it's all new installation
[03:23:46] <joannac> clearly it's not if one of them has data, and the others don't
[03:28:22] <cheeser> a new primary would always have data because of the admin dbs and oplogs and such
[03:28:46] <cheeser> oh, but this is the config server. might still apply under CSRS.
[03:30:43] <virtualsex> success
[03:31:02] <virtualsex> now time to do some sql commands
[03:31:53] <cheeser> sql?
[03:32:02] <cheeser> good luck with that one. :D
[05:12:24] <virtualsex> { "ok" : 0, "errmsg" : "no such cmd: replSetInitiate", "code" : 59 }
[06:04:06] <wwwi> hello
[06:04:44] <wwwi> if you already have a db, how difficult is to add new fields and data and don't break the app?
[06:29:50] <KekSi> wwwi: that depends on your app and your app's deserializer
[06:30:38] <wwwi> KekSi: the app deserializer? what is this?
[06:31:47] <KekSi> the bit that makes an object you can use in your language out of the json returned from db
[06:32:14] <KekSi> if you're doing that manually and discard fields you don't know about you're good to go
[06:37:07] <wwwi> KekSi: ok
[06:39:45] <KekSi> if you're not sure just try it
[06:41:04] <KekSi> like db.someCollection.findOne()['myNewField']=10 and see if you can still do something with it
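[The tolerant-deserializer idea KekSi describes can be sketched as follows; the field list and document shape are invented for illustration:]

```javascript
// Keep only the fields the app knows about, so documents that gain
// new fields in the db can't break existing code.
const KNOWN_FIELDS = ["name", "email"];

function deserializeUser(doc) {
  const user = {};
  for (const field of KNOWN_FIELDS) {
    if (field in doc) user[field] = doc[field];
  }
  return user; // unknown fields (e.g. a new myNewField) are silently dropped
}
```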
[07:39:19] <headphase> i have a flask application, using flask_mongoengine for the db. my db is hosted at mLab. why is it that when i try to query the db, i get the error: "pymongo.errors.OperationFailure: database error: not authorized for query on heroku_dptwtq1j.web_config", even though i am logged in as the admin user?
[07:39:24] <headphase> on another application, im also able to query the db just fine, except that application is not using flask_mongoengine, it's using motor.motor_asyncio. for some reason flask_mongoengine hates me? or am i missing something?
[07:39:36] <headphase> (this is python)
[07:48:00] <wwwi> KekSi: thanks
[07:48:44] <wwwi> i have 2 questions. First, are nosql dbs faster than sql dbs? And second, isn't difficult to work without schema?
[07:49:26] <wwwi> Actually, i have a third question too. can you easily add additional db servers if you want?
[08:50:00] <ams__> naf: Yes that does look like the code, but I'm not sure how to fix my problem from that
[09:10:54] <KekSi> wwwi: that all depends - in some use cases nosql servers are faster, working without a schema is obviously very easy (because the db just eats whatever you throw at it) and depending on your db of choice it can range from very easy to very hard to add more (either replicas or cluster members)
[09:11:38] <KekSi> depends on what you want to scale: read speed -> add replicas (or a cache), write speed -> cluster members (in case of mongodb by sharding)
[09:11:45] <wwwi> KekSi: i see.
[09:12:43] <KekSi> the structure is what makes nosql fast
[11:19:28] <toonsev> Hey guys is there a feature in bson to use the dot notation to drill down documents like you do in the mongodb shell?
[11:22:05] <toonsev> For example: Object value = bson.get("a.b") would be a short notation for Object value = bson.get("a", Document.class) != null ? bson.get("a", Document.class).get("b", Document.class) : null;
[11:22:33] <toonsev> I wrote a wrapper class for this but would like to know if there is already something that exists for this :)
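[A plain-JavaScript sketch of the kind of wrapper toonsev describes: walk a nested document by a dotted path, returning null as soon as any step is missing. The function name is made up:]

```javascript
// getPath(doc, "a.b.c") drills down one key per dot, mimicking the
// shell's dot notation, and returns null if any intermediate value
// is absent or not an object.
function getPath(doc, path) {
  let current = doc;
  for (const key of path.split(".")) {
    if (current === null || typeof current !== "object") return null;
    current = key in current ? current[key] : null;
  }
  return current;
}
```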
[12:07:21] <ams__> So any pointers for forcing a sync? My mongo 3.2 instance just won't sync from my mongo 3.0 instance
[12:08:26] <ams__> I can consistently get this error: https://www.irccloud.com/pastebin/vqls11jL/
[12:08:35] <ams__> And it just sits there
[12:30:20] <lfamorim> Someone here experienced a bug when findAndModify begins to return null even when there are results?
[12:42:26] <ams__> And if I can't sync, what would be my next best option?
[14:50:58] <paulo_> Hey
[14:51:30] <Guest4187> what is the difference between mongodump and mongoexport
[14:55:03] <Derick> mongodump dumps into .bson files (binary format) and can be used with mongorestore
[14:55:21] <Derick> mongoexport dumps into .csv or .json format, and can be used with mongoimport
[14:57:27] <StephenLynx> in general its advised to use mongodump, if you can.
[14:57:44] <StephenLynx> since export has to do some wizardry to export some formats.
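[The dump/export pairing above in command form; database, collection, and path names are placeholders:]

```shell
# Binary backup of a whole database, restored with mongorestore:
mongodump --db mydb --out /backups
mongorestore /backups

# Text export of one collection, reimported with mongoimport:
mongoexport --db mydb --collection users --out users.json
mongoimport --db mydb --collection users --file users.json
```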
[15:04:04] <xrated> Hi everyone, we're currently load testing our new application. We've optimized the individual collections and implemented the recommended performance tweaks from documentation. We're still hitting read limits however without using all resources available on the box. My googling points us towards switching to a sharded set.
[15:04:17] <xrated> Is sharding the best approach to increase read capacity?
[15:04:46] <xrated> (I know there's a whole lot of missing details and vagueness in there, was looking for other people's experience in similar situations)
[15:05:29] <Derick> are you using one data node, or more right now? For example, through a replicaset?
[15:06:01] <xrated> It's a replicate set with 1 primary and 2 secondaries
[15:06:05] <xrated> replica*
[15:06:27] <Derick> did you play with reading from secondaries? (i.e., is lag OK?)
[15:07:18] <xrated> We were looking at that and the emphasis on eventual consistency made us do a double-take. MySQL replication is asynchronous and master/slave is still recommended, but Mongo's documentation seems very against using secondaries for read.
[15:07:24] <xrated> lag in the matter of a few ms is OK
[15:08:08] <Derick> yeah, loading your secondaries with reads, can actually make the lag more pronounced. Secondaries are mostly to provide redundancy and failover.
[15:08:17] <Derick> I believe we recommend sharding for read/write scaling.
[15:08:29] <Derick> But, you need to be really careful how you pick a sharding key.
[15:09:00] <Derick> You don't want to end up having to hit *every* shard on range queries, or something like that.
[15:09:23] <xrated> is there a standard sharding key to use when using object ids?
[15:09:29] <xrated> Or is it dependent on your data set
[15:09:39] <Derick> How do you need to look up documents?
[15:09:45] <Derick> ranges, one by one, something else?
[15:10:01] <xrated> Normally one-by-one or with an in-clause
[15:10:13] <Derick> by which key, _id?
[15:10:15] <xrated> Almost always on the document's ID for either scenario
[15:10:17] <xrated> Yes
[15:10:30] <Derick> how do your in-clauses look like? (as that could be a fan-out to all shards)
[15:10:54] <Derick> if it is really *just* _id, you perhaps want a hashed shardkey on _id
[15:11:09] <Derick> depending on your writes too...
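[The hashed-shard-key suggestion in mongo-shell form; database and collection names are placeholders. A hashed key on _id spreads one-by-one lookups and inserts evenly across shards, at the cost of turning range queries into scatter-gather:]

```javascript
sh.enableSharding("mydb")
// Shard on a hash of _id rather than _id itself:
sh.shardCollection("mydb.mycoll", { _id: "hashed" })
```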
[15:11:50] <xrated> Is there a document you would recommend reading for picking sharding keys, Derick ?
[15:12:00] <Derick> one sec
[15:12:53] <Derick> https://www.mongodb.com/blog/post/on-selecting-a-shard-key-for-mongodb
[15:13:27] <xrated> Great, we'll read this and go from there. Thanks!
[15:13:28] <Derick> https://docs.mongodb.com/v3.0/tutorial/choose-a-shard-key/
[15:13:31] <Derick> (not done yet)
[15:13:52] <Derick> The answer at http://stackoverflow.com/questions/12961873/whats-a-good-mongodb-shard-key-for-this-schema
[15:14:38] <Derick> that also has a few more links
[15:14:45] <xrated> Awesome
[15:14:46] <Derick> please note, they're not 100% up to date
[15:14:55] <Derick> but should still be good enough
[15:15:14] <xrated> We'll start from the blog posts and do our research, thanks for pointing us in the right direction.
[15:16:24] <Derick> you're welcome
[16:36:50] <saml> when using $group and $push, is there a way to limit length of pushed array?
[16:37:01] <saml> i want top 10 elements for each group
[16:38:35] <cheeser> https://docs.mongodb.com/manual/reference/operator/update/push/#modifiers
[16:38:37] <cheeser> $slice
[16:40:06] <saml> db.articles.aggregate([{$group:{_id: '$articleType', examples: {$push: '$url'}}}]) I don't want examples to go big
[16:40:50] <saml> i'm not $pushing an array field
[16:41:57] <cheeser> que?
[16:42:33] <saml> there are 20 different article types. there are 3 million articles. I want ~10 example articles per article type
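[The $slice page cheeser linked covers the *update* modifier, which, as saml notes, doesn't apply inside $group. One way to cap the array (MongoDB 3.2+) is the $slice aggregation expression in a $project stage after the $group; note the full arrays are still built before being trimmed:]

```javascript
db.articles.aggregate([
  { $group:   { _id: "$articleType", examples: { $push: "$url" } } },
  { $project: { examples: { $slice: ["$examples", 10] } } }  // keep first 10
])
```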
[19:45:44] <poz2k4444> Hey guys, I'm trying to use mongo-connector to sync some collections to elasticsearch, but after some trouble
[19:46:17] <poz2k4444> I found out that mongo-connector can't handle array indexing, is there any way I can accomplish this?
[19:50:28] <n1colas> Hello
[22:32:52] <ModFather> MongoException: '\0' not allowed in key: \0 , any clue?