PMXBOT Log file Viewer


#mongodb logs for Thursday the 3rd of September, 2015

[00:14:45] <zejackal> is it possible to create a user that can create/delete databases, except for a specific one?
[00:15:05] <zejackal> and that specific one, the user can only readWrite
[00:16:42] <StephenLynx> I think so, I think you mostly set permissions per database.
[00:17:56] <MacWinne_> what's the simplest way of doing a snapshot of a specific database and all its collections? I want to run a script every 5 minutes to do this.. but I don't want to actually save a snapshot if nothing has changed
[00:18:09] <zejackal> i guess the trick is that the user should have the ability to create/drop databases on the cluster, so the names aren't known at user creation time
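For context on StephenLynx's answer: a sketch of how per-database grants look in the shell (user and database names here are hypothetical). Note that grants are additive, so as far as I know there is no built-in way to say "every database except X"; the `...AnyDatabase` roles apply everywhere or not at all.

```javascript
// Run from the admin database. Each entry in "roles" is scoped to one
// database, so broad and narrow grants can be mixed -- but grants only
// ever add privileges, never subtract them.
use admin
db.createUser({
  user: "deployer",                        // hypothetical user
  pwd: "changeme",
  roles: [
    { role: "readWrite", db: "reports" },  // full readWrite on one db
    { role: "read",      db: "config" }    // read-only on another
  ]
})
```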
[00:18:14] <MacWinne_> it's basically a very infrequently updated configuration database
[00:18:32] <MacWinne_> is there some sort of internal counter on the database that indicates the last timestamp it was updated?
[00:23:16] <diegoaguilar> MacWinne_, I'm not sure of that
[00:23:31] <diegoaguilar> but there's a variable u can use to append date to a doc
[00:23:40] <diegoaguilar> Also, u mean per database or per collection?
[01:02:50] <Rou> MongoDB makes sql obsolete.
[01:13:10] <_syn> mongodb is like the public trash can on a street that everyone throws there trash into
[01:13:17] <_syn> sql is your own personal garbage bin at home
[01:13:27] <_syn> their*
[02:00:42] <acidjazz> hmm this could be fun
[02:16:36] <Jonno_FTW> hi, is it possible to store indexes on a separate disk with wiredtiger?
[02:17:20] <Jonno_FTW> ie. put a symlink to another drive
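On Jonno_FTW's question: WiredTiger does have an option to keep index files in their own subdirectory under the dbPath, and that subdirectory can then be a symlink or mount point onto another drive. A hedged mongod.conf sketch (paths are placeholders):

```yaml
# mongod.conf sketch: with directoryForIndexes, WiredTiger writes index
# files under <dbPath>/index, which can be symlinked to another disk.
storage:
  dbPath: /var/lib/mongodb
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      directoryForIndexes: true
```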
[02:59:22] <dekil> anyone knows how to tune up mongo cluster? i got a higher latency using some scenarios test
[05:32:10] <sahilsk> greetings
[05:34:07] <sahilsk> i created user 'mqtt' in admin db. I'm trying to authenticate against 'mqtt' database.
[05:34:13] <sahilsk> However, it's not working. https://gist.github.com/sahilsk/fa5ecea0451975d93b1a
[05:35:11] <joannac> you created a user in database "A" and are trying to auth and telling it to use database "B"
[05:35:30] <joannac> A and B are not the same
[05:46:58] <sahilsk> joannac: but while creating user i mentioned the db "mqtt"
[05:47:20] <sahilsk> ./mongo --port 27017 -u mqtt -p mqtt --authenticationDatabase mqtt
[05:47:41] <sahilsk> ^ i'm trying to auth against database 'mqtt' i.e B , same database for which i created user 'mqtt'
[05:47:43] <joannac> sahilsk: in the output of db.getUsers(), please point to the line that mentions mqtt
[05:48:19] <joannac> sahilsk: in fact, please point to the user "mqtt". There is no such user in your output.
[05:49:03] <joannac> Lines 1-4 in your pastebin show no users in the database mqtt
[05:51:20] <sahilsk> joannac: i created user inside 'admin' database. I understand. In my case database doesn't exist.
[05:56:36] <sahilsk> database 'b' exist later and i want the user to be ready before it. Is it not possible? joannac
[05:57:39] <joannac> you can create the user beforehand
[05:57:45] <joannac> but that will also create the database
[05:58:31] <joannac> your problem is you never actually created the user
[05:58:31] <sahilsk> joannac: so, procedure for this is this => 'use db_that_does_not_exist ; db.createUser(<B>) ' ?? like this
[05:58:36] <joannac> yes
[05:58:54] <joannac> and then when you auth, the --authenticationDB is db_that_does_not_exist
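Spelled out, the procedure joannac describes looks roughly like this in the shell: `use` switches to a database that does not exist yet, and `createUser` both records the user there and makes that the database to authenticate against.

```javascript
// Switch to the not-yet-existing database and create the user in it.
use mqtt
db.createUser({
  user: "mqtt",
  pwd: "mqtt",                                  // password from the CLI above
  roles: [ { role: "readWrite", db: "mqtt" } ]
})
```

After which the original command line, `./mongo --port 27017 -u mqtt -p mqtt --authenticationDatabase mqtt`, should authenticate successfully.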
[06:07:14] <sahilsk> thanks :)
[06:37:48] <Painter> Hello, i have a problem with mongodb database after HDD problems - http://pastebin.com/kMPbv8BK . I think that the recovery has not happened. How can I remove the assertions like "Assertion: 10334:BSONObj size: -286331154 (0xEEEEEEEE) is invalid. Size must be between 0 and 16793600(16MB) First element: _id: ObjectId('55e7a0a0350f020879f94bc1')" ?
[06:46:30] <joannac> Painter: That's evidence of corruption
[06:46:39] <joannac> It looks like the repair finished?
[06:50:42] <Painter> after that repair i get same assertions when i try to use this db... I try to remove entries with id in assertions...
[06:53:45] <joannac> what assertions?
[06:54:33] <Painter> "Assertion: 10334:BSONObj size: -286331154 (0xEEEEEEEE) is invalid. Size must be between 0 and 16793600(16MB) First element: _id: ObjectId('55e7a0a0350f020879f94bc1')"
[06:56:20] <joannac> Does that make the mongod crash? or do you get the exception but the server stays up?
[07:35:56] <Painter> joannac: i repair my database by deleting some collections and then do --repair... Very fortunate that these tables hold not critical data like logs and stats.
[09:05:38] <jesusprubio> hey, could it have sense that a find query (with ".toArray()") returns 3 documents (ie) but the exlanation of this query says that nReturned is 1 ?
[09:05:52] <jesusprubio> explanation sorry :)
[09:53:03] <mbuf> I have a datadog user created for the admin database that has "roles" : [ { "role" : "read", "db" : "admin" }, { "role" : "clusterMonitor", "db" : "admin" } ] }. How can I update the same for the same role for a different database?
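For mbuf's question, a hedged sketch: an existing user can be granted additional roles scoped to another database with `db.grantRolesToUser`, run from the database the user was created in (here admin; the target database name is a placeholder).

```javascript
// Add a role on another database to the existing "datadog" user.
use admin
db.grantRolesToUser("datadog", [
  { role: "read", db: "someOtherDb" }   // "someOtherDb" is hypothetical
])
```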
[09:59:04] <xorol> hi guys.. i got a mongodb around 40Gb it takes AGES to make a backup for this (around 2 days) using mongodbdump ;is there any alternative to get a fast way to backup this data?
[10:22:57] <cheeser> xorol: https://www.mongodb.com/cloud/backup
[10:24:04] <xorol> cheeser, more looking into a DIY solution
[10:24:23] <xorol> it is for backuping (in case of restore needs), and for refreshing test environments
[10:26:27] <xorol> but I do understand that mongodbdump is not the way to go in these cases I assume ..
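One common DIY alternative to mongodump for databases this size is a filesystem-level snapshot (LVM, EBS, ZFS) of the data volume. A rough, hedged outline of the flow; whether the lock step is strictly required depends on the storage engine and whether journal and data share a volume:

```javascript
// Flush dirty data and block writes, snapshot the dbPath volume at the
// filesystem level, then unlock. Typically far faster than mongodump
// for tens of GB, at the cost of backing up raw files instead of BSON.
db.fsyncLock()     // flush to disk and block further writes
// ... take the LVM/EBS/ZFS snapshot of the data volume here ...
db.fsyncUnlock()   // resume normal writes
```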
[10:40:23] <m4k> Hello all getting Client Error: bad object in message: invalid bson type in object with _id: while restoring mongodump, what could be the issue?
[11:42:11] <revoohc> How do I force a full table scan on an indexed column? Backstory, we’ve upgraded from 2.2->2.6 recently and have data in an indexed column that can’t be indexed due to violating the length constraints. I’m trying to find and delete that data.
[11:56:40] <BadCodSmell> When using mongodump and mongorestore I get error: E11000 duplicate key error index
[11:57:02] <BadCodSmell> How is that possible?
[12:26:46] <m4k> getting Client Error: bad object in message: invalid bson type in object with _id: while restoring mongodump, what could be the issue?
[12:35:51] <sachinaddy> Hi. Could you please tell me how to do data partition in mongodb?
[12:36:13] <StephenLynx> sharding?
[12:37:43] <sachinaddy> StephenLynx: I'd like to keep data in a single server. I checked, is Sharding used for distribute data on different servers? Can we keep partitioned data in the same server?
[12:37:57] <StephenLynx> HM
[12:37:59] <StephenLynx> hm*
[12:38:07] <StephenLynx> yeah, I dunno.
[12:38:35] <sachinaddy> How to partitioned a big collection?
[12:38:51] <StephenLynx> sharding, but again, that would require multiple servers.
[12:39:08] <sachinaddy> So in single server, partition of collection is not possible?
[12:39:25] <sachinaddy> Just like we use to do in MySQL, Oracle etc.
[12:39:39] <sachinaddy> A table with different partitions.
[12:39:53] <sachinaddy> Similarly a collection with different partitions.
[12:40:33] <StephenLynx> again, I don't know.
[12:40:47] <StephenLynx> someone else more experienced might do.
[12:41:12] <StephenLynx> I just know I have never heard anything about using multiple sources for the same database on a single server
[12:41:43] <sachinaddy> Thanks StephenLynx , I'll wait here to see if someone else has a different view-point.
[12:57:59] <saml_> do you use mongo connector? it keeps failing. i use it to push data out from mongo to elasticsearch
[12:58:09] <saml_> what other tools do you use? custom script?
[12:58:34] <StephenLynx> I just use the node.js driver.
[12:59:35] <saml_> to monitor oplog yourself, right?
[13:00:14] <saml_> if your node process goes down, and oplog went too far, you can't resume. must reindex fully in that case?
[13:01:20] <StephenLynx> wat :v
[13:01:40] <StephenLynx> oplog?
[13:03:38] <saml_> StephenLynx, you mean you do db sync stuff in the same process as your http server process?
[13:04:01] <StephenLynx> db sync?
[13:06:00] <saml_> i mean web scale
[13:06:04] <saml_> you need some web scale
[13:06:28] <StephenLynx> are you kidding me?
[13:08:24] <saml_> i thought you were kidding
[13:08:54] <saml_> on any changes to mongodb, need to sync the change to elasticsearch. looking for a proper tool for that
[13:09:43] <saml_> and you mentioned node.js to do so. that sounded like you're writing to both mongodb and elasticsearch at the same time.. on http requests , which i assumed for nodejs being commonly used as http stuff
[13:11:40] <StephenLynx> ah
[13:11:46] <StephenLynx> you talking about elastic search
[13:12:14] <StephenLynx> nvm what I said then.
[13:33:47] <BadCodSmell> During mongorestore with --drop, I get duplicate key errors. WTF?
[13:35:54] <BadCodSmell> I see the problem
[13:35:57] <BadCodSmell> the actual indexes
[14:00:17] <BadCodSmell> mongorestore doesn't work
[14:00:32] <BadCodSmell> I still get duplicate keys with no indexrestore
[14:48:00] <saml> how can I calculate proper oplog size?
[14:48:14] <saml> http://docs.mongodb.org/manual/core/replica-set-oplog/#workloads-that-might-require-a-larger-oplog-size it just says you might need larger oplog for following
[14:48:20] <saml> might require.... wtf
[14:48:53] <saml> is there a way to calculate proper oplog size given mongod log file?
[14:53:10] <BadCodSmell> in mongodb tools why is every binary at least 10MB?
[14:53:16] <BadCodSmell> It's completely insane
[14:54:57] <StephenLynx> io.js binary is 17mb
[14:55:14] <StephenLynx> maybe they all static link some library
[14:55:21] <StephenLynx> so you can use them independently.
[15:05:14] <BadCodSmell> That's stupid
[15:05:25] <BadCodSmell> I should have a word with whoever is packaging this for debian
[15:06:34] <StephenLynx> are you using the default debian packages?
[15:09:45] <jayjo> I have a errno:61 connection refused... connection failed. Is there a very simple solution to this? I don't see any mongo processes running
[15:15:55] <jayjo> uh oh... there was an instance! solved
[15:45:36] <BadCodSmell> StephenLynx I tend not to use the default packages for critical components, debian of course wont increase versions for new features, yet will add loads and loads of random third party patches from whereever they can find them complicating compatibility and packaging.
[15:45:54] <StephenLynx> so you are not using debian's?
[15:46:04] <BadCodSmell> I'm using the mongo one :)
[15:46:07] <StephenLynx> ah
[15:46:16] <BadCodSmell> That switched from 10gen to a new repo recently for 3
[15:46:27] <StephenLynx> well, then it isn't a matter of the person packaging it for debian
[15:46:38] <StephenLynx> thats how 10gen compiles it.
[15:46:51] <BadCodSmell> Depends really/compilation/packaging are a bit intertwined
[15:47:15] <BadCodSmell> http://repo.mongodb.org/apt/debian wheezy/mongodb-org/3.0
[17:56:37] <mongdb> hi
[17:56:58] <mongdb> I am trying to install mongodb in ubuntu
[17:57:22] <mongdb> 15.4
[17:57:45] <mongdb> can someone help me to understand if the OS version is compatible
[17:57:52] <mongdb> with mongodb 3.0
[18:01:50] <mongdb> hi
[18:02:02] <mongdb> can someone please help me
[18:07:16] <StephenLynx> it's not officially supported, afaik.
[18:07:29] <StephenLynx> so it won't be smooth to install from any repository.
[18:07:39] <StephenLynx> you might as well install a vm with centos 7
[18:07:44] <StephenLynx> mongodb
[18:10:37] <Rou> How do I change the mongodb configuration file?
[18:11:30] <Rou> Where is it located???
[18:29:08] <Rou> How do I set up node-express? I enabled HTTP interface
[18:29:16] <Rou> mongo-express*
[18:30:40] <yopp> hum
[18:32:03] <yopp> What happens when there two concurrent map/reduces are running, with output to the same collection?
[18:32:21] <yopp> I mean by design: first map/reduce locks the output collection?
[18:32:21] <StephenLynx> Row express is awful.
[18:32:24] <StephenLynx> Row*
[18:32:42] <yopp> And second will wait for release?
[18:40:39] <Rou> DBClientBase::findN: transport error: localhost:28017{ SSL: { sslSupport: false, sslPEMKeyFile: "" } }{ SSH: { host: "", port: 22, user: "", password: "", publicKey: { publicKey: "", privateKey: "", passphrase: "" }, currentMethod: 0 } } ns: db.$cmd query: { buildInfo: "1" }
[18:45:44] <dorkmafia> is there a way to get json from the mongo shell? I'm trying this: mongo --quiet mydb --eval 'printjson(db["mydb"]["mycollection"].find())'
[18:48:01] <Rou> What is the default username and password for authorization??
[18:48:33] <dorkmafia> it's running on localhost
[18:50:48] <dorkmafia> mongo mydb --eval 'printjson(db["mycollection"].find());'
[18:56:02] <deathanchor> dorkmafia: mongo --quiet mydb --eval 'printjson(db["mydb"]["mycollection"].find())'
[18:56:07] <deathanchor> sorry
[18:56:21] <deathanchor> dorkmafia: mongo --quiet mydb --eval 'printjson(db["mydb"]["mycollection"].find())' database_to_use
[18:56:42] <deathanchor> dorkmafia: mongo --quiet mydb --eval 'printjson(db.mycollection.find())' database_to_use
[18:56:53] <deathanchor> wow, brain farted
[19:13:59] <dorkmafia> when I change it to findOne I get some useful stuff
[19:14:11] <dorkmafia> but with find() it doesn't give me what I expect
[19:47:09] <deathanchor> dorkmafia: find returns a cursor
[19:47:18] <deathanchor> findone returns a json document
[19:47:37] <deathanchor> dorkmafia: mongo --quiet mydb --eval 'printjson(db.mycollection.find().next())'
[19:47:59] <deathanchor> to print all the json you will need to iterate over the cursor
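Iterating the cursor as deathanchor describes can be done with `forEach`, which prints every matching document rather than the cursor object:

```javascript
// Print each document in the result set as JSON; works inside --eval:
//   mongo --quiet mydb --eval 'db.mycollection.find().forEach(printjson)'
db.mycollection.find().forEach(printjson)
```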
[20:00:00] <dorkmafia> deathanchor: http://www.scotthelm.com/2013/09/28/command-line-json-with-jq.html I was trying to accomplish this
[20:00:09] <dorkmafia> but cursor.hasNext() is empty in my case
[20:01:41] <saml> what's second argument to Timestamp(1441310187, 58) ?
[20:01:52] <dorkmafia> and findOne returns {"_id" : ObjectId("53c67dcd30041eb10bdd22b8")} which isn't even valid json
[20:02:15] <saml> http://docs.mongodb.org/master/reference/bson-types/#timestamps no good description
[20:02:40] <saml> the second 32 bits are an incrementing ordinal for operations within a given second. no good reader
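To make the docs' description concrete: a BSON Timestamp is a single 64-bit value whose high 32 bits are a Unix time in seconds and whose low 32 bits are an ordinal that orders operations occurring within the same second. So in `Timestamp(1441310187, 58)` the 58 means "the 58th operation during that second". A small standalone sketch of the packing (this is an illustration of the layout, not MongoDB library code):

```javascript
// Pack/unpack the two 32-bit halves of a BSON-style timestamp
// using BigInt arithmetic.
function encodeTimestamp(seconds, ordinal) {
  return (BigInt(seconds) << 32n) | BigInt(ordinal);
}

function decodeTimestamp(t) {
  return {
    seconds: Number(t >> 32n),         // high 32 bits: Unix time
    ordinal: Number(t & 0xffffffffn),  // low 32 bits: op counter in that second
  };
}

const packed = encodeTimestamp(1441310187, 58);
console.log(decodeTimestamp(packed)); // { seconds: 1441310187, ordinal: 58 }
```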
[20:59:56] <terminal_echo> if you had to
[21:00:09] <terminal_echo> represent mongo's array datatype in SQL....
[21:00:11] <terminal_echo> you would..
[21:01:18] <terminal_echo> do what..
[21:20:25] <stickperson> anyone here using pymongo with django?
[21:26:55] <no-thing_> hello, while doing batchInsert i get "Writes to config servers must have batch size of 1, found 4" , any hints?
[21:56:42] <daidoji> stickperson: our team is
[21:57:01] <daidoji> terminal_echo: jsonb column with an array?
[21:57:21] <daidoji> ANSI SQL actually I don't think prescribes anything for "array" elements in the database
[21:58:09] <daidoji> although you can find things like http://www.postgresql.org/docs/9.1/static/arrays.html
[21:58:16] <daidoji> terminal_echo: so good luck!
[22:00:18] <Rou> Is data integrity as bad as people say it is
[22:00:22] <Rou> Or has that been fixed?
[22:01:33] <daidoji> Rou: people in here are probably going to be biased
[22:01:44] <terminal_echo> its better than a join that crashes your server?
[22:01:45] <daidoji> however, the answer will always remain "what's your use case?"
[22:02:04] <Rou> daidoji: Well clearly I'm looking for confirmation bias
[22:02:05] <daidoji> terminal_echo: a join that crashes your server is a sign of an amateur DBA
[22:02:26] <daidoji> Rou: you might do better with that question in #postgres then or #mysql
[22:03:34] <Rou> Is the data integrity issue true for all document based databases?
[22:04:09] <stickperson> daidoji: does this look like a reasonable way to create a connection for my views? https://dpaste.de/xhrR
[22:05:46] <daidoji> Rou: no, data integrity issues don't result from top-level abstractions (usually) but suffer from implementations and people using those implementations for things outside of the intended use case
[22:06:35] <daidoji> stickperson: I suppose?
[22:07:25] <jmrpfio> Hey there. If mongodb doesn't have transactions, how can I reliably know that my data is being saved?
[22:09:08] <Rou> I'm getting a lot of hate on the internet whenever I mention MongoDB
[22:13:13] <daidoji> jmrpfio: you can't, however, document updates/inserts/deletes are atomic so as long as you're not trying to do 2+ writes that you need to succeed 100% of the time, you'll probably be okay
[22:18:54] <Milos> How come mongodb uses boost...? That's such a massive dependency.
[22:20:01] <morenoh149> Milos: it's popular enough
[22:21:39] <terminal_echo> daidoji: not really joins with big data don't really work
[22:22:24] <daidoji> Rou: a lot of people don't like it. However, #mongodb is probably not where you want to come if you're looking for more of all that
[22:22:41] <daidoji> Milos: what's wrong with Boost?
[22:23:12] <Milos> I'm now sitting and waiting for boost etc to compile, so I can use mongo. :-)
[22:23:56] <daidoji> terminal_echo: I've never had a join take down a server and at my last job we were doing 2billion transactions a day with 0% data loss and 5 9's etc...
[22:24:19] <terminal_echo> yeah transactions a day doesn't have anything to do with it but okay
[22:24:21] <daidoji> generally queries taking down servers are a failure of the DBA or process within the development/QA team
[22:24:46] <terminal_echo> its the size of the two tables, not the number of transactions
[22:24:46] <daidoji> terminal_echo: only emphasizing that we were write-heavy
[22:25:20] <daidoji> terminal_echo: see above
[22:25:31] <terminal_echo> daidoji: see above your above
[22:25:53] <Milos> can I also see above?
[22:25:57] <daidoji> sure
[22:26:00] <Milos> great
[22:26:09] <terminal_echo> you are below the above above the above
[22:26:21] <jmrpfio> daidoji: so as long as i write one document at a time there's nothing to worry about?
[22:26:42] <deathanchor> anyone know what writebacklisten is about and how to verify it is not causing any contention on my collection?
[22:27:17] <daidoji> jmrpfio: well its actually a bit more dependent on context than all that, but for a simple answer probably yes
[22:27:37] <daidoji> jmrpfio: see http://docs.mongodb.org/manual/core/write-operations-atomicity/
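A hedged sketch of the other half of the answer: MongoDB (as of this era) has no multi-document transactions, but how strongly a single write is acknowledged is tunable via write concern. The collection and document below are hypothetical.

```javascript
// w:1 waits for the primary to apply the write before acknowledging;
// j:true additionally waits until it has reached the on-disk journal.
db.orders.insert(
  { _id: 1, status: "new" },            // hypothetical document
  { writeConcern: { w: 1, j: true } }
)
```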
[22:30:42] <jmrpfio> daidoji: thanks! i think i'm still getting used to thinking in documents
[22:30:50] <jmrpfio> it's been difficult
[22:31:44] <daidoji> jmrpfio: good luck
[22:50:06] <Milos> If I inserted a jsonArray into mongodb, so that mogo now only has "one" entry, can I parse this "one" entry to receive something more specific, for example the 0th entry object has an _id, of course, but it also has "default" which points to more json data, and so on. How could I query "default" out of this "one" "0th" entry? i.e. db.my_col.find(0) gives me the whole thing, can I now query for "default" within this?
[22:50:13] <Milos> s/mogo/mongo/
[22:51:22] <Milos> Maybe a better question: can I query a particular subtree of a large JSON object?
[22:52:18] <rendar> Milos: query how?
[22:52:38] <morenoh149> db.my_col.find(0).default.blah
[22:53:00] <Milos> I tried
[22:53:01] <Milos> > db.ain_settings.find(0).default
[22:53:01] <Milos> >
[22:53:06] <Milos> before I asked here
[22:53:09] <Milos> it returns nothing?
[22:53:33] <Milos> here's part of the object https://bpaste.net/show/bc931cdc0f2c
[22:53:45] <acidjazz> > { your: 'mom' }
[22:54:32] <Milos> rendar, does the paste help?
[22:54:39] <morenoh149> db.ain_settings.find({'default.blah':'somevalue'})
[22:54:53] <Milos> what if I don't know the value? I just want to _know_ the value
[22:56:03] <Milos> Maybe if I reverse the question. If I want the OUTPUT to be {"default": { .... } } how do I get it?
[22:57:06] <morenoh149> db.ain_settings.find(0,{'default.blah': 1}) simply returns the value
[22:57:34] <morenoh149> ^projection
[22:57:36] <Milos> ah that seems like what I want
[22:57:47] <morenoh149> https://docs.mongodb.org/manual/reference/method/db.collection.find/#db.collection.find
[22:57:49] <Milos> I'll read into the docs to find out what the second paramter meant
[22:57:59] <Milos> ahh
[23:01:12] <Milos> thanks for that morenoh149! it's exactly what I want.
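Spelling the projection form out for readers following along (field names as in Milos's paste): the first argument to find/findOne is the query filter (`{}` matches everything) and the second picks which fields come back.

```javascript
// Return only the "default" subdocument of the first matching entry;
// _id: 0 suppresses the _id field that is otherwise always included.
db.ain_settings.findOne({}, { default: 1, _id: 0 })
```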
[23:08:58] <pen> I am getting a lot of log when I startup mongod
[23:09:12] <pen> a lot of connection accepted from, and also a lot of insert command
[23:09:33] <pen> if there a way to remove them from displaying? or hide them?
[23:09:44] <pen> it is flooding and freezing my terminal
[23:10:59] <Boomtime> @pen: http://docs.mongodb.org/manual/reference/program/mongod/#cmdoption--logpath
[23:11:10] <Boomtime> also, you should be running the mongod as a service
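A minimal config-file sketch of what Boomtime is pointing at (path is a placeholder): sending the log to a file instead of the terminal.

```yaml
# mongod.conf sketch: route log output to a file rather than stdout.
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
  logAppend: true
```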
[23:11:26] <pen> i'm running in a docker container
[23:11:51] <pen> how do you run as a service in a docker?
[23:12:16] <pen> @Boomtime thanks for the help but have some more questions ^
[23:13:18] <Boomtime> docker, dunno
[23:13:36] <Boomtime> are there docker forums?
[23:14:19] <pen> ok, just asking to see if you know, I think there is #docker channel here
[23:14:22] <pen> thanks
[23:16:52] <Boomtime> sure, someone here might know, the question is valid, but i'm sorry i can't help
[23:22:22] <zejackal> is it possible, when creating a custom role, to allow admin for all dbs (including new, not yet existing) except for one?
[23:22:38] <zejackal> or even based off a naming pattern
[23:27:47] <rusty78> Hi there, if I would like to simultaneously set all fields in a mongodb collection to 0 how would I bulk update all fields?
[23:28:03] <rusty78> For every document*
[23:30:06] <StephenLynx> db.collection.update({},{$set:{fielda:0,fieldb:0}})
[23:31:04] <rusty78> Thank you
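One caveat worth flagging on the snippet above: by default, `update()` in this era of the shell modifies only the first matching document. To zero the fields on every document, the multi flag is needed:

```javascript
// {multi: true} makes the update apply to all documents matching {},
// not just the first one found.
db.collection.update(
  {},
  { $set: { fielda: 0, fieldb: 0 } },
  { multi: true }
)
```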
[23:32:19] <deathanchor> has anyone written a vim syntax highlighter for mongodb logs?