
#mongodb logs for Wednesday the 28th of September, 2016

[14:51:47] <offlim> when i try to start the server over an ssl connection i see this message “waiting for connections on port 27017 ssl”
[14:52:53] <offlim> so i open a new terminal window and type in “./mongo” but now i'm getting this error “network error while attempting to run command 'isMaster' on host '127.0.0.1:27017'”
[14:56:20] <offlim> can anyone help me?
[15:45:11] <mortal1> howdy gentlemen, does anyone know if there's a way of finding if a field exists in mongo using a regex to somewhat match it?
[15:45:59] <mortal1> i.e. db.collection.find({/abc/: { $exists: true }});
[19:05:19] <netcho> hi all
[19:05:53] <netcho> how can i import db from mongolab to local mongodb?
[19:05:57] <netcho> i tried
[19:06:08] <netcho> db.copyDatabase('from_mydb', 'to_mydb', 'ds040032.mongolab.com:40032', '<myname>', '<mypassword>')
[19:06:18] <netcho> but i get auth failed
[19:06:52] <netcho> i use those credentials to connect from shell
[19:07:00] <shayden> are they bson or json? have you tried 'mongoimport' or 'mongorestore'?
[19:08:22] <netcho> no i havent
[19:09:36] <netcho> mongodump says auth failed
[19:09:44] <shayden> i guess you're trying to copy right from one db to the other, over the network? that could be difficult because of all the networking firewalls, gateways and routing between local and remote, any of which could block you. it's probably easiest to see if you can download a mongodump or mongoexport, copy it locally, then use mongoimport or mongorestore
[19:10:23] <netcho> i opened everything on my local machine
[19:10:43] <shayden> i've never used mongolab, so i'm not sure how to get a mongodump or mongoexport from it
[19:13:02] <netcho> makes no sense... when i use mongo ds0xxxxx-a0.mlab.com:xxxxx/my-db -u user -p password i get access to PRIMARY
[19:13:16] <netcho> but mongodump -h ds0xxxxx-a0.mlab.com:xxxxx -d my-db -u user -p password -o /home/ubuntu says auth failed
[19:17:32] <shayden> hmmm
[19:18:23] <shayden> --authenticationDatabase <dbname>
[19:18:23] <shayden> If you do not specify an authentication database, mongodump assumes that the database specified to export holds the user’s credentials.
[19:19:11] <shayden> maybe your auth database is different than 'my-db' ?
[19:19:47] <shayden> i vaguely remember something like this tripping me up before
[19:20:18] <shayden> maybe try this:
[19:20:34] <shayden> mongodump -h ds0xxxxx-a0.mlab.com:xxxxx -d my-db -u user -p password --authenticationDatabase admin -o /home/ubuntu
[19:20:40] <shayden> i think admin is the default?
[19:20:49] <netcho> should be yeah
[19:20:52] <netcho> let me try
[19:22:07] <netcho> moving forward :)
[19:22:13] <shayden> :D
[19:22:18] <netcho> connected to: xxx
[19:22:20] <netcho> assertion: 18 { ok: 0.0, errmsg: "auth failed", code: 18 }
[19:22:29] <shayden> D:
[19:23:11] <netcho> actually its the same as without --authdb
[19:23:18] <GitGud> hey there, i have a question. so in my app, if someone makes a post or a comment somewhere, it goes to that post's db entry and writes to an array of objects there. its all well and good, but what would happen theoretically if 2 people were to put a comment down on the same post at the same time?
[19:23:43] <GitGud> if it gets write locked isn't that bad? is there a way to tell it to wait some time for the write lock to come off? or will the app just get errors?
[19:23:58] <netcho> copyDatabase gave me "errmsg" : "unable to login { ok: 0.0, errmsg: \"auth failed\", code: 18 }"
[19:23:58] <shayden> hmm
[19:24:21] <StephenLynx> GitGud,
[19:24:26] <GitGud> StephenLynx,
[19:24:32] <StephenLynx> that depends on how you are doing the write.
[19:24:41] <StephenLynx> are you doing a $push or a $set?
[19:24:43] <shayden> netcho: maybe try mongo ds0xxxxx-a0.mlab.com:xxxxx/my-db -u user -p password
[19:24:58] <shayden> and once you're at the prompt> show dbs
[19:25:15] <GitGud> StephenLynx, $push
[19:25:16] <shayden> otherwise, i'm not sure, check the mlab docs or help
[19:25:19] <StephenLynx> then it will be fine.
[19:25:25] <GitGud> because $push deals with pushing to an array
[19:25:32] <netcho> "errmsg" : "not authorized on admin to execute command { listDatabases: 1.0 }",
[19:25:33] <GitGud> whereas $set is a replacement
[19:25:33] <netcho> wooooot
[19:25:34] <GitGud> right ?
[19:25:40] <StephenLynx> it will queue the second operation.
[19:25:45] <StephenLynx> and push the element.
[19:26:02] <netcho> thats my only user and its the owner.. do i need to grant him permissions?
[19:26:11] <StephenLynx> locks don't give your application code errors, they queue the operations.
[19:26:15] <GitGud> StephenLynx, is there a reverse version of the $push ? as in you delete an element but still not get affected by writer's lock ?
[19:26:24] <GitGud> hehe, get it? writer's lock
[19:26:24] <GitGud> ok
[19:27:15] <GitGud> $pull ??
[19:27:22] <StephenLynx> don't remember now.
[19:27:32] <GitGud> wow they thought of everything
[19:28:03] <GitGud> StephenLynx, well there is a $pull, that removes an element, but i'm not sure if that operation is still "write lock safe"
[19:28:14] <GitGud> as much as $push is
[19:28:15] <StephenLynx> they all are.
[19:28:30] <StephenLynx> all update operations lock.
[19:29:01] <GitGud> cool! thank you StephenLynx
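
A minimal shell sketch of what StephenLynx is describing, using a hypothetical posts collection: concurrent $push updates are queued by the lock rather than failing, and $pull behaves the same way.

    // hypothetical 'posts' collection with a comments array
    db.posts.insert({ _id: 1, comments: [] })

    // two clients pushing at (nearly) the same time: the write lock queues
    // the second update instead of raising an error, so both comments land
    db.posts.update({ _id: 1 }, { $push: { comments: { user: "a", text: "first" } } })
    db.posts.update({ _id: 1 }, { $push: { comments: { user: "b", text: "second" } } })

    // $pull is the inverse and takes the same lock
    db.posts.update({ _id: 1 }, { $pull: { comments: { user: "b" } } })
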
[19:32:10] <GitGud> StephenLynx, https://docs.mongodb.com/manual/reference/operator/update/set/
[19:32:22] <GitGud> apparently $set is an update operation too
[19:32:33] <StephenLynx> no
[19:32:38] <StephenLynx> its an operator.
[19:32:43] <StephenLynx> update is the operation.
[19:32:48] <shayden> netcho: hmm, it looks like mlab lets you pull a mongodump through the web portal... that might be easier than guessing at remote authentication
[19:33:01] <shayden> http://docs.mlab.com/backups/
[19:33:08] <GitGud> StephenLynx, so some update operations are write lock safe whereas others are not?
[19:33:39] <StephenLynx> no
[19:33:42] <StephenLynx> all are safe.
[19:33:45] <StephenLynx> and all lock.
[19:33:50] <StephenLynx> update, findAndModify
[19:34:32] <GitGud> StephenLynx, so if i do 2 $push at the same time, it will queue up and do them one after the other
[19:34:35] <StephenLynx> yes
[19:34:47] <StephenLynx> and it doesn't matter if its a $push, $set or $unset
[19:34:48] <GitGud> but if i do 2 $set at the same time, it wont queue up? and throw errors?
[19:34:53] <StephenLynx> again
[19:34:57] <StephenLynx> the operator doesn't matter
[19:35:19] <GitGud> wait then why did you ask if i am doing a $push or a $set ?
[19:35:28] <GitGud> if they both queue up anyway
[19:35:32] <StephenLynx> because
[19:35:36] <StephenLynx> if they were two sets
[19:35:43] <StephenLynx> the later would overwrite the first one
[19:36:01] <GitGud> whereas in the $push, both would come in
[19:36:07] <GitGud> one after the other
[19:36:10] <GitGud> cause its an array
[19:36:18] <StephenLynx> exactly.
[19:36:22] <GitGud> i get it. ok
[19:36:26] <GitGud> thank you stephen
[19:36:28] <StephenLynx> np
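
A sketch of the distinction just worked out (collection and fields are made up): two $sets on the same field both succeed, but the later one overwrites the value, while two $pushes each append.

    // two "simultaneous" $set writes: both succeed, last one wins
    db.posts.update({ _id: 1 }, { $set: { title: "from client A" } })
    db.posts.update({ _id: 1 }, { $set: { title: "from client B" } })
    db.posts.findOne({ _id: 1 }).title   // "from client B"

    // two "simultaneous" $push writes: both elements survive
    db.posts.update({ _id: 1 }, { $push: { tags: "a" } })
    db.posts.update({ _id: 1 }, { $push: { tags: "b" } })
    db.posts.findOne({ _id: 1 }).tags    // [ "a", "b" ]
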
[19:36:50] <netcho> shayden: yeah only to s3 :) but i will give it a shot
[19:40:08] <AlmightyOatmeal> is there any way to do batch find requests? doing a simple find() over an entire collection gives one result at a time and is becoming painfully slow :(
[19:40:25] <StephenLynx> wait wat
[19:41:25] <AlmightyOatmeal> i'm pulling data from mongodb and bulk-inserting it into elasticsearch but the collections are anywhere from 15M to 30M documents
[19:41:48] <StephenLynx> ok
[19:42:07] <StephenLynx> and
[19:42:44] <GitGud> damn boy work that db
[19:45:01] <StephenLynx> btw gitgud, you got a repo for your project?
[19:45:07] <AlmightyOatmeal> its taking ~30s for each 10k doc chunk :(
[19:45:19] <GitGud> StephenLynx, honestly i would. but i cant because of NDA and crap
[19:45:44] <StephenLynx> kek
[19:45:57] <GitGud> its basically user posts, and when a user posts there it pushes to an array in the db, thats it
[19:45:57] <StephenLynx> AlmightyOatmeal, show me your query?
[19:46:38] <offlim> When i use “mongod --enableEncryption --encryptionKeyFile data/encryrest/mongodb-keyfile” I keep seeing this error “Unable to retrieve key .system, error: There are existing data files, but no valid keystore could be located.”… the current db doesnt have any data though
[19:47:59] <AlmightyOatmeal> StephenLynx: there is no query, it's literally find({}) -- i'm pushing everything into elasticsearch
[19:48:30] <StephenLynx> is that a one-time operation?
[19:48:34] <AvianFlu> AlmightyOatmeal, I actually found a reference to a parallel collection scan api the other day, hang on and let me try to find it
[19:48:47] <AlmightyOatmeal> StephenLynx: for the time being, yes. this is the initial move.
[19:48:48] <AvianFlu> no guarantees and I've never used it, but I thought of you when I saw it
[19:49:02] <AlmightyOatmeal> AvianFlu: that sounds wonderful :)
[19:49:04] <AvianFlu> https://docs.mongodb.com/manual/reference/command/parallelCollectionScan/
[19:49:10] <AvianFlu> yeah I mean check it out before you get excited
[19:49:12] <StephenLynx> I would move in batches and keep track of the latest moved document.
[19:49:17] <AvianFlu> but it sounded like it was for the problem you have
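
For reference, the command behind that link takes a collection name and a maximum cursor count; a minimal invocation (collection name illustrative) looks like this, though the server may hand back fewer cursors than requested:

    // request up to 4 independent cursors over the collection, each of
    // which a separate worker can drain in parallel
    db.runCommand({ parallelCollectionScan: "mycollection", numCursors: 4 })
    // result shape: { cursors: [ { cursor: { ... }, ok: true }, ... ], ok: 1 }
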
[19:49:33] <offlim> do I need to create a new encrypted database?
[19:49:36] <StephenLynx> I assume that many docs would take way too much RAM
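
A sketch of the batch approach StephenLynx suggests, resuming each query from the last _id seen so no batch has to skip over earlier documents (names are illustrative):

    var lastId = null;
    var batchSize = 10000;
    while (true) {
        var query = (lastId === null) ? {} : { _id: { $gt: lastId } };
        // _id is always indexed, so this range scan stays cheap
        var batch = db.mycollection.find(query).sort({ _id: 1 }).limit(batchSize).toArray();
        if (batch.length === 0) break;
        lastId = batch[batch.length - 1]._id;
        // hand `batch` to the elasticsearch bulk indexer here
    }
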
[19:50:48] <AlmightyOatmeal> AvianFlu: that looks like something i would have liked some time ago :) i'll play around with that and see if that will give me a boost for my large queries and scans :)
[19:51:44] <AvianFlu> yeah I'd never heard of it before, I just stumbled upon it while looking for something mostly unrelated
[20:01:19] <offlim> anyone?
[20:28:23] <offlim> nobody here is familiar with encryption at rest?
[20:28:36] <StephenLynx> at rest?
[20:29:00] <cheeser> enterprise only feature
[20:29:20] <offlim> yes. https://docs.mongodb.com/manual/core/security-encryption-at-rest/
[20:30:49] <StephenLynx> dank
[20:34:04] <offlim> dank?
[20:34:10] <StephenLynx> :v
[21:34:35] <manfrin> anyone here well versed in unique indexes for mongo?
[21:34:55] <manfrin> I've encountered a bug where I think two different write ops writing the same _id are both being returned as successful
[21:35:09] <manfrin> when I expect one to error out for using a duplicate key
[21:35:41] <manfrin> it has worked 99.999%, but I just encountered a few that got through
[21:36:08] <manfrin> my code writes the _id and then checks the InsertOneWriteOpResult to see if InsertedCount > 0
[21:36:34] <manfrin> is it possible (like if 100 processes all tried to write the same thing at once) for mongo to report success for more than one?
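
InsertOneWriteOpResult is the Node.js driver's result type; with a duplicate _id the driver normally rejects with an E11000 duplicate key error (code 11000) rather than reporting an insertedCount of 0, so catching that code is the more direct check. A hedged sketch, assuming an already-connected db handle and made-up names:

    db.collection('events').insertOne({ _id: 'some-id', payload: 1 })
        .then(function (result) {
            // result.insertedCount === 1 on a successful insert
            console.log('inserted', result.insertedCount);
        })
        .catch(function (err) {
            if (err.code === 11000) {
                // duplicate key: another writer claimed this _id first
                console.log('already exists');
            } else {
                throw err;
            }
        });
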
[23:22:45] <GitGud> hi
[23:22:59] <GitGud> is there a way to do a query on 3 things at once?
[23:23:03] <GitGud> same field
[23:23:06] <GitGud> different value
[23:23:29] <GitGud> like say i want to query username GitGud, username OwnCoder, username Someoneelse and get back a result set of 3
[23:23:34] <GitGud> in same collection
[23:23:35] <StephenLynx> $or, maybe?
[23:23:56] <StephenLynx> or an aggregate to group the results.
[23:23:58] <GitGud> except i want to get that in an array
[23:24:06] <GitGud> not 1 out of 3
[23:24:08] <GitGud> but 3 out of 3
[23:24:11] <StephenLynx> again
[23:24:13] <StephenLynx> $or
[23:24:22] <StephenLynx> even better
[23:24:26] <StephenLynx> $in
[23:24:32] <StephenLynx> you give an array of valid values for the field
[23:24:40] <StephenLynx> and it will match any of them
[23:24:49] <GitGud> $in sounds more appropriate
[23:34:25] <GitGud> StephenLynx, $in worked! thank you <3
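
For reference, the $in form GitGud landed on, with an assumed users collection:

    // matches any document whose username is one of the listed values;
    // one query, up to three documents back
    db.users.find({ username: { $in: ["GitGud", "OwnCoder", "Someoneelse"] } })

    // the $or spelling of the same query, for comparison
    db.users.find({ $or: [
        { username: "GitGud" },
        { username: "OwnCoder" },
        { username: "Someoneelse" }
    ] })
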
[23:35:21] <StephenLynx> kek
[23:49:25] <Rc43> Hi