PMXBOT Log file Viewer


#mongodb logs for Friday the 4th of September, 2015

[01:04:23] <svm_invictvs> ananelson: GET /favorites seems to return things when I haven't favorited anything.
[03:03:52] <_syn> hey
[03:03:58] <_syn> how do i force a secondary to become primary
[03:04:02] <_syn> when there is no primary available
[03:04:04] <_syn> for some reason
[03:19:46] <_syn> nvm figured it out
[03:26:55] <Boomtime> "_syn: how do i force a secondary to become primary" <- you don't - you fix the underlying reason that is preventing a primary
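For readers landing here later: when no primary is being elected, the usual cause is that a majority of voting members is down. A minimal mongo-shell sketch for diagnosing the set (member indexes and the step-down timeout are hypothetical; requires a live replica set):

```javascript
// Run against a replica set member to see who is up and in what state:
rs.status()
rs.conf()   // check votes/priority of each member

// On a healthy primary you can ask it to step down for 60 seconds,
// letting an eligible secondary take over:
rs.stepDown(60)

// Last resort only (e.g. a majority of members is permanently lost):
// force a new config containing just the surviving members.
// This can lose acknowledged writes -- hence Boomtime's warning.
// var cfg = rs.conf()
// cfg.members = [cfg.members[0]]
// rs.reconfig(cfg, {force: true})
```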
[05:09:51] <Fetch> does anyone have experience automating installs of Ops Manager?
[05:10:22] <Fetch> From what I can tell, the requirement to create a user via web registration (to create the superuser) pretty much makes it impossible to automate cleanly
[05:10:27] <Fetch> but I'd love to hear differently
[05:26:24] <Boomtime> @Fetch: ops manager is a by-subscription product per install, it sounds almost like you're asking to do something that is illegal
[05:26:59] <Boomtime> or are you asking how to automate deployments VIA ops manager?
[05:53:37] <Fetch> Boomtime: we have a subscription for it, and it's not at all illegal to install for non-production use
[05:55:29] <Fetch> currently it looks like I'm going to just keep a dump of the mms backup db around (it's pretty small) after doing an initial login, and in my automation do a mongorestore
[09:00:47] <arussel> I've restarted a node and got a message: "still syncing, not yet to minValid optime 55e9151c:1". Is it going to be in sync at some point or is it too late?
[09:27:49] <schlitzer> hey, is it possible to assign multiple roles to a user on a single database?
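The question goes unanswered in the log, but yes: roles are passed as an array, so one user can hold several roles on one database. A sketch, with db/user names made up for illustration (requires a live server with auth configured):

```javascript
// Grant two roles on the same database to an existing user:
var mydb = db.getSiblingDB("mydb");
mydb.grantRolesToUser("appUser", ["readWrite", "dbAdmin"]);

// Or assign several roles at creation time:
mydb.createUser({
  user: "appUser2",
  pwd: "secret",
  roles: [
    { role: "readWrite", db: "mydb" },
    { role: "dbAdmin",   db: "mydb" }
  ]
});
```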
[09:33:13] <arussel> is there a way to create multiple index in once command on the same collection ?
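Also unanswered in the log: the `createIndexes` database command (available in the 2.6/3.0 era this log dates from) builds several indexes on one collection in a single command. Collection and field names below are hypothetical:

```javascript
// Build two indexes on one collection with a single command:
db.runCommand({
  createIndexes: "posts",
  indexes: [
    { key: { author: 1 },     name: "author_1" },
    { key: { createdAt: -1 }, name: "createdAt_-1" }
  ]
});
```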
[09:51:48] <repxl> do i need to create an index for field "username" every time i insert a new user?
[09:52:20] <arussel> no, you create the index once
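As arussel says, an index is a one-time setup; MongoDB maintains it automatically on every later insert and update. A typical one-off command (`ensureIndex` was the equivalent helper in older shells; the `unique` option is shown because usernames usually want it, not because repxl asked for it):

```javascript
// Create the index a single time; it is kept up to date from then on.
db.users.createIndex({ username: 1 }, { unique: true });
```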
[09:54:11] <repxl> arussel ok good, is there any good free mongodb client program where i could see the data i have in the database nicely, with the indexes visible? instead of searching in cli mode
[09:54:29] <repxl> arussel for mysql i used navicat .. pretty nice visualization of database.
[09:56:05] <kali> repxl: there are options here http://docs.mongodb.org/ecosystem/tools/administration-interfaces/
[10:04:13] <ikb> repxl: http://dbeaver.jkiss.org/
[10:04:14] <ikb> try
[11:46:17] <Trudko> Hi guys, I am trying to import a simple json but i get error: JSON array is too large: code FailedToParse: FailedToParse: Expecting 24 hex digits: ereAiFkyKYBjGfKxq: offset:179. Here is the json import, and under that an export from the existing db http://pastie.org/10396692
[11:47:21] <StephenLynx> have you tried mongodump and mongorestore?
[11:47:57] <StephenLynx> afaik, mongodump exports bson, which is 100% correct under mongo. mongoexport converts it to json, which might not be as valid.
[11:48:01] <StephenLynx> and cause issues.
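StephenLynx's suggestion as a concrete command pair (host/db names and the dump path are placeholders; requires running servers):

```shell
# Dump a database to BSON and restore it elsewhere:
mongodump  --host source-host --db mydb --out /tmp/dump
mongorestore --host target-host /tmp/dump
```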
[11:52:18] <Trudko> StephenLynx: Hi Stephen, dump and restore won't work because I am porting an existing csv file to mongodb
[11:52:29] <Trudko> the export I've used was for data created through application
[11:52:30] <StephenLynx> ah
[11:53:27] <revoohc> anyone know why a background index build would cause the lock percentage to report > 100%?
[11:56:06] <kali> revoohc: yeah, it's because some documents are going twice through the indexing process
[11:56:33] <kali> revoohc: it's because they have been updated, grown, and moved to the end of the data files
[11:56:44] <kali> revoohc: it is harmless, except breaking the progress report
[11:57:12] <Trudko> ps command for import is mongoimport --port 3001 -d meteor -c post --file posts-import.json --jsonArray
[11:57:15] <revoohc> kali: I thought a background index was supposed to be non-locking. So am I locking my db, or is it just a bad report? (v2.6.10)
[11:57:37] <revoohc> I’m getting the data from mongostat
[12:03:51] <Trudko> if I remove "$oid": "ereAiFkyKYBjGfKxq", the import succeeds
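That is exactly the error's point: in MongoDB extended JSON, the value of `"$oid"` must be an ObjectId, i.e. exactly 24 hexadecimal digits. "ereAiFkyKYBjGfKxq" is 17 characters long and contains non-hex letters, so mongoimport fails with "Expecting 24 hex digits". A quick standalone check (the valid example string is made up):

```javascript
// An extended-JSON $oid value must be a 24-digit hex string.
function isValidObjectIdHex(s) {
  return /^[0-9a-fA-F]{24}$/.test(s);
}

console.log(isValidObjectIdHex("ereAiFkyKYBjGfKxq"));        // false: 17 chars, non-hex
console.log(isValidObjectIdHex("55e9151c2f8a9b1dcc000001")); // true: 24 hex digits
```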
[12:06:11] <kali> revoohc: i don't understand what you are asking
[12:16:07] <revoohc> kali: while the index is building, mongostat is reporting the db being locked 100% of the time. Is this a bug in mongostat, or is the background index build really causing the db to be locked up? Normally this db is locked ~20%.
[12:16:58] <kali> revoohc: the background index locks, but yields the lock for the rest to get a chance to run
[12:17:23] <kali> revoohc: so it's to be expected. the index build uses the ~80% of lock time that is otherwise available
[12:19:11] <revoohc> kali: thanks for clearing that up for me.
[13:29:17] <repxl> i'm letting my users choose whatever username they want, including upper and lower case letters... but Alen and alen will both be unique and both usable. how do i search for a username no matter whether the letters are upper or lower case?
[13:32:35] <repxl> i will not use a regex because i rely on indexing
[13:34:00] <kali> repxl: you need to denormalize to a second field all lowercase
[13:35:07] <repxl> kali like a real_username field with "AlaN" and another field lower_username_for_comparison with "alan" :( so i need 2 fields
[13:36:27] <kali> yes
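kali's denormalization in code: alongside the display name, store a lowercased copy and index/query that second field, so lookups stay index-backed and no regex is needed. Field names are made up for illustration:

```javascript
// Denormalize a lowercase copy of the username for case-insensitive lookup.
function makeUserDoc(username) {
  return { username: username, usernameLower: username.toLowerCase() };
}

// On insert:
//   db.users.insert(makeUserDoc("AlaN"))
// Unique index + lookup on the lowercase field:
//   db.users.createIndex({ usernameLower: 1 }, { unique: true })
//   db.users.findOne({ usernameLower: userInput.toLowerCase() })
console.log(makeUserDoc("AlaN").usernameLower); // "alan"
```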
[13:59:02] <MXXN> Would using Mongo as file storage for a web application with millions of files be a good idea? (files must be stored locally) We have had many problems using the filesystem, so we want to move away from plain files.
[14:15:06] <repxl> i have a users collection, and now i want to make a new collection like "notes". in the user collection i want an array field "notes" which holds all the note documents the user has: "notes": [object_id, object_id, object_id ...]. but my question is: what happens when the user document exceeds 16 MB? i must think of the future...
[14:15:56] <repxl> like when a user has 1 million notes, for example, the user document will be quite long even if i store only the object ids of the note documents
[15:07:59] <rbw> hello. I'd like to copy a document into a field in another collection. Is this possible?
[15:13:22] <StephenLynx> hm
[15:13:32] <StephenLynx> not in a single operation.
[15:13:39] <StephenLynx> you will have to do it manually in application code.
[15:13:42] <StephenLynx> rbw
[15:23:41] <rbw> I've tried fetching the document, converting it into a json string and inserting it. but that doesn't work well. any ideas?
[15:24:01] <StephenLynx> you don't have to convert to string.
[15:24:16] <StephenLynx> you can just set it as it is on a field of the document.
[15:24:19] <StephenLynx> like
[15:24:38] <StephenLynx> collection.update({condition to match goes here},{$set:{field:document}})
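StephenLynx's one-liner spelled out as a two-step mongo-shell sketch (collection names and the match conditions are hypothetical; requires a live server):

```javascript
// 1) Fetch the source document from one collection...
var doc = db.source.findOne({ _id: someId });

// 2) ...and embed it, as-is, into a field of a document in another
// collection. No string conversion needed: the document is a value.
db.target.update(
  { _id: targetId },
  { $set: { copiedDoc: doc } }
);
```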
[15:26:22] <repxl> i save different apples and each apple has tags later when the db is big i want to track tags like "top 10 used tags" do i have to store tags differently in one document or what approach
[15:28:05] <rbw> StephenLynx: I'm using mongoengine as ODM btw, but yeah, you're right. you gave me some inspiration there. thanks :P
[16:49:02] <zejackal> is it possible, when creating a custom role, to allow admin for all dbs (including new, not yet existing) except for one?
[16:54:38] <cheeser> i don't think so
[16:57:47] <zejackal> can't use a $regex for maybe the db field?
[16:59:13] <cheeser> i don't think so, no
[17:07:13] <zejackal> ok, thanks
[18:56:59] <jr3> adding a new property to a collection with 10M documents is going to take a long time no matter how it's done, right?
[18:57:24] <StephenLynx> not really.
[18:57:32] <StephenLynx> are you going to update ALL the documents?
[18:57:36] <jr3> yes
[18:57:39] <StephenLynx> hm
[18:57:46] <StephenLynx> then yeah, it could take a while.
[18:57:55] <jr3> we actually need to add a property and add indexes
[18:58:03] <jr3> would dropping all indexes first help at all?
[18:58:27] <StephenLynx> probably not; it will take even longer overall, IMO, once you have to recreate all the indexes again.
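For reference, the usual shape of such a migration (collection and field names are hypothetical; `{multi: true}` is what makes the update touch every matching document rather than just the first):

```javascript
// Add newProp to every document that doesn't have it yet,
// then build the index the new field needs.
db.things.update(
  { newProp: { $exists: false } },
  { $set: { newProp: null } },
  { multi: true }
);
db.things.createIndex({ newProp: 1 });
```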
[19:21:21] <akoustik> is there a built-in way to "truncate" a collection? i.e. "remove all but n documents"
[19:21:46] <akoustik> without caring about anything special about the documents remaining
[19:23:15] <StephenLynx> I would first group the _id of the documents to not be deleted
[19:23:43] <StephenLynx> and then run a remove using {_id:{$nin:theIdsArray}}
[19:23:58] <akoustik> ah, ok, so i can do a .find.limit(total - n) and keep their ids
[19:24:05] <akoustik> perfect
[19:24:23] <akoustik> er, find(), but yeah ok.
[19:24:35] <akoustik> good call
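The approach agreed on above, in mongo-shell form. It keeps the n most recently inserted documents by natural order, since akoustik doesn't care which ones survive (n and the collection name are made up; requires a live server, and for very large collections the id array itself can get unwieldy):

```javascript
var n = 1000;

// Collect the _ids of the n documents to keep...
var keepIds = db.events.find({}, { _id: 1 })
                       .sort({ $natural: -1 })
                       .limit(n)
                       .map(function (d) { return d._id; });

// ...and remove everything else.
db.events.remove({ _id: { $nin: keepIds } });
```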
[19:28:09] <boboc> guys, is there any limit other than the 16mb per document in mongo?
[19:29:44] <boboc> if a lot of html content is stored in a property there is an error: "QUERY SyntaxError: Unexpected identifier". if i reduce the characters/content, it works.
[19:31:28] <StephenLynx> how are you storing it?
[19:31:36] <StephenLynx> the text probably contains characters that escape the string and break the query.
[19:31:49] <StephenLynx> I suggest you use gridfs to store html.
[19:33:44] <boboc> StephenLynx: thanks
[19:53:26] <tp43_> Hi, I was having trouble with mongo, it wouldn't show databases, maybe it was full, not sure, I just did apt-get purge and reinstalled it, but same thing, so then I deleted /etc/mongodb.conf, did apt-get purge and re-installed it, but it is now giving error "ExecStart=/usr/bin/mongod --config /etc/mongodb.conf (code=exited, status=1/FAILURE)"
[19:58:19] <tp43_> it was full, and uninstall/re-install wasn't deleting the databases, so I had to manually go into the /var/lib/mongodb/ directory and delete them; now all is fine
[20:06:35] <toastynerd> Is there documentation for the wire protocol requests that happen on a client connection or someplace where I can find out more about that process? The wire protocol doc doesn't quite go over that process.
[20:17:14] <windsurf_> is cursor.forEach synchronous?
[20:22:44] <zejackal> any thoughts on running mongo instances in lxc containers? maybe two on the same machine
[20:23:30] <StephenLynx> I wouldn't use containers in the first place.
[20:23:45] <StephenLynx> http://www.boycottdocker.org/
[20:23:46] <zejackal> neither would i. this is for a test environment
[20:23:51] <StephenLynx> still
[20:23:55] <StephenLynx> not worth the trouble
[20:24:28] <zejackal> so what would you do. to isolate multiple mongo processes. multiple servers? sounds expensive to me.
[20:24:38] <StephenLynx> VMs
[20:24:48] <zejackal> vagrant with virtualbox
[20:25:02] <StephenLynx> vagrant? what's that.
[20:25:50] <zejackal> https://www.vagrantup.com
[20:26:06] <zejackal> but you are suggesting vm over containers