PMXBOT Log file Viewer


#mongodb logs for Tuesday the 3rd of November, 2015

[00:10:45] <daidoji1> hello, in pymongo is it possible to have multiple '$addToSet' operations in an update?
[00:10:49] <daidoji1> and how might I pass that in?
[00:11:06] <daidoji1> currently it's failing because those '$addToSet' strings are treated as keys in a python dict
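[Editor's note: the usual answer to daidoji1's question, not given in the log, is that a single update document takes one `$addToSet` key whose value covers several fields, with `$each` for multiple values per field. A minimal sketch; the field names `tags`/`editors` and the collection are made-up examples:]

```python
# One update document can $addToSet into several fields at once;
# $each adds multiple values to a single array field.
update = {
    "$addToSet": {
        "tags": {"$each": ["python", "mongodb"]},  # several values, one array
        "editors": "daidoji1",                     # single value, another array
    }
}

# With a live connection this would be applied as (hypothetical names):
# db.articles.update_one({"_id": doc_id}, update)
print(update)
```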
[01:42:51] <sabrehagen> i'd like to run db.currentOp() to find long running tasks. this blog post suggests i need to be an admin user: http://blog.mongolab.com/2014/02/mongodb-currentop-killop/
[01:43:38] <sabrehagen> i've authed against my admin database and my role is 'userAdminAnyDatabase' but running db.currentOp() returns { err : 'unauthorized' }
[01:43:43] <sabrehagen> where might i be going wrong?
[02:52:42] <Boomtime> sabrehagen: hi, sorry for the delay, is your question still relevant?
[02:53:28] <Boomtime> in short, userAdminAnyDatabase is a role for administering users, it can't do much in the way of data or realtime control type access
[02:53:58] <Boomtime> however, that role does permit you to either grant yourself the extra needed privileges, or to grant somebody else such privileges
[02:55:22] <Boomtime> the most restrictive built-in role that includes the ability to currentOp is clusterMonitor
[02:56:05] <Boomtime> if you don't want all the extra stuff that comes with, you can just grant yourself the specific privilege to issue the 'inprog' action (which is the real command behind currentOp)
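[Editor's note: Boomtime's suggestion, granting just the `inprog` action, is done with a `createRole` command against the admin database. A sketch of the command document via PyMongo; the role name `opMonitor` is a made-up example:]

```python
# A custom role carrying only the "inprog" action (the command behind
# db.currentOp()), scoped to the cluster resource.
create_role = {
    "createRole": "opMonitor",
    "privileges": [
        {"resource": {"cluster": True}, "actions": ["inprog"]},
    ],
    "roles": [],  # no inherited roles
}

# With a live connection, run against the admin database:
# client.admin.command(create_role)
# then grant it: client.admin.command("grantRolesToUser", "sabrehagen",
#                                     roles=[{"role": "opMonitor", "db": "admin"}])
print(create_role["createRole"])
```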
[03:20:49] <sabrehagen> Boomtime: thanks, still relevant and helpful answer
[03:21:53] <sabrehagen> i'll try granting myself that privilege and seeing how i go
[03:23:03] <sabrehagen> also, in 2.6 .explain() returned a very different result to explain() in 3.0. i've read about the changes on mongodb.org, but can't find out how to run the same style of explain() as 2.6. i'd like to know the cursor type and nscanned* stats in 3.0
[04:23:50] <cppking> hello guys , in a 3 nodes repl-set , 1 master and 3 secondary, if one secondary is down , at the meantime, master failed, will this repl set still work normally?
[04:25:56] <Boomtime> "in a 3 nodes repl-set , 1 master and 3 secondary" <- are there 3 nodes, or 4?
[04:26:54] <Boomtime> in either case, if 2 members fail, then you don't have a majority - also, avoid the term 'master' as that was used in the old system, the correct term is primary
[04:31:25] <cppking> sorry, 1 primary and 2 secondary
[04:31:58] <cppking> one secondary is down , then primary failed , will this replset still work normally
[04:32:05] <cheeser> nope
[04:38:05] <joannac> it will work for reads (assuming you allow secondary reads) but not writes
[06:03:36] <cppking> joannac: what if 1 primary and 3 secondary , primary failed when one secondary is down , after election , will this replset still work normally?
[06:04:26] <sabrehagen> in 2.6 .explain() returned a very different result to explain() in 3.0. i've read about the changes on mongodb.org, but can't find out how to run the same style of explain() as 2.6. i'd like to know the cursor type and nscanned* stats in 3.0
[06:11:15] <btcbuy314> I'm making a music playlist website and I have two tables, one which stores songs, and the other, playlists, which stores title of playlist and an array of _id of different songs
[06:11:37] <btcbuy314> I'm having a lot of trouble updating the order of the array
[06:12:04] <btcbuy314> how would I do this? Everything seems to have no errors, but then the database table is unchanged
[06:27:50] <phreelynx> @btcbuy314: please elaborate "I'm having a lot of trouble updating the order of the array"
[06:41:05] <btcbuy314> so basically, i have an array of ids saved in the playlist table, I iterated through the array, changed each one to the next one and the last to one before
[06:41:36] <btcbuy314> and the last one to the first*
[06:41:41] <btcbuy314> and save
[06:41:44] <btcbuy314> but no change in output
[06:44:39] <phreelynx> how about having a separate key, in the playlist document, for the current_song which contains the index of the song in the playlist array. Once a song completes you can just change current_song to index+1
[06:46:01] <btcbuy314> no I want to change the order of the array
[06:46:27] <btcbuy314> im not actually changing the order to +1 , i just said that to make it simpler
[06:46:54] <btcbuy314> im trying to implement a reordering of the playlist
[06:47:01] <btcbuy314> so number 5 could get moved to number 2
[06:49:56] <btcbuy314> http://pastebin.com/Cscwgs8A
[06:50:56] <btcbuy314> heres a pastebin, the msg.indexList is basically a list the same length as doc.videos but whats in it would be something like [0,2,1,4]
[06:51:03] <btcbuy314> to show that the new order needs to be that
[06:53:39] <btcbuy314> something that I think will help solve this is doc.videos.pop() successfully deletes the last video, but if I try to duplicate the first video with doc.videos.push(doc.videos[0])
[06:53:44] <btcbuy314> it wont duplicate
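[Editor's note: given btcbuy314's description of `msg.indexList` as a permutation of positions (e.g. `[0,2,1,3]`), the reorder itself is a one-liner, and the result would then be written back with a whole-array `$set`. A sketch under that interpretation of indexList:]

```python
def reorder(videos, index_list):
    """Return videos rearranged so position i holds videos[index_list[i]]."""
    return [videos[i] for i in index_list]

videos = ["a", "b", "c", "d"]
reordered = reorder(videos, [0, 2, 1, 3])
print(reordered)  # ['a', 'c', 'b', 'd']

# With a live connection, persist the new order in one update (names hypothetical):
# playlists.update_one({"_id": playlist_id}, {"$set": {"videos": reordered}})
```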
[06:54:05] <Silverfall> if I have 100000 documents and I want to check the one with the highest "{ size: ...}" value, does limiting search results increase my performance or is it useless since mongo has to iterate over the whole collection anyway
[06:54:44] <preaction> but it only has to ship to you the one that matches
[06:54:46] <phreelynx> btcbuy314: have you tried $slice'ing it ?
[06:55:00] <btcbuy314> not really sure what you mean by that
[06:55:01] <preaction> Silverfall: and if you find yourself making that query often, you might want an index
[06:56:41] <btcbuy314> phreelynx: What do you mean?
[07:02:49] <Silverfall> preaction so even if I already have a running database full of "size" values, entering this single command in the shell "db.mycollection.createIndex( { size: 1 } )" would allow me to permanently keep them ordered by size without making mongo work too much?
[07:03:35] <preaction> it creates an index that's ordered by size. as for how much work happens, well, that's the trade-off. the index must be updated when a write happens
[07:04:40] <Silverfall> so unless I have some amount X of documents it isn't worth making the index to improve overall performance.. could you define the value of X in this case?
[07:05:24] <preaction> no. that's what benchmarking is for
[07:05:44] <preaction> there are too many variables for me to even fathom a guess at what one of them, X, could be
[07:06:17] <Silverfall> well, I see two variables involved: the frequency I write new documents and the frequency I check the whole collection ordered by size
[07:07:02] <preaction> you're missing: the amount of memory used to hold indexes. the size of the documents that must be scanned. the speed of the disk used for storage.
[07:07:25] <preaction> the number of shards that must keep up-to-date indexes
[07:07:32] <Silverfall> ahahah, I think I'm just gonna leave it like it is now.. if it starts to slow down I'll apply an index XD
[07:07:32] <preaction> i'm sure there're more
[07:07:40] <Silverfall> thanks for the tip tho
[07:07:50] <preaction> or just whip up a dev environment, which you should already have, and run some benchmarks
[07:09:42] <cppking> In mongo 2.x , does every insert or update action take an implicit database-level lock?
[07:09:44] <Silverfall> there are too many variables, like user behaviour, that I cannot predict programmatically
[07:27:27] <Iskandar> morning
[07:27:47] <cppking> Iskandar: morning
[07:28:13] <cppking> In mongo 2.x , does every insert or update action take an implicit database-level lock?
[07:29:08] <nishu-tryinghard> guys i made dump of a remote db and i want to restore it locally and use it.
[07:30:59] <cppking> nishu-tryinghard: what's the problem?
[07:31:55] <nishu-tryinghard> cppking, nvm, i was just new so i thought the dump would directly make a db locally. I came to know that i need to use restore as well to make a copy from the dump.
[07:32:55] <cppking> yes, you need to restore from the dumped files
[07:33:06] <nishu-tryinghard> cppking, can i use mongorestore without --dbpath param so that it can use the default one?
[07:33:40] <cppking> better add it
[07:34:25] <nishu-tryinghard> it worked without the dbpath params
[07:34:49] <nishu-tryinghard> restore done ty
[07:53:26] <Iskandar> hey cppking
[07:55:21] <Iskandar> how do i specify within the mongo.conf that it should be run as a configserver?
[07:56:55] <Iskandar> manual says that i should run the mongod from the command line, but i want to use the mongod as a service using the conf file
[07:59:07] <cppking> Iskandar: maybe it's sharding.clusterRole=configsvr
[07:59:22] <cppking> https://docs.mongodb.org/manual/reference/configuration-options/
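[Editor's note: cppking's answer in the YAML config format that mongod reads from its conf file (MongoDB 2.6+); a minimal sketch of the relevant fragment — 27019 is the conventional config-server port, though Iskandar uses 21019 below:]

```yaml
# mongod.conf fragment for a config server
sharding:
  clusterRole: configsvr
net:
  port: 27019
```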
[08:00:56] <Iskandar> thanks :-)
[08:05:33] <mylord> what’s the recommended way to install MongoDB on CentOS?
[08:05:53] <malware> add repo install via yum
[08:06:14] <malware> https://docs.mongodb.org/manual/tutorial/install-mongodb-on-red-hat/
[08:39:52] <Iskandar> hmmm, the yaml conf fails with redhat service mongo init script: dirname: missing operand
[08:40:05] <Iskandar> using mongodb3.0.6 on rhel 6
[08:40:26] <Iskandar> is there a non yaml version of that manual somewhere?
[08:57:10] <sabrehagen> how do i tell what type of cursor a query is using with mongodb 3.0? it's not shown when using .explain() for me
[09:00:27] <cppking> sabrehagen: maybe you need to pass verbosity modes to explain function
[09:00:28] <cppking> https://docs.mongodb.org/manual/reference/method/cursor.explain/
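[Editor's note: the 2.6-style fields sabrehagen is looking for were renamed in the 3.0 explain output rather than removed; the old cursor type is now implied by the winning plan's stage names. A summary of the documented mapping:]

```python
# Where the 2.6 explain() fields live in 3.0 explain("executionStats") output.
FIELD_MAP_2_6_TO_3_0 = {
    "nscanned": "executionStats.totalKeysExamined",
    "nscannedObjects": "executionStats.totalDocsExamined",
    "n": "executionStats.nReturned",
    "millis": "executionStats.executionTimeMillis",
    # "BtreeCursor x_1" style cursor types became stages in the winning plan,
    # e.g. an IXSCAN stage with an indexName field:
    "cursor": "queryPlanner.winningPlan (stage names, e.g. IXSCAN/COLLSCAN)",
}
print(sorted(FIELD_MAP_2_6_TO_3_0))
```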
[09:16:16] <Iskandar> how do i check if my conf is valid ? i get a failure but can't find where the conf error is
[09:17:46] <csst0111> Somehow i messed around with my db and trying to re-import the json files gives me : "error inserting documents: E11000 duplicate key error index: ..."
[09:20:22] <cppking> Iskandar: check the mongod error log
[09:20:52] <Mindiell> hi there, is the difference between the distro packages and the .deb packages really a problem ?
[09:21:05] <Mindiell> I would just like to install and test mongoDB
[09:21:16] <Mindiell> should I go directly on .deb packages ?
[09:22:10] <cppking> Mindiell: yes, you should
[09:22:22] <Mindiell> ok, thx
[09:22:42] <Mindiell> answer is clear ;o)
[09:25:02] <Iskandar> thanks
[09:25:05] <Iskandar> got it :)
[09:25:49] <Iskandar> btw, on redhat im missing the mongos service in /etc/rc.d/init.d since i want to run a mongos and config server on one machine for testing purposes
[09:29:24] <Mindiell> Iskandar: which version ?
[09:30:55] <cppking> Iskandar: just copy /etc/rc.d/init.d/mongod to /etc/rc.d/init.d/mongos then make some changes to it
[09:30:56] <Iskandar> rhel6
[09:31:15] <Mindiell> Iskandar: I was speaking about mongo version ;o)
[09:31:29] <Mindiell> "As of version 3.0.7, there are no init scripts for mongos."
[09:31:38] <Iskandar> im using 3.0.6
[09:31:47] <Mindiell> ah
[09:31:48] <Iskandar> thats the latest version for rhel6
[09:31:51] <Iskandar> from repos
[09:32:41] <Iskandar> but i only need to change the configfile location right?
[09:39:24] <cppking> Iskandar: you said you're trying to set up a cluster on one node
[09:39:58] <Iskandar> sort of, i have 3 machines, i want to use 1 machine for the router and configdb and the other 2 as shards
[09:40:23] <Iskandar> no need for replicasets, since its a development environment
[09:41:09] <cppking> Iskandar: you want to set up the config server and router on the same node?
[09:41:15] <Iskandar> yup
[09:41:31] <Iskandar> i have the configserver running on port 21019
[09:41:39] <Iskandar> so now i want to boot up the mongos
[09:41:58] <Iskandar> i just copied over mongod initscript and changed the configlocation
[09:42:08] <Iskandar> but im wondering if i need to do more
[09:42:11] <cppking> you need to install another mongodb server in a separate dir like /opt/ , then boot it up
[09:42:17] <Iskandar> you know the pids and such
[09:42:49] <Iskandar> i want it to run it from the service and not just from the command line
[09:42:52] <cppking> how did you install your current mongodb server, deb package?
[09:42:57] <Iskandar> yum
[09:43:05] <Iskandar> which is rpm
[09:43:34] <cppking> so you need to install another mongodb server from a tarball, then configure it as a router
[09:45:33] <Iskandar> ok
[09:47:16] <Iskandar> so running two init scripts is a nono unless i install two mongodb's
[09:49:02] <cppking> maybe you don't need to install another mongo, just start mongos with correctly configured options like --dbpath and --shardingRole
[09:49:21] <cppking> but I didn't try it , so I recommend you install another mongodb server
[09:51:46] <Iskandar> ok
[10:20:10] <Mindiell> what is the purpose of databases in mongodb ?
[10:20:40] <Mindiell> is it the same thing to have two collections in one database and two databases with one collection in each ?
[10:20:50] <Mindiell> I mean, structurally, not functionally
[10:33:38] <Iskandar> i have made this script: http://pastebin.com/THmUJj1E when i run mongos --config /etc/mongos.conf the mongos runs fine and dandy, but when running service mongos start, i get a fail. The mongos.log leaves no trace for me
[10:33:55] <Iskandar> so i think the script is not correct, but i dont see it
[10:35:56] <sabrehagen> cppking: i've tried the verbosity modes, none show the cursor type
[10:36:47] <cppking> sabrehagen: well, I don't know either
[10:37:13] <sabrehagen> okay, thanks
[10:37:28] <cppking> Iskandar: so you must have a wrong configuration file
[10:49:19] <pamp> Im running a database repair, what are the consequences of stopping the process?
[11:14:34] <Iskandar> http://pastebin.com/cnJkA8KA here is my output using the same /etc/mongos.conf
[13:19:18] <CustosLimen> hi
[13:19:27] <CustosLimen> so from 2.2 to 2.4 there is some metadata changes
[13:19:39] <CustosLimen> how long does it take for all the metadata to be upgraded ?
[13:54:23] <kexmex> new mongo c# driver is pretty bad imho
[13:56:53] <deathanchor> CustosLimen: sharded cluster?
[13:59:01] <CustosLimen> deathanchor, yeah - but nvm
[13:59:26] <deathanchor> CustosLimen: depends how big the data is on your configsvr, but usually a minute
[13:59:41] <deathanchor> doesn't affect anything while it goes on, cause you should have your balancer off
[14:04:10] <CustosLimen> deathanchor, I was actually not clear on what exactly was affected - when I realized it's only config server data I stopped worrying
[14:10:38] <ramnes> jow jow
[14:11:10] <ramnes> does anyone know if there is a way to tell PyMongo to return control immediately (not block) when doing a collection.create_index()?
[14:11:21] <ramnes> I tried w=0 but it doesn't change anything and is still blocking :(
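[Editor's note: ramnes's observation matches documented behaviour: `background=True` only keeps the server from blocking other connections, while the connection issuing the build still waits, and write concern doesn't apply to commands. A common workaround, not an official PyMongo feature, is to run the blocking call on a worker thread; a sketch with a stand-in for the real call:]

```python
import threading

def build_index_async(create_index_call):
    """Run a blocking create_index call on a daemon thread so the
    caller gets control back immediately. A sketch, not PyMongo API."""
    t = threading.Thread(target=create_index_call, daemon=True)
    t.start()
    return t

# With a live connection the real call might look like:
# t = build_index_async(lambda: coll.create_index([("size", 1)], background=True))

done = []
t = build_index_async(lambda: done.append(True))  # stand-in for the real build
t.join()
print(done)  # [True]
```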
[14:11:50] <ramnes> (and background only affects the op server-side)
[14:26:10] <Iskandar> if i use sh.addShard(), will this be persisted so when i reboot the server, this mongo instance will be part of the shard again?
[14:26:23] <cheeser> yes
[14:26:26] <Derick> yes
[14:26:28] <Derick> oh hai cheeser
[14:28:58] <cheeser> what up, yo?
[14:30:40] <Iskandar> thanks :)
[14:31:12] <yopp> hum
[14:31:22] <pamp> Hi
[14:31:32] <pamp> I started a database repair
[14:31:40] <pamp> but it's taking too much time
[14:31:58] <yopp> will aggregation framework block something, when result is written to the output collection?
[14:32:07] <pamp> what are the consequences of stopping the process?
[14:33:19] <cheeser> pamp: indeterminate state
[14:33:36] <cheeser> yopp: it'll take a write lock
[14:35:26] <pamp> it's a standalone server.. will I have problems if I stop the process before completion?
[14:35:54] <yopp> cheeser, on the output collection?
[14:35:55] <cheeser> probably
[14:36:05] <pamp> at this time the db has more than 1.5 TB
[14:36:11] <cheeser> yopp: yes. and there will be read locks, such as they are, on the input collection
[14:39:05] <yopp> cheeser, so pretty much the same as with map/reduce with output to the collection?
[14:39:37] <cheeser> same as any write, really
[14:40:11] <yopp> is there any difference on WT?
[14:40:44] <cheeser> document level locks
[14:45:39] <yopp> cheeser, to simplify: I've got a few sharded collections for pre-aggregated data (raw data, 1h, 1d, 1y, etc). Aggregation is done step by step, from the raw to the 1y collection. Is there any benefit in using the aggregation framework instead of map/reduce in this case?
[14:48:10] <cheeser> aggregation will be more efficient OOB since it can work with the bson directly.
[14:48:21] <cheeser> map/reduce will involve conversions to json
[14:48:32] <cheeser> and is single threaded
[14:48:42] <cheeser> aggregations can be done across the shards
[14:49:43] <yopp> So, if right now with map/reduce we're in fact running only one map/reduce at a time, with aggregation they can run simultaneously? With document-level locking, only affected documents will be locked, right?
[14:49:54] <yopp> on both: in and out collection?
[14:50:13] <cheeser> internally, aggregations are more parallelizable than map/reduce
[14:52:03] <yopp> Apart from the 100mb limit, are there any other "catches" with the aggregation framework?
[15:07:32] <repxxl> hello, i have 2 fields like "name" : "This is a headline", "tags": ["School", "5B", "2015"]. i wanna search by name and at the same time also search the array values, returning the matches with priority so that tag matches show first, then name matches
[15:08:15] <repxxl> what do i need exactly ? i was thinking something like a regex on "name" with $or and $in together
[15:14:24] <repxxl> :/
[15:17:29] <cheeser> ok. so what's the question?
[15:24:04] <deathanchor> yopp: you still have the 16mb limit on the results unless you use a cursor
[15:27:14] <repxxl> cheeser i wanna search by a single field or by array values, giving priority to returning the array-value matches first in the mongo cursor
[15:30:29] <cheeser> you don't necessarily need regex for that.
[15:36:27] <ruphin> Hi, I have some questions about upgrading my config servers to 3.2. I'd like to use the new config server setup with replica sets, since I am having issues with config servers running out of sync. I haven't found a manual anywhere.
[15:37:05] <cheeser> i'm not sure those have been published yet.
[15:38:59] <ruphin> Can anyone here give me a few quick pointers on how to migrate? I have some experience working with mongo, just need to know what the basic strategy would be
[15:39:55] <ruphin> I'm thinking it would probably be easiest to isolate a single config server in my current setup, restart it with 3.2 and configure it as a replica set member, then add new empty servers to that replica set
[15:40:34] <ruphin> But I'm not sure if I can simply start a 3.2 config server on a 3.0 database
[15:40:51] <ruphin> If someone can confirm that this is safe, I can go ahead with the above procedure
[15:42:38] <repxxl> cheeser i now have something like this http://pastebin.com/KEv7qK8V
[15:43:18] <repxxl> cheeser but its not working for me can you maybe tell me what is my problem ? or how to do it right ty
[15:44:03] <cheeser> looks okish to me
[15:47:04] <repxxl> cheeser why ?
[15:48:08] <repxxl> cheeser okish means okey ?
[15:48:35] <repxxl> cheeser i have to decode your encrypted language first to get you :D
[15:52:10] <diegoaguilar> Hello, how can I install mongodb 3.2
[15:52:21] <diegoaguilar> can't find instructions for that branch
[15:53:23] <cheeser> diegoaguilar: https://www.mongodb.org/downloads#development
[15:53:28] <cheeser> repxxl: yes, it looks fine to me.
[15:53:45] <repxxl> cheeser ok, it works. how can i now sort the find query to get the matched "tags" first instead of the "name" matches
[15:53:52] <diegoaguilar> cheeser, is 3.2 still in rc phase??
[15:53:54] <repxxl> cheeser can i control this in mongo ?
[15:53:59] <cheeser> diegoaguilar: yes
[15:54:39] <diegoaguilar> cheeser, know when it will be out as stable release?
[15:55:15] <repxxl> cheeser the appropriate documents i mean, so i get a list of all documents found by "name" and "tags" but get the tag-matched documents first in the cursor
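[Editor's note: the sorting repxxl asks for never gets answered in the log. One way to do it server-side, sketched here as a PyMongo aggregation pipeline, is to compute a flag for tag hits and sort on it; the search term and field names come from repxxl's example, the `tagHit` key is made up:]

```python
import re

search = "2015"  # example term from repxxl's document
pipeline = [
    # match either an exact tag or a (case-insensitive) substring of name
    {"$match": {"$or": [
        {"tags": search},
        {"name": {"$regex": re.escape(search), "$options": "i"}},
    ]}},
    # flag documents whose tags array contains the term
    {"$project": {
        "name": 1, "tags": 1,
        "tagHit": {"$cond": [
            {"$setIsSubset": [[search], {"$ifNull": ["$tags", []]}]}, 1, 0,
        ]},
    }},
    # tag hits first, then name-only hits
    {"$sort": {"tagHit": -1}},
]
# With a live connection: results = coll.aggregate(pipeline)
print(len(pipeline))
```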
[15:58:31] <cheeser> diegoaguilar: i do.
[15:59:13] <diegoaguilar> cheeser, so when is that
[16:01:08] <cheeser> i can't say ;)
[16:01:15] <diegoaguilar> lol
[16:01:20] <diegoaguilar> why
[16:01:21] <diegoaguilar> :D
[16:01:56] <cheeser> because that's not public info yet and subject to change
[16:06:32] <x1nux> Hello ! I have a mongo DB 2.6.7, and my hard drive is full, I deleted some records, but the disk size didn't go down, is there a way to delete those records to reduce disk size ?
[16:08:06] <cheeser> db.repairDatabase()
[16:08:16] <cheeser> but it'll lock while that runs
[16:08:25] <x1nux> umm ok
[16:08:28] <xhip> quick question: since my apt-get doesn't want to update my mongo.. how can I do it manually?
[16:08:54] <vfuse> repxxl: use map reduce
[16:10:50] <cheeser> hardly ever the right answer
[16:11:24] <vfuse> depending on your dataset will probably be slow
[16:13:32] <x1nux> cheeser, but this command "db.repairDatabase()" needs free space on the hard disk, doesn't it ?
[16:13:54] <cheeser> oh, that's true. yeah. you'll need 2x space. is this a replSet?
[16:14:04] <x1nux> yes
[16:14:16] <x1nux> oh my ..
[16:14:30] <x1nux> i have a problem ..
[16:14:37] <x1nux> no space
[16:14:43] <cheeser> all your nodes are out of space? :D
[16:14:57] <x1nux> yes !
[16:15:08] <cheeser> EC2?
[16:15:43] <x1nux> nop
[16:15:47] <x1nux> local installation ..
[16:16:08] <cheeser> can you spin up a new node?
[16:16:15] <x1nux> yes
[16:16:21] <x1nux> this is the solution ..
[16:16:39] <cheeser> bring up a new replset member with more space. the initial sync will end up using less space anyway.
[16:16:42] <x1nux> new node .. resync .. and .. resync the other nodes ..
[16:16:47] <cheeser> yep
[16:16:52] <x1nux> thks !
[16:16:54] <cheeser> np
[17:27:27] <Jacoby6000> If i have a collection with a.b.c.foo and d.e.f.foo, how can I set all keys "foo" to a value?
[17:27:47] <Jacoby6000> basically I want to set a key to a certain value, regardless of where it is in the tree
[17:35:10] <symbol> Jacoby6000: As far as I know, that isn't possible within Mongo. You'd need to handle that server side or structure your data in a different manner.
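[Editor's note: symbol's "handle that server side" (i.e. in the application) answer can be sketched as a short recursive walk, since MongoDB has no wildcard-path update operator; you would rewrite the document client-side and save it back. A sketch, with the helper name made up:]

```python
def set_key_everywhere(doc, key, value):
    """Recursively set every field named `key` in a nested document,
    descending through sub-documents and arrays."""
    if isinstance(doc, dict):
        for k in doc:
            if k == key:
                doc[k] = value
            else:
                set_key_everywhere(doc[k], key, value)
    elif isinstance(doc, list):
        for item in doc:
            set_key_everywhere(item, key, value)
    return doc

doc = {"a": {"b": {"c": {"foo": 1}}}, "d": {"e": {"f": {"foo": 2}}}}
set_key_everywhere(doc, "foo", 42)
print(doc["a"]["b"]["c"]["foo"], doc["d"]["e"]["f"]["foo"])  # 42 42
```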
[18:02:56] <dddh> hm
[18:03:02] <dddh> mongodb 3.2 announced today ..
[18:08:22] <symbol> I'm excited to play around with it.
[18:16:25] <dddh> symbol: time to upgrade?
[18:16:55] <symbol> I probably will but I'm not running a critical site.
[18:17:13] <dddh> rolling upgrades?
[18:20:35] <dddh> symbol: already using wiredtiger?
[18:21:04] <symbol> Not sure what you mean by rolling upgrades. I'm not currently using wiredtiger. Probably should be though.
[18:26:22] <dddh> hm, so the main new feature is join?
[18:37:21] <cheeser> one of many
[18:43:48] <dddh> partial indexes, validation and some olap
[18:44:21] <dddh> .. spidermonkey