PMXBOT Log file Viewer


#mongodb logs for Tuesday the 1st of September, 2015

[06:41:09] <aps> I locked my RS secondary using fsyncLock and now I'm unable to Unlock it. db.fsyncUnlock is just stuck. What to do? :/
[06:41:44] <joannac> same shell?
[06:42:01] <aps> joannac: no, I had used --eval
[06:42:08] <joannac> also, db.fsyncUnlock(), right?
[06:42:19] <aps> joannac: yes
[06:42:22] <joannac> aps: that makes no sense. What's the full line?
[06:43:41] <stekla> hi guys. i tried to restore my mongo db with mongorestore, but i got an error: Failed: error connecting to db server: no reachable servers. How can I fix this?
[06:43:52] <joannac> start the server?
[06:43:57] <aps> joannac: I locked it doing this on terminal-> mongo --eval "db.fsyncLock()"
[06:43:57] <aps> now, doing mongo --eval "db.fsyncUnlock()" does nothing, even tried db.fsyncUnlock() by going in mongo shell
[06:44:05] <joannac> or specify the right hostname and/or port
[06:44:13] <joannac> aps: check the logs on the secondary
[06:45:18] <aps> checking...
[06:46:14] <stekla> seems like the server is already running, because i can use mongo console.
[06:49:35] <stekla> mongodb service running on 27017 port, and mongorestore uses 27017 by default.
[06:52:23] <aps> joannac: this is what I get, then it's stuck https://www.irccloud.com/pastebin/JjQJdHAD/
[06:53:18] <aps> joannac: it's not the same shell actually, I had to re-login. Could that be a problem?
[06:53:34] <joannac> stekla: pastebin the whole mongorestore line
[06:54:01] <joannac> aps: "re-login"?
[06:54:43] <aps> joannac: err, I mean I SSHed again
[06:54:46] <joannac> aps: fsyncLock blocks all writes. writes take precedence over reads. one write when fsyncLocked means no reads will occur. that includes auth
[06:55:08] <aps> joannac: auth is disabled on this mongo
[06:55:19] <joannac> aps: then I need more logs.
[06:55:35] <joannac> specifically, from the time you locked, to present
[06:56:32] <stekla> http://pastebin.com/m0HLSqQn
[06:56:36] <aps> let me check
[06:57:36] <joannac> stekla: please follow the syntax in the docs http://docs.mongodb.org/manual/reference/program/mongorestore/
[06:58:03] <joannac> stekla: what you want is mongorestore localhost:27017 -d my_dump my_dump
[06:58:25] <joannac> assuming you want to restore to a db called "my_dump" and the actual dumpfiles are in the current directory in a folder called "my_dump"
[07:01:10] <stekla> http://pastebin.com/aY1PnG2p
[07:02:54] <stekla> but i used mongorestore my_dump -d my_dump command on another computer and it worked pretty well.
[07:05:14] <joannac> oops, sorry. mongorestore -h localhost:27017 ...
[07:14:08] <aps> joannac: http://pastebin.com/iVzwepHB
[07:14:45] <aps> it's trying to remove old journal files that are somehow missing
[07:16:56] <joannac> aps: are you sure it's still locked?
[07:17:15] <joannac> aps: if you try and do a write, what happens?
[07:17:53] <aps> joannac: db.currentOp() says https://www.irccloud.com/pastebin/UffDFJBx/
[07:20:12] <stekla> i got the same error http://pastebin.com/RjN27KND
[07:20:59] <joannac> stekla: at this stage, I want proof that this mongod actually exists
[07:21:07] <joannac> stekla: mongo shell output?
[07:24:02] <joannac> aps: what version?
[07:24:24] <aps> joannac: 2.6.11
[07:24:55] <stekla> http://pastebin.com/fc371wHL
[07:25:36] <joannac> stekla: mongo shell output from connecting please
[07:26:13] <stekla> http://pastebin.com/Kfq4M2iJ
[07:28:06] <joannac> that's bizarre
[07:29:16] <joannac> stekla: mongo --host localhost:27017 ?
[07:30:30] <stekla> http://pastebin.com/MGg6Njjg
[07:30:50] <joannac> doubly bizarre
[07:31:43] <joannac> stekla: mongorestore --version ?
[07:31:52] <tagrudev> hello guys I am having this issue with mongo and a request for removing a string from a nested array
[07:31:54] <tagrudev> https://gist.github.com/tagrudev/293f69e20fe7a17ca436
[07:32:22] <stekla> http://pastebin.com/UbhE8L7G
[07:32:48] <joannac> stekla: right. why are you running 3.1.7 mongorestore?
[07:32:57] <joannac> can you run the same version as your server?
[07:33:37] <Boomtime> tagrudev: your use of $ positional operator has no match in the query portion, you matched on an exact match of _id - there is no array position for $ to represent
[07:34:47] <aps> joannac: should I just restart the mongo instance?
[07:34:49] <tagrudev> Boomtime, the json is just a single record from the collection
[07:35:06] <tagrudev> which is selected by _id: id
[07:35:34] <Boomtime> what do you expect $ to do?
[07:36:27] <Boomtime> tagrudev: if you expect to match _id inside the modifications array then your query should be {"modifications._id":id}
[07:36:28] <tagrudev> Boomtime, look in all available modifications
[07:36:57] <Boomtime> $ means to locate the item that matched the query - it is not magic
[07:36:58] <tagrudev> nah I want to remove a string from the users
[07:37:17] <tagrudev> and I am wondering if there's a way to do so
[07:37:28] <Boomtime> if you match on the top-level _id exactly, then $ has no meaning
[07:38:30] <tagrudev> I see hmm is there a way to accomplish what I am looking for ?
[07:38:41] <Boomtime> you need to include the user match in the query portion so the $ has something to reference
[07:39:49] <tagrudev> wouldn't it match the userId
[07:40:48] <Boomtime> to be honest, i think you'd be better off doing the modification on the client - the result is potentially complex - either that or you would need to run the update multiple times to ensure it hit every match
[07:41:26] <Boomtime> an update with a predicate only matches once - it doesn't keep matching and applying further updates to the same document - it makes exactly one change
[07:41:39] <tagrudev> one change is what I need
[07:41:40] <tagrudev> :)
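Boomtime's client-side suggestion above can be sketched in plain JavaScript. The document shape (a "modifications" array whose elements hold a "users" array) is assumed from the linked gist, so treat the field names as illustrative:

```javascript
// Client-side sketch of Boomtime's advice: instead of fighting the $
// positional operator, load the document and pull the string out of each
// nested array in application code, then save it back.
function removeUserFromModifications(doc, userId) {
  for (const mod of doc.modifications) {
    const i = mod.users.indexOf(userId);
    if (i !== -1) mod.users.splice(i, 1); // removes the first occurrence per sub-array
  }
  return doc;
}

const doc = {
  _id: 1,
  modifications: [
    { _id: 'a', users: ['alice', 'bob'] },
    { _id: 'b', users: ['bob', 'carol'] },
  ],
};

removeUserFromModifications(doc, 'bob');
console.log(JSON.stringify(doc.modifications.map(m => m.users)));
```

As Boomtime notes, a single server-side update with $ would only touch the one array element matched by the query; the loop above handles every nested array in one pass.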
[08:28:35] <alaee> Hi guys, is a result of .explain('executionStats') with "executionTimeMillisEstimate" : 0 something good?
[08:28:37] <alaee> How should I know I'm doing my queries the right way?
[09:52:10] <aps> Suppose all (two) mongo members in a RS are down for some time, then they come back but one is in "RECOVERING" state and the other is "SECONDARY". How do I make one of them primary?
[09:52:33] <Derick> they'll become primary on their own after recovery has finished
[09:54:15] <aps> Derick: it seems to be stuck in RECOVERING state for quite some time now. How do I make sure there is progress in recovery process?
[09:54:28] <Derick> check the log - it should tell you
[09:54:59] <aps> okay, there's no primary left right now. Does a member still recover when there is no primary?
[09:56:47] <aps> I see lots of these - [rsHealthPoll] replset info x.x.x.x:2701 heartbeat failed, retrying - in the log @Derick
[09:56:55] <Derick> yes, from the secondary
[09:57:00] <Derick> can the nodes talk to each other?
[09:57:36] <aps> Derick: yes
[09:58:20] <Derick> which IP is being shown? is that the IP of the other node? and is the port really 2701?
[10:00:02] <aps> Derick: yes, both IP and port are correct. I think the issue is that I was trying to add auth and I have screwed it somehow
[10:12:28] <Derick> aps: that's possible :-/
[10:14:55] <aps> Derick: yeah, that was the issue. Related question, is it possible to enable auth for just one collection and not on db level?
[10:16:25] <Derick> no
[11:22:50] <yopp> Uh.
[11:23:09] <yopp> 3.0.6 + WT = major performance degradation over time :|
[11:55:38] <cheeser> yopp: filed a ticket?
[11:55:45] <yopp> yep
[11:56:01] <yopp> https://jira.mongodb.org/browse/SERVER-20149
[11:57:12] <yopp> By the way, stupid question about index
[11:57:41] <yopp> We've got a lot of indexes on our collections (almost the same size as the data)
[11:58:16] <yopp> And working set can't fit in the memory, but we're not using all the data on most popular app endpoint, only relatively recent
[12:00:15] <yopp> As far as I understand, in this case mongo will still be forced to go to the disk to crawl over the full index, right?
[12:01:26] <yopp> Or if the required part of the tree is in memory, it will not?
[12:02:01] <yopp> I mean, in the best case scenario, does an index lookup require the whole index, or just a part?
[12:05:27] <aps> how do I do opposite of rs.initiate? I.E. I want to remove replica set config from mongo member and make it standalone
[12:06:17] <yopp> aps, just start the member without replSet
[12:07:28] <aps> yopp: actually, I want to add this to another replica set with same name
[12:08:40] <yopp> so you need to change ips of known replica set members?
[12:08:53] <yopp> and you want to keep the current data?
[12:09:02] <aps> yes
[12:09:21] <aps> I believe I need to reconfigure
[12:09:41] <yopp> http://docs.mongodb.org/manual/tutorial/change-hostnames-in-a-replica-set/
[12:10:34] <aps> it was some problem with auth, figured it out now. Thanks :)
[12:38:39] <arussel> what is the recommended way to install mongo on ubuntu 15.04 ? using the ubuntu package or the mongo 14.04 package ?
[12:40:47] <StephenLynx> not installing, afaik.
[12:40:50] <StephenLynx> it isn't supported.
[12:41:02] <StephenLynx> are you deploying on a server?
[12:41:11] <StephenLynx> or on a dev machine?
[12:51:30] <deathanchor> always use the packages from the mongodb site, I don't trust the repos to keep up to date.
[12:51:50] <StephenLynx> yeah, but the packages from mongod don't support anything over 14.04 I guess
[13:03:24] <arussel> StephenLynx: it is just my dev machine
[13:03:50] <StephenLynx> I suggest you install a VM with centOS
[14:39:55] <KekSi> any word on the java driver splitting in 2.x and 3.x? is 2.x going to be developed in the future or does it run out soon?
[14:50:11] <fartface> I'm trying to model an ice-hockey team into a MongoDB database, but I'm a bit confused as to how to access nested properties--I've got a pastebin up at http://pastebin.com/3bBL9abf, does anyone know what the best way to handle this would be?
[14:51:56] <StephenLynx> 'super.sub'
[14:52:49] <StephenLynx> oh
[14:52:58] <StephenLynx> fartface you have already retrieved the player?
[14:53:01] <StephenLynx> player.seasons
[14:53:50] <StephenLynx> the problem with that insert is that you are not setting the seasons array for the player before inserting
[14:53:57] <StephenLynx> ah, hold on
[14:54:00] <StephenLynx> you are
[14:54:00] <StephenLynx> nvm
[14:54:23] <StephenLynx> you could have a separate collection for seasons, you know
[14:54:41] <StephenLynx> and have a field with the player's unique
[14:54:44] <StephenLynx> identifier
[14:54:49] <StephenLynx> be it its name or id.
[14:55:31] <fartface> Would that make more sense? I had originally thought about doing it that way, but then in reading around, it sounded like it was more advisable to have it embedded in the related part (the player), but if it's totally fine to have a "Seasons" collection, then I can just do that and insert the player ID
[14:55:32] <StephenLynx> it wouldn't make that much of a difference, either
[14:55:46] <StephenLynx> that depends on how you wish to read the data.
[14:56:03] <fartface> So then I could call it like Seasons.findOne({name: '2015 - 2016', player: (some ID)})
[14:56:19] <StephenLynx> yeah, but you could also just find the player and read its seasons.
[14:56:36] <StephenLynx> in your case, I don't see much of a difference, since there won't be many seasons per player
[14:56:48] <Pinkamena_D> If you don't embed you will have to usually query twice, which looks worse in the code but I have found that there is not much of a performance impact.
[14:57:01] <StephenLynx> yeah, if you have to just query ONCE more
[14:57:03] <StephenLynx> is not an issue.
[14:57:30] <StephenLynx> when you have to query N more times, then you really should embed.
[14:57:45] <StephenLynx> having it on a separate collection will impose the following limitation on you:
[14:57:54] <fartface> I wasn't sure whether it'd make more sense to do a second query or to do a loop to get the right season. A second query felt "cleaner", but wasn't sure of performance impact
[14:58:03] <StephenLynx> when you are listing players, it won't work so well if you wish to list them along with season data.
[14:58:26] <StephenLynx> because you will have to "join" the data in application code.
[14:58:40] <fartface> Ah, and that I do. I want to have a "roster" that'll output each players information with their latest season stats
[14:58:46] <StephenLynx> hm
[14:59:09] <StephenLynx> I suggest you embed this latest season stats and create a separate collection for past season data.
[14:59:12] <StephenLynx> that is what I would do.
[14:59:47] <StephenLynx> honestly, this whole system suggests a relational database better.
[15:00:01] <StephenLynx> but doesn't seem that bad to use mongo on it either.
[15:00:27] <StephenLynx> you will just have to make compromises and cope with limitations.
[15:01:02] <fartface> Actually that's a pretty good idea. I'll have to think a bit more about how to handle "seasons", in that, let's say that a player plays this season, but not the next. When I create a new season, I guess I'll have some application logic to go through each of the players and push the current season data to an 'archive_season' collection and then null out the 'current' season data
[15:01:33] <StephenLynx> I do something like that on lynxchan
[15:01:53] <StephenLynx> I have a collection with hourly stats and I replicate the hourly stat at given times to the board document.
[15:02:07] <fartface> So that when I go to pull up the current roster, it wouldn't pull up players that played 3 seasons ago and never again or something like that.
[15:02:13] <StephenLynx> yeah
[15:02:17] <fartface> Cool, I'll do that then, thanks!
[15:02:19] <StephenLynx> np
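The hybrid layout StephenLynx suggests (embed the current season, archive the rest) can be sketched as a plain JavaScript data model. Field names and the "archived_seasons" collection name are hypothetical, not from the pastebin:

```javascript
// The player document embeds only the current season; past seasons live in
// a separate archive collection keyed by playerId, so the roster query
// stays a single find() on players.
const player = {
  _id: 'p1',
  name: 'Some Player',
  currentSeason: { name: '2015 - 2016', goals: 12, assists: 7 },
};

const archivedSeasons = []; // stands in for a db.archived_seasons collection

// When a new season starts: copy the embedded stats to the archive, then
// reset the embedded slot (two writes against real mongo).
function rollOverSeason(p, archive, newSeasonName) {
  if (p.currentSeason) {
    archive.push({ playerId: p._id, ...p.currentSeason });
  }
  p.currentSeason = { name: newSeasonName, goals: 0, assists: 0 };
}

rollOverSeason(player, archivedSeasons, '2016 - 2017');
console.log(archivedSeasons.length, player.currentSeason.name);
```

A player who sits out a season simply keeps a zeroed embedded slot (or a null one), so the roster query never has to touch the archive.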
[15:14:12] <brianV> Hi all. I'm trying to project hardware requirements for a Mongo deployment. We have about 120gb of data, and we are planning on going to Amazon EC2 and use the Mongo Cloud Manager
[15:14:33] <brianV> what type of instances are normally recommended for Mongo? Better to look at multiple small instances, or one large instance?
[15:14:48] <brianV> What kind of memory requirements are there?
[15:15:46] <StephenLynx> I don't suggest you use amazon
[15:15:53] <StephenLynx> it is much more expensive
[15:17:36] <brianV> StephenLynx: what would you recommend? The rest of our infrastructure is on EC2 already, so we benefit from good internal latency
[15:17:43] <StephenLynx> welp
[15:17:47] <StephenLynx> https://www.ovh.com/us/dedicated-servers/storage/
[15:18:20] <StephenLynx> or colocation
[15:19:10] <StephenLynx> the thing is, these services are usually not worth it.
[15:19:13] <macwinner> OVH has been good for me so far.. and ultracheap
[15:19:38] <brianV> hmm, I don't think we want the extra latency between our app servers and Mongo
[15:19:42] <macwinner> I also use Singlehop as a secondary provider.. I like OVH more though
[15:21:03] <StephenLynx> I think I will migrate from linode to OVH
[15:21:16] <StephenLynx> about a 1/3 of the price for the lowest server.
[15:22:21] <StephenLynx> funny, the price on the listing is 3.49 but when you order is 4.19 usd
[15:23:51] <cq_dev> Hello, is it known whether or not Mongo is going to add "$hint" to the aggregation framework?
[15:24:32] <cq_dev> I have an issue where I'm forced to use "$geoNear", which forces me to use one geospatial index for the collection.
[15:26:28] <leev> will a dump taken with mongodump on 2.4 restore fine on a server running 3.0?
[15:28:08] <yopp> leev, yep
[15:28:42] <StephenLynx> god damn it, I can't use fake data on OVH
[15:28:45] <StephenLynx> assholes
[15:29:01] <leev> thanks yopp
[15:29:50] <yopp> afaik only 2.2 and earlier are incompatible with 2.4+
[15:30:20] <StephenLynx> oh, I think I can
[15:30:29] <StephenLynx> no I can't :v :v
[15:32:40] <yopp> brianV, for 120G of data you need a cluster with 120G of memory ;)
[15:32:45] <yopp> + indexes
[15:33:18] <brianV> yopp: lol, I've priced that out
[15:33:48] <yopp> but generally you need to be sure that your indexes and most used data can fit in memory
[15:33:51] <brianV> yopp: reasonably, I think we can look at ~45gb ram and 196gb of SSD space
[15:34:09] <brianV> that should (hopefully) fit that criteria
[15:34:47] <yopp> My advice: try to push the data with your real indexes in the mongodb and then try to query it, to decide on indexes
[15:35:04] <yopp> then you will see how much it takes
[15:35:16] <yopp> if you are risky, you can try your luck with WT
[15:36:00] <yopp> in this case you need more CPU, but less storage and memory
[15:36:31] <yopp> Also, with that data set, you'd better shard from day 1
[15:36:45] <brianV> yopp: that's an approach
[15:36:52] <brianV> yopp: oh yeah, we'll be sharding
[15:37:12] <brianV> we have like 500mb of 'static' data, then everything else is records keyed by users, very shardable by userID
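yopp's sizing advice (indexes plus the hot portion of the data should fit in RAM) reduces to back-of-envelope arithmetic. The index size and hot-data fraction below are hypothetical stand-ins; on a real deployment you would measure them with db.stats() and your own access patterns:

```javascript
// Rough working-set estimate: total index size plus the fraction of the
// data the app actually touches. All inputs are made-up example numbers.
const totalDataGB = 120;   // from the conversation above
const indexGB = 20;        // hypothetical; measure via db.stats().indexSize
const hotFraction = 0.2;   // hypothetical share of "relatively recent" data

const workingSetGB = indexGB + totalDataGB * hotFraction;
console.log(workingSetGB); // 44
```

Under these assumptions a ~45gb-RAM plan like brianV's would just cover the working set, which is why yopp recommends loading real data and real indexes before committing to hardware.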
[17:03:48] <leev> how can I force a balancer release?
[17:05:18] <leev> I set the wrong shard key on a collection, so tried to stop the balancer and drop the collection, but it has a lock from one of the mongos instances.
[17:05:54] <saml> i have mongo01,02,03, 01 is primary. how can I change primary to 03?
[17:09:19] <saml> http://docs.mongodb.org/master/tutorial/force-member-to-be-primary/
[17:09:38] <saml> someone should've written a GUI or script to select a node as primary
[17:29:36] <saml> so mongodump is bad
[17:29:49] <saml> don't run mongodump, it brings site down
[17:30:12] <saml> run it on stand by secondary replica that nothing uses
[17:34:01] <cheeser> mongodump is not bad.
[17:34:08] <cheeser> what problems are you seeing?
[17:35:14] <StephenLynx> I think it locked the db in use
[17:41:59] <cheeser> that'd be my guess. but given how documented that is, calling it "bad" seems ... bad.
[17:42:02] <cheeser> :D
[17:42:15] <StephenLynx> yeah
[17:42:18] <StephenLynx> I think he misused it.
[19:15:31] <diegoaguilar> hello, can someone help me with this pagination issue? http://stackoverflow.com/questions/32339144
[19:19:18] <StephenLynx> diegoaguilar ids are not sequential
[19:19:34] <StephenLynx> _id: {$gt: ObjectId("55dde593827ff4e65b6847d3")} makes no sense
[19:20:37] <diegoaguilar> Ok, I thought so after this: http://blog.mongodirector.com/fast-paging-with-mongodb/
[19:20:48] <diegoaguilar> how should I get what I want StephenLynx
[19:22:27] <deathanchor> note to self: don't create collections that start with a number.
[19:22:45] <diegoaguilar> with a number?
[19:23:01] <deathanchor> yeah
[19:25:21] <diegoaguilar> deathanchor I thought of db.lineuppointsrecord.find({round: 0, ranking: {$gt: 6}}).sort({ranking: 1}).limit(3)
[19:25:55] <diegoaguilar> can I get performance optimization with a compound {round: 1, ranking: 1} index?
[19:36:26] <deathanchor> sure why not.
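The range-based ("keyset") pagination diegoaguilar lands on can be sketched in memory. The record shape mirrors his find() above; a compound { round: 1, ranking: 1 } index is what would let mongo serve the real query without an in-memory sort:

```javascript
// Keyset pagination sketch: each page filters on the last seen ranking
// instead of using skip(), so page cost stays flat as you go deeper.
const records = [
  { round: 0, ranking: 4 }, { round: 0, ranking: 5 },
  { round: 0, ranking: 6 }, { round: 0, ranking: 7 },
  { round: 0, ranking: 8 }, { round: 0, ranking: 9 },
  { round: 1, ranking: 1 },
];

function nextPage(docs, round, lastRanking, limit) {
  return docs
    .filter(d => d.round === round && d.ranking > lastRanking)
    .sort((a, b) => a.ranking - b.ranking)
    .slice(0, limit);
}

// After a page ending at ranking 6, the next page of 3:
console.log(nextPage(records, 0, 6, 3).map(d => d.ranking)); // [ 7, 8, 9 ]
```

This only works when the paging key (here, ranking within a round) has a stable order, which is exactly why paging on _id alone was a bad fit for his sort.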
[20:15:25] <deathanchor> can I use a URI to connect to mongo via javascript using the mongo binary?
[20:17:00] <deathanchor> I'm getting an assert error with this: var dbo = connect("mongodb://localhost:27017/users?readPreference=secondary");
[20:39:00] <BaNzounet> Hey guys, I've a collection of 1M records, I want to rename a field let's say fooId : { $oid: "aabbcc" } to barId: { $oid : "aabbcc" } what do you advice? Can I use $rename?
[20:41:11] <cheeser> nope
[20:41:21] <cheeser> you'll have to load, modify, save each document.
[20:42:46] <deathanchor> why won't rename work?
[20:51:17] <retran> db.example.update({},{$rename:{'oldname':'newname'}})
[20:51:31] <retran> tias
[20:53:03] <cheeser> is that not an agg pipeline expression?
[20:53:43] <cheeser> nope. fair enough.
[20:57:05] <retran> of course, you'd want {multi:true} option
[20:57:23] <cheeser> haha. yeah.
[20:58:14] <retran> https://gist.github.com/acksponies/4234700372873cc1040e
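If $rename turned out not to fit, the load/modify/save route cheeser first suggested reduces to a copy-and-delete per document. A minimal sketch, using the fooId/barId field names from the question:

```javascript
// Per-document step of a client-side field rename: copy the value to the
// new key, drop the old key, then save the document back (the save is
// omitted here since there is no live collection).
function renameField(doc, from, to) {
  if (from in doc) {
    doc[to] = doc[from];
    delete doc[from];
  }
  return doc;
}

const doc = { _id: 1, fooId: 'aabbcc' };
renameField(doc, 'fooId', 'barId');
console.log(doc); // { _id: 1, barId: 'aabbcc' }
```

For 1M documents the server-side $rename update with {multi: true}, as retran shows above, is the cheaper option since nothing crosses the wire.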
[22:07:35] <MacWinne_> joannac, still waiting on WT fixes before rolling into production?
[23:27:09] <joannac> MacWinne_: ?
[23:27:40] <MacWinne_> joannac, just periodically checking with you if wired tiger is mature enough for your production use..