#mongodb logs for Wednesday the 20th of August, 2014

[00:29:09] <Matadoer> anyone here familiar with rails/mongoid
[00:29:23] <cirwin> yes
[00:29:38] <Matadoer> i tried running my rails server after installing mongoid
[00:29:46] <Matadoer> and it keeps throwing parse errors for the mongoid.yml
[00:29:52] <Matadoer> on the hosts: part
[00:30:06] <cirwin> can you gist it?
[00:30:19] <Matadoer> yep
[00:31:08] <Matadoer> hmm
[00:31:13] <Matadoer> it didn't output to a log file
[00:31:20] <cirwin> gist the file
[00:31:20] <Matadoer> can I just write what it says here?
[00:31:37] <Matadoer> oh gist the config?
[00:32:11] <cirwin> yeah
[00:36:35] <Matadoer> ffs
[00:36:47] <Matadoer> I don't know why these spaces / red lines are there
[00:36:48] <Matadoer> https://gist.github.com/anonymous/c908f71af95b1926fa1d
[00:36:57] <Matadoer> they aren't there in my code editor
[00:38:06] <Matadoer> I replaced some info with x's
[00:38:27] <stefandxm> the red character is simply a tab from what i can see
[00:38:37] <cirwin> Matadoer: that's your problem
[00:38:42] <cirwin> get rid of the tabs
[00:38:50] <cirwin> (and ideally set your editor to not generate them for yml)
[00:39:03] <cirwin> the - line needs to be indented more than the hosts: line
[00:39:40] <Matadoer> how many spaces more
[00:39:43] <Matadoer> ?
[00:39:49] <cirwin> two is conventional
[00:40:19] <stefandxm> this is terrible
[00:40:28] <stefandxm> how could we let this happen
[00:40:38] <stefandxm> grr
[00:40:42] <Matadoer> it's still yelling at me
[00:40:45] <Matadoer> :/
[00:46:21] <Matadoer> the exact error is "found character that cannot start any token while scanning for the next token at line 8 column 1"
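The parse error above ("found character that cannot start any token") is the classic symptom of a tab in YAML indentation. A minimal sketch of the relevant part of a mongoid.yml, using spaces only and two-space indentation; the exact layout varies by Mongoid version and the hostname is a placeholder:

    production:
      sessions:
        default:
          database: my_app_production
          hosts:
            - db1.example.com:27017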
[01:54:55] <SubCreative> Anyone here have suggestions for listening to mongodb changes in a collection via node.js?
[01:55:18] <SubCreative> I want to fire an event on a change to a record
[03:31:34] <edrocks> what roles should you give your app user for a production server?
[03:52:07] <d0tn3t> i need help,
[03:52:39] <Boomtime> you've taken the first step
[03:53:05] <d0tn3t> when i create a user, mongodb says: "error couldn't add user"
[03:53:15] <d0tn3t> and database admin is empty
[03:53:18] <d0tn3t> why???
[03:53:33] <Boomtime> how do you create the user
[03:53:47] <Boomtime> also, what version of mongodb
[03:54:16] <d0tn3t> db.createUser({user:"admin",pwd:"123",roles:[{role:"root",db:"admin"}]})
[03:54:29] <d0tn3t> my version: 2.6.4
[03:56:31] <Boomtime> can you paste the exact error message
[03:58:38] <d0tn3t> Error couldn't add user: not master at src/mongo/shell/db.js:1004
[03:59:15] <Boomtime> what did you connect to?
[03:59:30] <Boomtime> is this a replica-set?
[03:59:48] <d0tn3t> yes
[03:59:58] <Boomtime> are you connected to the primary?
[04:00:04] <d0tn3t> no
[04:00:18] <d0tn3t> my primary cant connect to this host
[04:00:27] <d0tn3t> this host error??
[04:00:52] <d0tn3t> why my database admin is empty?
[04:01:44] <Boomtime> only the primary can have anything written to it, secondaries are read-only, they replicate from the primary
[04:03:05] <d0tn3t> when i add the host to the Primary
[04:03:14] <d0tn3t> i received this "stateStr" : "UNKNOWN",
[04:03:35] <d0tn3t> i think my primary is not connecting to the secondary
[04:03:45] <joannac> okay. so check?
[04:03:56] <joannac> on the primary machine: mongo secondaryhost:port
[04:05:14] <d0tn3t> this connects to test
[04:05:59] <joannac> it connects successfully?
[04:06:10] <d0tn3t> yes
[04:06:19] <joannac> pastebin your rs.status()
[04:06:32] <joannac> from the primary
[04:06:40] <d0tn3t> ok
[04:07:45] <d0tn3t> http://pastebin.com/ezZ6q2BJ
[04:08:30] <SubCreative> If anyone was curious, I found a somewhat viable solution for my need to listen to mongodb events.
[04:08:31] <SubCreative> http://blog.mongodb.org/post/29495793738/pub-sub-with-mongodb
[04:09:18] <SubCreative> It will listen for new changes to a collection and allow you to do whatever with the change.
[04:09:24] <SubCreative> In Node.js btw
[04:09:49] <SubCreative> Nice project example to go with it here: https://gist.github.com/scttnlsn/3210919
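The post above relies on a capped collection plus a tailable cursor. A rough sketch with the Node.js native driver of that era (connection string and collection name are placeholders; option and callback signatures differ between driver versions):

    // the collection must be capped, e.g.:
    //   db.createCollection('events', { capped: true, size: 1048576 })
    var MongoClient = require('mongodb').MongoClient;

    MongoClient.connect('mongodb://localhost:27017/pubsub', function (err, db) {
      if (err) throw err;
      // tailable + awaitdata keeps the cursor open and blocks waiting for new documents
      var cursor = db.collection('events').find({}, { tailable: true, awaitdata: true });
      cursor.stream().on('data', function (doc) {
        // fire whatever event handling is needed for each newly inserted document
        console.log('change:', doc);
      });
    });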
[04:10:25] <d0tn3t> anyone help me
[04:10:27] <d0tn3t> ????
[04:10:38] <d0tn3t> my boss is very hard on me
[04:11:39] <SubCreative> http://docs.mongodb.org/manual/tutorial/add-admin-user/
[04:11:44] <Boomtime> your primary's name is:
[04:11:45] <Boomtime> "name" : "localhost.localdomain:27017",
[04:11:56] <Boomtime> how will the secondary possibly connect to that?
[04:13:24] <Boomtime> the primary can see the secondary because it can use "192.168.100.7:27017" to get to it, but when the secondary tries to route back to the primary, it tries to find "localhost.localdomain:27017" which will lead it back to itself...
[04:13:28] <Boomtime> hilarity ensues
[04:14:17] <Boomtime> update the config to use FQDNs only or (if you must) IP addresses only
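A sketch of the fix Boomtime describes, run from the shell on the primary (hostnames are placeholders); once the members address each other by resolvable names, createUser will work when connected to the primary:

    cfg = rs.conf()
    cfg.members[0].host = "db1.example.com:27017"   // was localhost.localdomain:27017
    cfg.members[1].host = "192.168.100.7:27017"
    rs.reconfig(cfg)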
[04:17:24] <d0tn3t> tks bro
[04:17:25] <d0tn3t> :D
[04:29:30] <d0tn3t> why can't mongodb start after a server reboot
[04:48:51] <jchia> The following two pymongo queries give the same result but the second one is much much faster. How come?
[04:48:51] <jchia> found = set(map(lambda doc: doc['_id'], collection.find({'$or': [{'_id': k} for k in self._keys]}, {'value': 0})))
[04:48:51] <jchia> found = set(map(lambda doc: doc['_id'], collection.find({'_id': {'$in': self._keys}}, {'value': 0})))
[04:55:00] <Boomtime> the first one runs multiple parallel queries (one for each $or clause), each matching any number of items (though here each will always match zero or one), which are then collated into a single result set at the end
[04:55:42] <Boomtime> the second one is a single combined query on a single index with a single plan matching any single value from a list, the results are piped directly
[05:03:21] <jchia> Boomtime: So for the first one, the client-side will actually issue multiple queries?
[05:04:56] <Boomtime> no, the server will break out each $or and run them in parallel
[05:05:07] <Boomtime> but the effect is nearly the same
[05:05:22] <Boomtime> $or should only be used for disparate clauses
[05:05:51] <Boomtime> the semantics will work for your use case but $in will be better
[05:09:07] <jchia> Boomtime: Thanks.
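For comparison, the same two queries in the shell (collection name is a placeholder); running .explain() on each shows the difference in plans Boomtime describes:

    // $or: planned as separate clauses whose results are merged
    db.items.find({ $or: [ { _id: 1 }, { _id: 2 }, { _id: 3 } ] }, { value: 0 }).explain()
    // $in: a single query over one index matching any value in the list
    db.items.find({ _id: { $in: [ 1, 2, 3 ] } }, { value: 0 }).explain()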
[07:45:34] <d0tn3t> my Primary host became Secondary
[07:45:37] <d0tn3t> why???
[07:45:57] <d0tn3t> how do i make the secondary become primary
[08:32:13] <Ockonal> Hi guys, how can I sort the result of group aggregation with $push operator?
[08:32:21] <Ockonal> db.test.aggregate([ {$group : {_id : "$gid", posts: { $push : {likes: "$likes" } }}}, {$sort : {"posts.likes": 1} } ])
[08:32:27] <Ockonal> This does not sort
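A $sort placed after the $group cannot reorder elements inside the pushed array. The usual approach is to sort the documents before grouping, so $push receives them in the desired order:

    db.test.aggregate([
      { $sort: { likes: 1 } },                                            // order the documents first...
      { $group: { _id: "$gid", posts: { $push: { likes: "$likes" } } } }  // ...so $push preserves that order
    ])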
[08:36:00] <Katafalkas> Hey, How do you configure PyMongo to connect to mongos, but to read from secondaries ?
[08:39:33] <jerome-> Is there anything wrong with this url: mongodb://192.168.174.202:27018,192.168.174.202:27019,192.168.174.202:27020/?replicaSet=mammouth ?
[08:39:57] <jerome-> with mongo cli, I get this error: Assertion failure _setName.size()
[08:44:12] <Derick> jerome-: that URL looks fine
[08:44:26] <jerome-> mongo cli doesn't like it
[08:44:40] <Derick> no, I think it only accepts one host
[08:44:56] <Derick> there is a jira ticket to improve that
[08:51:02] <jerome-> I tried with one host, but I have the same result
[08:52:28] <Derick> jerome-: just the hostname and port, you can't use "mongodb://" with it
[08:52:46] <jerome-> yes it works like that
[08:56:11] <tyteen4a03> on the _id field: Is it possible to generate this yourself on the client side instead of having mongo generate it? it seems from some examples this is possible but is it legal to do so?
[09:00:44] <kali> tyteen4a03: yes.
[09:01:11] <kali> tyteen4a03: as a matter of fact, when you're not providing it, it's the driver that crafts it for you, not the server
[09:01:25] <tyteen4a03> kali, ah
[09:02:06] <kali> tyteen4a03: put anything "sensible", and mongodb will be fine: it can be an objectid, a string, an integer, even a sub-document
[09:02:16] <kali> tyteen4a03: just keep it reasonably small
[09:02:28] <tyteen4a03> kali, I was going to supply an uuid
[09:02:36] <kali> that's perfectly all right
[09:03:05] <kali> you may want to store it as a binary and not the hex string
[09:03:44] <kali> it will be more compact, and should avoid identity issues if someone decides to change the case at some point
[09:04:12] <Derick> kali: fun fact, if the driver doesn't supply an ObjectID, the server will make one
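A small illustration of supplying your own _id in the shell (collection and values are invented); the UUID() helper stores the value as BinData rather than a hex string, though the binary subtype it produces varies by shell version:

    // any reasonably small, unique value is acceptable as _id
    db.things.insert({ _id: "user-settings:42", theme: "dark" })

    // a UUID stored as binary instead of a 36-character hex string
    db.things.insert({ _id: UUID("0123456789abcdef0123456789abcdef"), theme: "light" })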
[09:34:21] <jordz> Does anyone know where I can find some in-depth documentation of what mongos is doing? I'm looking for a little more info on what it's allocating in memory?
[09:35:10] <jordz> For instance, running 2 mongos instances, will they pretty much store the same information in memory or does it only pull out what it needs based on the queries it receives?
[09:38:01] <ome> Anyone here worked with Go mgo? I can't find a way to use a mgo.DBRef in Query.
[09:38:54] <jordz> ome: Find ref?
[09:39:40] <ome> jordz: Not really, I want to do: `DB.C("books").Find(bson.M{"Author": mgo.DBRef{Collection: "authors", Id: id}}).All(Books)`
[09:39:54] <ome> Get all Books where they have that specific Author.
[09:40:07] <kali> Derick: better safe than sorry, i guess :)
[09:43:17] <jordz> ome: Does author look like author: { $ref: something, $id: 'foo'}?
[09:44:28] <jordz> On books?
[09:45:31] <ome> jordz: Yup, sure does.
[09:45:55] <ome> `{ "$ref" : "authors", "$id" : ObjectId("53f299e6f32963c78e8d74ed") }`
[09:52:15] <ome> Figured it out.
[09:52:17] <ome> :)
[09:52:21] <jordz> ome: FindRef?
[09:53:00] <ome> jordz: Nope, just Find, for some reason the key for bson.M must be all lowercase, regardless of the case in the Document.
[09:53:06] <ome> author works, Author does not.
[09:53:13] <jordz> ome: Interesting
[09:53:14] <ome> Even though it is Author in the Document.
[09:53:25] <jordz> Good to know though
[09:53:51] <ome> Yup. Learned the hard way. haha.
[09:58:51] <jordz> ome: It's the best way :P just annoying when you're on a time limit
[09:59:09] <ome> jordz: I have hardly slept in the past 36 hours. :/
[10:01:32] <jordz> ome: :(
[11:35:59] <vrkid> Hi, we've installed MongoDB Enterprise and configured SSL, how can I test that SSL actually works?
[12:48:00] <ckozak_> Anyone have experience with the mongo-java-driver?
[12:50:38] <ckozak> Any way I can use a DBDecoder (or more specifically a BSONCallback) with DBCollection.aggregate?
[12:52:47] <cheeser> setDecoderFactory on the DBCursor
[12:56:15] <ckozak> cheeser: Ah, the cursor isn't created with the decoderFactory from the DBCollectionImpl?
[12:59:15] <cheeser> it is. but you can override it before you start iterating that cursor
[13:00:51] <ckozak> cheeser: I don't think that's happening in practice, if I do collection.find(), my data has been modified by the callback, however when I use .aggregate, my factory/callback isn't used
[13:03:05] <cheeser> you called collection.setDBDecoderFactory()?
[13:03:56] <ckozak> cheeser: Yep, that's how it gets called with collection.find() :-/
[13:04:28] <cheeser> DBCollection passes that in to both DBCursor and QueryResultsIterator
[13:04:59] <ckozak> Yeah, I see where that should be happening, not really sure why it doesn't
[13:15:21] <tyteen4a03> are there tutorials on how to write motor code that will work with tornado tests? I made calls like I would normally do inside a tornado app but nothing gets written into the DB
[14:10:34] <Diplomat> Hey guys.. any ideas what's wrong when mongodb won't update the log file when it crashes and there is pretty much no way to diagnose what's going on..
[14:11:01] <Diplomat> i have tried restarting mongodb itself.. my vm.. but it still doesn't work
[14:11:14] <Diplomat> for some reason it worked yesterday but today when i booted my vm it fails again
[14:12:35] <Diplomat> oh.. mongo console itself works .D
[14:12:35] <Diplomat> :D
[14:12:59] <Diplomat> http://pastebin.com/E3Zp9JmU
[14:18:48] <Diplomat> alright.. i have no idea wtf i did
[14:18:51] <Diplomat> but it works now
[14:19:48] <Diplomat> http://pastebin.com/hN3ZdUSr
[14:31:00] <near77> hi
[14:31:13] <near77> anyone knows of a way to centralize management of mongo clusters?
[14:32:11] <cheeser> mms.mongodb.com
[14:34:08] <near77> not backups
[14:34:12] <near77> but user management etc
[14:34:22] <cheeser> mms is more than backups
[14:34:32] <cheeser> https://mms.mongodb.com/learn-more/automation
[14:35:04] <near77> and this would work to handle like 100 clusters?
[14:35:11] <cheeser> though user management doesn't seem to be in the mix
[14:35:13] <cheeser> sure
[14:35:29] <cheeser> well. a qualified "yes"
[14:35:30] <cheeser> :D
[14:35:44] <near77> mmm the thing is, I need it mostly for User management
[14:35:51] <near77> mmm and do you know of a tool
[14:36:02] <near77> that allows you to do this but for mysql, oracle and mongo at the same time?
[14:36:18] <cheeser> one tool for both?
[14:45:30] <near77> for the three of them lol
[14:45:37] <near77> so its easier to manage
[14:45:52] <cheeser> one tool is not going to cover them all
[15:00:00] <topwobble> I am seeing this mongoose error on our prod server: https://gist.github.com/objectiveSee/f33a97365ea5604fa954 We just ran a db update that involved changing a lot of document keys and dropping all indices on one collection. Any ideas?
[15:00:10] <topwobble> ^^ this is all with mongoose
[15:19:26] <paulbeuk> hi, I could use some help with updating an object in an array
[15:19:34] <paulbeuk> db.projects.update( { "applications.url": { $regex:"^//.*thevoice-android.apk" } }, { $set: { "applications.$url" : "changeonlythisfiend" } } );
[15:20:01] <paulbeuk> db.projects.update( { "applications.url": { $regex:"^//.*thevoice-android.apk" } }, { $set: { "applications.$.url" : "changeonlythisfiend" } } );
[15:20:32] <paulbeuk> it replies Cannot apply the positional operator without a corresponding query field containing an array.
[15:21:03] <rspijker> is applications an array?
[15:21:11] <paulbeuk> the doc structure is
[15:21:13] <paulbeuk> { "_id" : ObjectId("53f30b7fe4b0ac63cbde3ab4"), "applications" : [ { "url" : "//cdn.playtotv.com/provisioning/U_b39a655cd054b2ff/25234569026996483/nbc-thevoice-android.apk" }, { "url" : "//cdn.playtotv.com/provisioning/U_f140df05d0a9f1df/25235055053165975/download.plist" }, { "url" : "//cdn.playtotv.com/provisioning/U_ed28fbdc51b0bc89/25235158099320920/download.plist" }, { "url" : "//cdn.playtotv.com/p
[15:21:44] <paulbeuk> yes: Applications is an array, containing objects
[15:21:55] <paulbeuk> yes: Applications is an array, containing key/value objects
[15:22:33] <paulbeuk> I only want to update the matching array element of course
[15:24:18] <rspijker> well… that should work without problems...
[15:26:02] <paulbeuk> well, applications.url is not an array, it is a field in the array applications
[15:26:33] <rspijker> that’s fine
[15:26:39] <rspijker> only applications needs to be an array
[15:27:30] <rspijker> it’s due to the regex paulbeuk...
[15:27:48] <paulbeuk> ok, i'll test without it
[15:28:02] <rspijker> https://jira.mongodb.org/browse/SERVER-1155
[15:28:14] <rspijker> I’m assuming you are on 2.4.x and not on 2.6 yet?
[15:29:51] <paulbeuk> MongoDB shell version: 2.4.8
[15:29:54] <paulbeuk> true
[15:30:07] <rspijker> according to that issue it’s fixed in 2.6
[15:30:17] <paulbeuk> and, you are right about the regex
[15:30:26] <paulbeuk> it works without
[15:30:29] <rspijker> so, either use the workaround there, don’t use the regex at all, or upgrade to 2.6
[15:31:04] <paulbeuk> okay, thanks spijker!
[15:31:08] <paulbeuk> okay, thanks rspijker!
[15:31:17] <rspijker> np
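For reference, one workaround on 2.4 is reportedly to wrap the condition in $elemMatch so the positional operator can identify the matched element despite the $regex; upgrading to 2.6 (as happens below) avoids the issue entirely. A sketch based on paulbeuk's update:

    db.projects.update(
      { applications: { $elemMatch: { url: { $regex: "^//.*thevoice-android.apk" } } } },
      { $set: { "applications.$.url": "changeonlythisfiend" } }
    )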
[15:42:54] <Lope> I haven't used mongoDB in about a year, so I'm not up to date with the latest news.
[15:43:26] <Lope> If I do updates to a mongoDB collection is it going to lock the whole collection, or does mongoDB only lock documents?
[15:43:51] <cheeser> document level locks are coming in 2.8
[15:44:03] <cheeser> the write lock is still at the db level in 2.6
[15:45:22] <rspijker> there have been some improvements through yielding though
[15:45:31] <topwobble> reading the docs for `repairDatabase` doesn’t feel good
[15:46:11] <rspijker> topwobble: the caveats there apply mainly to using it as a means of repair. As in, it shouldn't be used to repair production data…
[15:46:21] <rspijker> if you use it, as most people do, for reclaiming disk space, it's fine
[15:46:31] <rspijker> in fact, it’s pretty much your only real option
[15:46:47] <topwobble> rspijker: for us it’s repair. we have a backup though so we probably wont go this route
[15:46:50] <topwobble> running validate now
[15:47:26] <rspijker> ah, yeah… then the caveats totally apply
[15:47:35] <rspijker> I actually see they made it more clear now in the docs
[15:47:48] <rspijker> in the past it just used to say: “don’t use this in production”
[15:56:31] <paulbeuk> rspijker, upgrading to 2.6 solved the regex problem!
[15:56:50] <rspijker> glad to hear it :)
[16:23:30] <mchapman> Am trying to use gridfs with a root collection name other than fs using the node driver and am getting "need an index on { files_id : 1 , n : 1 }". I thought it should create the index itself?
[16:30:16] <mchapman> ignore that - reinstalling the driver has made it go away
[17:40:51] <SkramX> hey all. I've been using mongodb for a while but have always just done M/R. I want to try the aggregation framework. How would I do an aggregation over some of a collection? (ie where a key is equal to a given value)
[17:42:49] <SkramX> is that the use of match?
[17:43:10] <jordana> Yes
[17:43:42] <SkramX> i thought use was used on the resulting stream of documents, not the incoming stream of documents
[17:43:51] <SkramX> *i thought match was used...
[17:44:01] <SkramX> jordana: can you confirm which it is?
[17:48:50] <jordana> SkramX: In the pipeline you would usually use $match first
[17:48:58] <jordana> to filter out results
[17:49:18] <SkramX> ah.. duh, okay, the array is ordered so match would go first and then project and then group
[17:50:48] <jordana> SkramX, yeah. I have something that matches, groups by a certain value with the sum of a field and then that is output either inline or to a collection
[17:51:51] <jordana> certain field*
[17:59:18] <SkramX> anyone using mongomapper?
[18:16:27] <SkramX> jordz_away: available for one more quick question?
[18:21:14] <SkramX> once i add a project the other fields stop being returned
[18:25:38] <SkramX> nvm
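Putting the thread together, a minimal sketch (collection and field names are invented): $match filters first, then $group does the sum; note that a $project stage only passes through the fields you explicitly include, which is why other fields stop appearing once one is added:

    db.orders.aggregate([
      { $match: { status: "shipped" } },                             // filter the incoming documents
      { $project: { customerId: 1, amount: 1 } },                    // only listed fields survive this stage
      { $group: { _id: "$customerId", total: { $sum: "$amount" } } }
    ])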
[18:35:45] <ejb> Hi, I have a situation where a Location has many Shows (as an embedded array of objects). There is also a Shows collection that contains the same data. I'm advocating for removing the Shows collection altogether because it's essentially only there for archiving. Is there any downside to archiving Show data on the Location as an embedded array of objects? e.g. if the array grows to thousands of objects?
[19:17:59] <cozby> hi, in mongo when you're adding a dbAdminAnyDatabase role to a user, what db do you specify
[19:18:07] <cozby> because that role should be able to read all db's
[19:18:14] <cozby> do I just specify admin as the db?
[19:21:56] <Ravenheart> it's a mystery cozby
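For what it's worth, the *AnyDatabase roles are defined on the admin database, so admin is the db to name when granting them. Roughly (user name and password are placeholders):

    db.getSiblingDB("admin").createUser({
      user: "appAdmin",
      pwd: "changeme",
      roles: [ { role: "dbAdminAnyDatabase", db: "admin" } ]
    })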
[19:41:59] <emacsen> anyone here work on pymongo?
[19:56:28] <revoohc> Does anyone know if a mongodump is version safe? We need to archive ~1TB of data from 2.2.4. If I use mongodump, will 2.6 or 2.8+ mongorestore be able to read it?
[19:56:37] <benjwadams> greetings, I've inherited a project using mongodb and am a relatively new user. I need to add a field inside an array of subdocuments. How might i go about doing this?
[19:57:00] <benjwadams> i.e. for each subdocument within each array, add a field.
[19:58:19] <cheeser> benjwadams: look at $push and $addToSet
[19:59:02] <topwobble> I just ran ‘db.validate()’ on a corrupt collection. The only issue is with corrupt indices. Do you know if running repair will delete the indices, or the indices + games associated with them?
[19:59:17] <topwobble> Here’s the output of db.collection.validate(true): https://gist.github.com/objectiveSee/7b5849f310fee4ba74aa
[19:59:24] <benjwadams> i don't think $push is what i'm looking for. I don't want to add a new element to the array
[19:59:35] <benjwadams> let me pastebin the schema really quick
[20:02:55] <benjwadams> http://pastebin.com/BCdtNJe3
[20:04:07] <benjwadams> cheeser: $addToSet is not what I'm after either
[20:04:26] <cheeser> you want to add a field to the document in that array?
[20:05:50] <benjwadams> yes.
[20:06:11] <benjwadams> all the documents in the array, actually
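Without the pasted schema to hand, a rough sketch of the usual approach at the time (collection, array, and field names are hypothetical): iterate the documents, add the field to each element of the array in the client, and write the whole array back:

    db.mycollection.find({ subdocs: { $exists: true } }).forEach(function (doc) {
      doc.subdocs.forEach(function (sub) {
        sub.newField = null;               // hypothetical field name and default value
      });
      db.mycollection.update({ _id: doc._id }, { $set: { subdocs: doc.subdocs } });
    });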
[20:42:05] <gswallow> I have a (probably common) question about readahead on EC2.
[20:42:08] <gswallow> http://pastebin.com/LEALMf5s
[20:42:32] <gswallow> /var/lib/mongodb is a bind mount
[20:42:52] <gswallow> why is it telling me I have an excessive readahead?
[20:44:27] <jpavlick> imagine a collection, `users`, with documents of the form `{ _id: ObjectId, blocks: [ObjectId] }` What is the most efficient way to query `Give me all users there user {_id: 12} is *not* blocking`? I can only think of a two-part query: find(_id: 12).blocks, find(_id: $nin blocks)
[20:44:41] <jpavlick> *that user {_id: 12}
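The two-part approach jpavlick describes, sketched in the shell:

    var blocked = db.users.findOne({ _id: 12 }, { blocks: 1 }).blocks;  // step 1: fetch the block list
    db.users.find({ _id: { $nin: blocked } })                           // step 2: everyone not in it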
[21:02:03] <sandstrom> I've got an issue with replication. A while ago optime variance increased (see pic: http://s1.postimg.org/wnjsxsrxr/Screen_Shot_2014_08_20_at_22_57_13.png). Network activity is down on the node. CPU and disk usage is unchanged.
[21:02:52] <sandstrom> What should I dig into to determine the cause? My hunch is that replication has stalled/hung, or is severely restrained somehow (due to replication of large amounts of data perhaps??).
[22:24:06] <jasondockers> Can I export documents and then import them to another database?
[22:24:55] <cheeser> yes
[22:50:32] <joannac> you should dump and restore instead jasondockers
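The dump-and-restore route joannac suggests, roughly (hostnames, database name, and paths are placeholders):

    mongodump --host old-server:27017 --db mydb --out /backups/dump
    mongorestore --host new-server:27017 --db mydb /backups/dump/mydb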
[23:17:57] <shinka> I'm trying to do a search inside an object (NodeJS/Mongoose) and for some reason (probably something simple) it doesn't work. My Schema has "var WordSchema = new Schema({ meanings : [{ gloss: [String] }} });" and my search has "WordSchema.static('findByGloss', function (x, cb) { return this.find({ meanings: [ { gloss: { '$in': [x] } } ] }, cb); })";
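One likely fix, untested against that schema: query the nested field with dot notation instead of embedding the array shape in the query document, e.g.:

    WordSchema.static('findByGloss', function (x, cb) {
      // 'meanings.gloss' matches any meanings element whose gloss array contains x
      return this.find({ 'meanings.gloss': x }, cb);
    });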
[23:18:07] <melvinrram> I'm trying to figure out how to use $group inside aggregate to get the latest document that has the unique combo of fields I'm looking for. For example, I have the collection on https://gist.github.com/melvinram/a32d44a0554598d07c87 and I also have a unique_combos that I want to look up by. I have the $match query returning just the right documents, and $sort set up so it returns them in reverse chronological order. I took a stab at it and the code is on the gist linked to. It's currently not returning correct values (currently returns no values).
[23:18:11] <melvinrram> Any ideas?
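Without seeing the gist, a common pattern for "latest document per unique combination" (field names are invented; $$ROOT needs 2.6+): sort newest-first, then take $first inside the $group:

    db.items.aggregate([
      { $match: { /* restrict to the combinations of interest, as in the gist */ } },
      { $sort: { createdAt: -1 } },                             // newest first
      { $group: {
          _id: { fieldA: "$fieldA", fieldB: "$fieldB" },        // the unique combination
          latest: { $first: "$$ROOT" }                          // first doc per group = newest
      } }
    ])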
[23:33:58] <jasondockers> How do I add an objectid to a field with the c++ driver? .append("value", mongo::OID("53c01ca23a0dcb37ceaebef7")) doesn't work?
[23:43:11] <Ryan_Lane1> I have an issue with the PHP driver (1.5.5) where if a node is added to a heavily loaded load balancer, the first request for each apache process returns "No candidate servers". We have a loop that will exponentially increase the connect time and re-try the connection, but in this condition the loop goes through all iterations immediately, without any timeout
[23:43:41] <Ryan_Lane1> I think for some reason all servers are being immediately added to the blacklist, but can't determine why this would be the case
[23:44:00] <Ryan_Lane1> subsequent requests seem to work (past the ping_interval)
[23:44:29] <Ryan_Lane1> I also occasionally see this condition on apache gracefuls and restarts
[23:44:40] <Ryan_Lane1> (under lesser load)