PMXBOT Log file Viewer


#mongodb logs for Thursday the 22nd of January, 2015

[00:01:28] <hahuang65> is there a way to turn a balancer on immediately?
[00:04:11] <joannac> sh.startBalancer()
[00:15:12] <hahuang65> thanks
[00:42:32] <adoming> hey guys can you find a sub sub document by objectId in a db.collection.find()?
[00:43:13] <cheeser> subdocuments don't typically have IDs but yes you can query by properties of subdocs.
[00:43:35] <adoming> i meant ref actually
[00:44:20] <adoming> in other words i have a grandParentDocument ref parentDocument ref childDocument
[00:45:02] <adoming> i want to in one query db.grandParentDocument.find() by a property of childDocument
[00:45:23] <cheeser> what?
[00:45:45] <adoming> i'll get a paste of my schema to show what i mean
[00:45:48] <adoming> one moment
[00:58:30] <adoming> OK so to reiterate, I am doing a query on this schema http://pastebin.com/TnLm4syg and I am referencing other schemas which in turn are referencing another schema by objectId. My question is how I can db.collection.find() data from a referenced, referenced schema (sorry I'm a noob if this sounds dumb).
[01:14:43] <morenoh149> anyone here doing the mongodb for node devs mooc?
[01:15:33] <joannac> adoming: you are asking a mongoose question.
[01:15:52] <adoming> joannac: ok i'll take it over there
[01:22:24] <morenoh149> what does findOne return if no document matches? http://docs.mongodb.org/manual/reference/method/db.collection.findOne/
[01:22:33] <joannac> nothing?
[01:22:43] <morenoh149> undefined then
[01:23:08] <joannac> no, not undefined. nothing
[01:23:32] <morenoh149> in js I mean
[01:24:33] <joannac> http://pastebin.com/yyd17ESK
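The point joannac's pastebin is making: the Node.js driver's findOne hands back null (not undefined) when no document matches. A plain-JS sketch of why the distinction matters:

```javascript
// findOne yields null when nothing matches; null and undefined behave
// the same under loose equality and truthiness, but not under ===.
const result = null; // what the driver hands back on "no match"

console.log(result === null);      // true
console.log(result === undefined); // false
console.log(result == undefined);  // true -- loose equality conflates the two
if (!result) {
  console.log('no document found'); // both null and undefined are falsy
}
```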
[01:37:45] <morenoh149> `Cannot use a writeConcern without a callback` what do?
[01:40:03] <morenoh149> dammit it differs http://mongodb.github.io/node-mongodb-native/2.0/api-docs/
[02:59:01] <morenoh149> with the node driver. How can I acquire a write lock?
[03:02:04] <morenoh149> 3 operations give a write lock http://docs.mongodb.org/manual/faq/concurrency/#which-operations-lock-the-database
[05:28:58] <Climax777> hi all. philosophical question: I get what mongodb is good for and not good for. but what can one say about mysql? what is it good for and not good for?
[05:39:21] <arussel> I have document {a: 1, b: 2, c:3}, if I add an index on a,b will it be used of query on abc ?
[06:06:18] <DragonPunch> hey pals
[06:06:22] <DragonPunch> im using mongoose mongodb orm
[06:06:36] <DragonPunch> to get the last id in the db for a specific user
[06:06:37] <DragonPunch> do i do this
[06:07:57] <DragonPunch> Message.findOne({Email: asdf@gmail.com}, { sort: {'created_at' :-1} }, function(err,data) { // Do stuff });
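The call above has two problems: the email string is unquoted, and the sort object sits in Mongoose's projection slot, where it is ignored. The usual Mongoose pattern is to chain .sort(); the "latest document" logic itself is just a descending sort plus first result, sketched here in plain JS (model and field names taken from the question):

```javascript
// Mongoose-style intent (sketch, not executed here):
//   Message.findOne({ Email: 'asdf@gmail.com' })
//          .sort({ created_at: -1 })
//          .exec(function (err, doc) { /* doc is the newest message */ });
//
// The underlying "newest document" logic: descending sort, take the first.
const docs = [
  { Email: 'asdf@gmail.com', created_at: 1 },
  { Email: 'asdf@gmail.com', created_at: 3 },
  { Email: 'asdf@gmail.com', created_at: 2 },
];
const newest = docs
  .slice() // don't mutate the original array
  .sort((a, b) => b.created_at - a.created_at)[0];
console.log(newest.created_at); // 3
```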
[07:18:22] <x44x45x41x4E> Hi, I've got a question. I'm backing up a MongoDB instance that is on production using mongodump, will my dump be more reliable if use oplog option? Thanks. http://docs.mongodb.org/manual/tutorial/backup-with-mongodump/#point-in-time-operation-using-oplogs
[07:22:19] <Viesti> hum, thinking of representing a tree structure in mongo as just one document, having operations to add and delete nodes
[07:22:32] <Viesti> finding out that $pull doesn't support remove by array index
[07:22:47] <joannac> x44x45x41x4E: standalone?
[07:23:24] <Viesti> so thinking that maybe child nodes should live under ObjectId keys, instead of in an array
[07:23:29] <x44x45x41x4E> joannac: Nope, it's on a VPS. Just to be clear, what do you exactly mean by 'standalone'?
[07:24:10] <joannac> what kind of mongod deployment are you running?
[07:24:27] <joannac> is it a replica set, sharded cluster, or a single mongod instance (e.g. standalone)
[07:28:05] <x44x45x41x4E> joannac: It's a single mongod instance.
[07:29:50] <joannac> x44x45x41x4E: then there is no oplog
[07:31:00] <x44x45x41x4E> joannac: Alright, thanks. I just thought there are benefits to using the oplog option. Newb to MongoDB. Thanks. :)
[07:31:50] <joannac> there are, but since you're not running a replica set, you can't take advantage of them... :p
[07:37:58] <x44x45x41x4E> joannac: Okay. I'll research on that too. Thanks again. :)
[07:40:11] <morenoh149> can you perform an update within a `toArray`?
[08:00:10] <morenoh149> finally the answer to all my questions http://mongodb.github.io/node-mongodb-native/2.0/tutorials/crud_operations/#toc_5
[08:00:13] <morenoh149> joannac:
[08:11:29] <pcuser> hi
[08:49:44] <Yogurt> Hello.
[08:50:58] <Yogurt> Is there anyone
[08:55:03] <gonace> Hi, is there a way to verify if one have used: db.runCommand( { logRotate : 1 } ) apart from checking the log dir?
[09:00:16] <gonace> Hi, is there a way to verify if one have used: db.runCommand( { logRotate : 1 } ) apart from checking the log dir?
[09:00:57] <Yogurt> gonace no one is here to help you.
[09:01:03] <Yogurt> I tried that before (:
[09:01:22] <gonace> Yogurt, :)
[09:03:19] <Yogurt> Yeah. (: Iam here too. I mean for help. And i asked so many people. So, no one answered. Btw iam hanging around for chat. Coz, it's been over 20 years chatting with mIRC in my life.
[09:03:24] <Yogurt> I feel younger hahah
[09:10:08] <gonace> Yogurt, haha...the old quakenet times :)
[09:10:51] <hurdos> Hi
[09:11:36] <Yogurt> gonace yeah bro. Iam hangin on metal channel right now. Like oldies. (: :d
[09:11:58] <Yogurt> hurdos hi. There is no one in here except gonace and me.
[09:12:10] <hurdos> How to dump and clear collection or db in live mode?
[09:12:47] <Yogurt> (: Sorry dude, i dont have an idea for your problem
[09:14:37] <hurdos> sad
[09:17:55] <Yogurt> So, can i ask you question too ?
[09:53:13] <Tomasso> hello, I read documents from a collection, process and modify them, remove the _id field, and insert them into another collection, and I get DUPLICATE KEY. I do the same but insert into a collection in a different database, and I get the same duplicate key... I'm lost on this. I read there was a bug on smth like that, I use mongo 2.6.1
[09:54:17] <Tomasso> also i don't understand since I removed _id field.. I print the doc to stdout and _id is not there.
[10:55:59] <aaearon> what are others using in regards to python orm-like layers for mongodb
[10:57:53] <Yogurt> aaearon sorry bro. but no one is here to help. you have to believe me. i tried so much (:
[10:58:30] <aaearon> nah i was helped here yesterday, you just have to be patient :)
[11:04:40] <ZadYree> Hello
[11:31:49] <temhaa> hello
[11:31:59] <temhaa> I am looking for a mongodb shell client for windows
[11:34:48] <StephenLynx> did you try installing windows?
[11:34:50] <StephenLynx> linux*
[11:34:54] <temhaa> Also I have another question. in mongo server there are some problem. mongo slave output: http://picpaste.com/mongoslave-7fl71XCc.png
[11:35:29] <temhaa> what is the reason it has a 747-hour lock, I guess
[11:59:14] <temhaa> ?
[11:59:18] <temhaa> I need help
[11:59:35] <temhaa> about master slave replication
[11:59:53] <StephenLynx> what are the servers running?
[11:59:54] <joannac> erm, why?
[12:00:26] <joannac> master-slave is not really supported any more
[12:01:50] <joannac> also, I don't see the problem in your output
[12:03:23] <joannac> you just have a very inactive set
[12:10:59] <temhaa> joannac: How to check master-slave repl.
[12:11:07] <temhaa> joannac: you say I understood
[12:11:52] <temhaa> joannac: Actually our problem is that the master log is very big but the slave logs are very small. I need to check that master-slave replication is working correctly.
[12:53:08] <Tomasso> after modifying an object and inserting it to another collection, i get Duplicate key. I tried to delete the _id entry, and same error... I tried ticket["_id"] = BSON::ObjectId.new and again same error.... i dont know what to do
[12:55:06] <StephenLynx> try printing the object you are inserting before inserting it.
[12:58:51] <Tomasso> im generating the _id by code, this is what it prints now {"_id"=>BSON::ObjectId('54be4d05c783c41d5c000001') ..........
[12:59:23] <StephenLynx> dont bother with the ID.
[12:59:23] <Tomasso> if I delete the _id field prints the document without _id..
[12:59:29] <StephenLynx> yes
[12:59:30] <StephenLynx> do that.
[12:59:40] <StephenLynx> mongo will assign an ID when inserting the object.
[12:59:44] <StephenLynx> just roll with it.
[12:59:56] <StephenLynx> delete it and then save the object.
[13:00:15] <Tomasso> if I delete _id , I keep getting duplicate key...
[13:00:27] <StephenLynx> wait
[13:00:35] <StephenLynx> you are inserting the object without the _id
[13:00:40] <Tomasso> yess
[13:00:41] <StephenLynx> and it says it has a duplicate id?
[13:00:47] <Tomasso> yess
[13:00:51] <StephenLynx> are you printin the error message? it says the duplicate field.
[13:01:02] <Tomasso> in both cases, with _id, and without _id
[13:02:54] <StephenLynx> I think you are messing up something and not actually removing the id. there's no way it would say it has a duplicate field when the object does not even contain such a field
[13:07:24] <Tomasso> yeah.. you were right xD I catch the exception and was printing duplicate key... not the real error message xD
[13:07:33] <Tomasso> i think i need some rest
[13:08:11] <Tomasso> thanks, so much
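The two fixes from the exchange above, sketched in Node-driver-flavoured JS (Tomasso was using Ruby; the idea is the same, and the values here are invented):

```javascript
// Fix 1: actually remove _id before re-inserting into another collection,
// so the server assigns a fresh primary key on insert.
const doc = { _id: '54be4d05c783c41d5c000001', status: 'processed' };
delete doc._id;
console.log('_id' in doc); // false -- safe to insert now

// Fix 2: log the real error message instead of a hard-coded guess,
// which is what hid the actual failure in the conversation above.
try {
  throw new Error('E11000 duplicate key error'); // stand-in for a driver error
} catch (err) {
  console.log(err.message); // print what actually failed, not an assumption
}
```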
[13:17:09] <temhaa> I want to check that everything is ok with master-slave replication
[13:17:21] <temhaa> How to check master slave replication
[14:15:05] <Tomasso> how do i get the size of a document that is in the bson format? if possible from ruby mongo driver better..
[14:26:26] <StephenLynx> that is specific to your language and driver.
[14:27:07] <StephenLynx> I assume the BSON format is just another object
[14:27:23] <StephenLynx> and if you can get the size of an object, you could use the same for these bson objects.
[15:11:08] <kexmex> guys
[15:11:15] <kexmex> how come mongodb packages are not hosted on https
[15:11:17] <kexmex> is that so hard to do?
[15:11:25] <kexmex> and it's not gpg signed
[15:11:31] <kexmex> how do i know i am not downloading some crap
[15:16:05] <cheeser> the .deb and .rpm files are i believe
[15:16:51] <kexmex> i am talking about yum pkg
[15:16:55] <kexmex> for centos
[15:17:16] <cheeser> and the downloads page support https
[15:17:30] <kexmex> i tried it didnt seem to be the case
[15:17:32] <cheeser> you could file a bug on the packaging
[15:17:34] <cheeser> https://www.mongodb.org/downloads
[15:18:19] <kexmex> there's a bug already
[15:18:24] <kexmex> since 2013
[15:18:48] <StephenLynx> indeed, the mongo repos are not on https
[15:19:00] <StephenLynx> if you try and use https instead of http will it work?
[15:19:23] <cheeser> vote it up, then
[15:19:23] <kexmex> doesnt work
[15:19:26] <kexmex> https://downloads-distro.mongodb.org/repo/redhat/os/x86_64/
[15:19:34] <StephenLynx> welp
[15:19:44] <kexmex> it's pretty mind boggling to be honest
[15:19:58] <BurtyB> end of the world job obviously
[15:20:28] <StephenLynx> when you think about what it would take for someone to place a MITM attack on that
[15:20:28] <kexmex> you
[15:20:33] <kexmex> StephenLynx
[15:20:34] <StephenLynx> is not that much of a threat
[15:20:38] <kexmex> so if i am in Russia
[15:20:45] <kexmex> you dont think that file is being replaced at ISP level?
[15:20:49] <kexmex> it's so easy for them to do that
[15:21:17] <StephenLynx> of course, but what would take for someone targeting you to place the attack?
[15:21:22] <kexmex> not me
[15:21:24] <kexmex> targeting everyone
[15:21:26] <kexmex> at ISP level
[15:21:30] <kexmex> why would they target specific persons
[15:22:01] <kexmex> all that encryption stuff is there, but it's just strange that it is not used
[15:22:15] <StephenLynx> ok, now what would be the costs for them to target every single person and not get caught so everyone would become aware of it?
[15:22:33] <StephenLynx> I understand it would be easy for them to place
[15:22:35] <StephenLynx> and they should
[15:22:44] <kexmex> caught, do they care
[15:22:48] <StephenLynx> my argument is just that it's not something to get that concerned about
[15:22:54] <kexmex> maybe not
[15:22:56] <kexmex> but it's pretty annoying
[15:23:01] <StephenLynx> they care because then people would not get caught in it
[15:23:06] <kexmex> people spend time and resources to secure their systems
[15:23:08] <StephenLynx> is like laying a trap in plain field
[15:23:14] <kexmex> stuff like 35 thousand dollar VPN appliances
[15:23:25] <kexmex> and then what's the point when there's stuff like this
[15:23:34] <StephenLynx> contact mongo
[15:23:39] <StephenLynx> just dont lose your sleep over it
[15:23:56] <kexmex> :)
[15:24:29] <StephenLynx> but if you ask me, there is probably something in place
[15:24:45] <StephenLynx> that makes it more secure than it seems
[15:37:55] <kexmex> i guess paid version is signed?
[15:41:40] <StephenLynx> wait, there is a paid version?
[15:42:11] <kexmex> i am guessing there is :)
[15:42:22] <kexmex> it looks like src tarballz are signed
[15:43:18] <StephenLynx> you can just clone using https from github
[15:43:35] <StephenLynx> so why would one pay to have it on a tarball?
[15:43:45] <kexmex> wudnt want latest srcs tho
[15:44:06] <StephenLynx> who would pay to beta test it?
[16:30:37] <bbryan> motd
[16:35:39] <the_drow> Can we drop a collection to free disk space without repairing which requires downtime?
[16:36:49] <StephenLynx> I never had to turn my db off to drop a collection.
[16:51:36] <neo_44> dropping a collection will not free disk space...just FYI
[16:51:40] <neo_44> the_drow:
[16:51:49] <the_drow> I know
[16:52:20] <the_drow> And we can't afford downtime that is caused by running --repair
[16:52:29] <the_drow> neo_44: Is there an alternative?
[16:52:42] <neo_44> yeah
[16:53:10] <neo_44> add a completely new node to the replica set
[16:53:16] <neo_44> let it sync...that will compact
[16:53:27] <neo_44> remove an old node
[16:53:37] <neo_44> repeat until replica set is compacted
[16:53:47] <neo_44> how much data is on the replica set now the_drow?
[16:54:36] <the_drow> We don't have a replica set
[16:55:10] <the_drow> We're a single node. We have more then 300GB of data.
[16:58:24] <neo_44> then make it a replica set
[16:58:36] <neo_44> should never run Mongo in single mode
[16:59:09] <cheeser> doing that's fine. ish.
[16:59:11] <neo_44> except for testing
[16:59:39] <neo_44> cheeser: only way I see compacting without down time...
[17:00:52] <cheeser> well, that's different from "you should never ..."
[17:01:25] <neo_44> I stand behind "you should never run Mongo in single mode in production"
[17:04:32] <cheeser> i do. :)
[17:04:40] <cheeser> as in, i run a single node in prod.
[17:04:56] <cheeser> my needs aren't that big, though, on this one.
[17:23:57] <neo_44> cheeser: one of first things I look at in any architecture is fault tolerance.....with a single node you have no way to fail over, so you have a single point of failure
[17:24:19] <neo_44> you also can't do maintenance to mongo without down time in single mode.
[17:24:57] <neo_44> and from client side a single node is the same as a replica set...so there are no code changes....just connection string needs to be updated to have seed list
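neo_44's rolling-compaction procedure above, modelled as a plain-JS sketch (host names invented; the real steps are rs.add() and rs.remove() in the mongo shell, waiting for each new node's initial sync to finish before retiring an old one):

```javascript
// Each pass adds a freshly-synced node (compact on disk, since initial
// sync rewrites the data files) and then retires one old, fragmented node.
function rollReplicaSet(oldHosts, freshHosts) {
  const members = oldHosts.slice();
  for (const fresh of freshHosts) {
    members.push(fresh); // rs.add(fresh) -- wait for initial sync here
    members.shift();     // rs.remove(oldest) once the new node is synced
  }
  return members;
}

const result = rollReplicaSet(
  ['a:27017', 'b:27017', 'c:27017'],
  ['d:27017', 'e:27017', 'f:27017']
);
console.log(result); // [ 'd:27017', 'e:27017', 'f:27017' ]
```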
[17:57:22] <jayjo> I'm using mongoengine to interact with mongo, is it possible to return an object in stead of a queryset when I query the database?
[18:22:07] <imachuchu> so I need to implement a changelog functionality to an existing collection, tracking only modifications to individual documents. I'm thinking of doing it by appending a subarray on each document that stores a subdocument for each change (so who, when, what, old value, and new value). Since I'm a bit new to MongoDB I'm wondering does this sound good or is there a brighter way?
[18:23:33] <neo_44> jayjo...yes by default it returns a cursor
[18:23:47] <neo_44> but if you specify in the find method you can get the object
[18:23:51] <neo_44> example coming
[18:25:21] <neo_44> jayjo: Client.objects.get(__raw__={"_id": oid}) this will return single document
[18:27:27] <neo_44> imachuchu: why do you need to track all deltas?
[18:27:59] <neo_44> imachuchu: could do you need to know what changes or just that something changed? ... is this to roll back a change?
[18:34:13] <GothAlice> imachuchu: If one needs to track document changes over time, one could watch the oplog the same way MongoDB replication does, making note of updates across the desired collection. Then, to track additional information (beyond the literal query used to make the change) you can use $comment within the query to pass additional data through. I do this to pass the User ID who requested that update.
[18:35:26] <GothAlice> https://github.com/cayasso/mongo-oplog is an example library to provide this type of low-level oplog functionality.
[18:36:38] <neo_44> Depending on the use case I agree with GothAlice....this would be great for auditing if needed
[18:37:08] <imachuchu> neo_44: it's so that a manager can audit to see what all has been changed in a record and by what employee (essentially)
[18:37:31] <GothAlice> Bam, imachuchu: mongo-oplog will be your friend. :)
[18:37:36] <imachuchu> GothAlice: that's almost exactly why I figured I should ask here, let me take a look and see if that's closer to what I want/need
[18:37:51] <imachuchu> GothAlice: I'll go take a look at that too, thanks!
[18:39:22] <neo_44> imachuchu: I would add the $comment as GothAlice suggested on all updates to the record
[18:39:38] <neo_44> that will give you WHO....that is what I have done in the past
[18:40:03] <neo_44> imachuchu: if you use a service layer it could get tricky because the WHO would be the service layer not the person...FYI
[18:42:45] <neo_44> imachuchu: you could also use a separate capped collection to write the old document and who changed it....this would clean itself up over time but would give you insight for a certain time period
[18:47:46] <imachuchu> neo_44: that was coming up as my next question, is there any reason this shouldn't be stored inside a separate collection inside MongoDB? While there shouldn't be too many changes (<100, most likely <50) per record I'm still a firm believer in preparing for the worst
[18:48:52] <saml> {"x": {}} how can I query for docs where x has at least one key? such as {"x":{"y":1}}
[18:49:07] <imachuchu> the "who" will be a bit more difficult, since it's who in the application changed the record, not who in MongoDB, but I'll look into implementing mongo-oplog and see what I can find
[18:49:34] <neo_44> imachuchu: are the changes atomic and replace the entire document? or a lot of little updates?
[18:50:24] <neo_44> imachuchu: I would use a separate collection and bucket all the changes so 1 query would return all changes for that doc over time
[18:50:28] <neo_44> make it a capped collection
[18:50:31] <imachuchu> neo_44: since I'm writing the frontend that's doing the changes right now, whatever's best. I believe they are little changes but I can make them all one (I'm using Mongoengine and haven't really checked)
[18:51:31] <neo_44> if you use mongoengine and just save the object (object.save()) it will replace the entire document....so that is multiple changes at once....then you could just override the .save() to copy the document to the new collection first
[18:51:39] <neo_44> then save it....you will have both the old and new everytime
[18:52:09] <neo_44> I use mongoengine but have wrapped it in a data access layer to abstract away the mongoengine piece for the UI/API guys that use my DAL.
[18:52:43] <neo_44> saml: {x : { $ne : null}}
[18:52:57] <saml> neo_44, that'll pick {x:{}}
[18:53:03] <saml> i don't want to pick that bro
[18:53:46] <neo_44> saml: you can't query on keys
[18:53:48] <neo_44> only values
[18:54:08] <neo_44> you could use array or there is x.$...that returns first matching value
[18:54:17] <neo_44> or a nested sub document
[18:54:36] <saml> db.webscale.find({x:{$ne:{}}}) hehehehehehehehehehehehhe
[18:54:49] <imachuchu> neo_44: hmm... I see. I'll need to look into what I can get because storing a full copy on every update to the capped collection seems wasteful
[18:54:49] <saml> then i can go from there via aggregation framework
[18:55:33] <saml> how do i aggregate each key of x?
[18:56:07] <neo_44> imachuchu: what seems wasteful? duplicating the data?
[18:56:07] <saml> db.asdf.aggregate({$match:{x:{$ne:{}}}}, .... want to get count for each key x.$key
[18:56:19] <neo_44> saml: pipe it to count
[18:56:28] <neo_44> agg lets you pipe one result to another query
[18:56:34] <saml> example
[18:57:27] <neo_44> http://docs.mongodb.org/manual/reference/operator/aggregation/group/
[18:57:29] <neo_44> bottom of page
[18:57:40] <imachuchu> neo_44: duplicating the whole document on every commit when I'm only looking at storing the deltas
[18:58:27] <neo_44> imachuchu: with a capped collection there is no fragmentation and how big are the docs really? It would be easier to just store a copy than try to only store the deltas...
[18:58:41] <neo_44> just depends on the complexity you want in the code vs ease of use
[18:58:59] <saml> i think i need to split {x:{a:1,b:1}} into {'x.a':1} and {'x.b':1} and then group
[18:59:25] <neo_44> saml: you should use elastic search for aggregation
[18:59:35] <neo_44> ;)
[18:59:35] <saml> yah elastic search is web scale
[19:00:12] <neo_44> if you just need to agg something it is fast..
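The split-then-group saml describes, sketched client-side in plain JS (sample documents invented; in later MongoDB versions, 3.4+, the $objectToArray aggregation operator can do this kind of per-key work server-side):

```javascript
// Count how many documents contain each key of the subdocument `x`.
const docs = [
  { x: { a: 1, b: 1 } },
  { x: { a: 1 } },
  { x: {} }, // excluded by the {x: {$ne: {}}} match from the conversation
];
const counts = {};
for (const doc of docs) {
  for (const key of Object.keys(doc.x)) { // empty objects contribute nothing
    counts[key] = (counts[key] || 0) + 1;
  }
}
console.log(counts); // { a: 2, b: 1 }
```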
[19:00:46] <imachuchu> neo_44: true, and the documents aren't that big, so it's not too big of a deal. I'll go implement something and come back here with any questions. Thank you for all of your help
[19:00:57] <neo_44> imachuchu: any time
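imachuchu's original embedded-subarray idea, sketched in plain JS (field names invented for illustration; the $comment/oplog route GothAlice describes requires a replica set, which is why the simple embedded log may be the pragmatic choice here):

```javascript
// Append a who/when/what delta entry to a document's embedded change log.
function recordChange(doc, who, field, newValue) {
  const entry = {
    who: who,                // application-level user, not the MongoDB user
    when: new Date(),
    field: field,
    oldValue: doc[field],    // captured before the mutation
    newValue: newValue,
  };
  doc[field] = newValue;
  (doc.changes = doc.changes || []).push(entry);
  return doc;
}

const record = { status: 'open' };
recordChange(record, 'alice', 'status', 'closed');
console.log(record.status);              // 'closed'
console.log(record.changes[0].oldValue); // 'open'
```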
[20:56:53] <Sengoku> Hey, what's the standard maven mongo migration plugin (if any)?
[21:01:42] <cheeser> i don't think there is one.
[22:01:27] <FunnyLookinHat> I have a collection of "particles" - and each particle belongs to a beam ( one beam can have many particles ). https://github.com/funnylookinhat/tetryon/blob/master/spec/particles.txt
[22:01:44] <FunnyLookinHat> I chose to embed the beams inside the particles because it will make generating reports and whatnot far faster
[22:02:03] <FunnyLookinHat> But I'm seeing an issue - I want the beam attached to each particle to always be the same -
[22:02:43] <FunnyLookinHat> The only way I can think to do that is, when I create a new particle, read another with the same beam.id and then use that beam information in creating the new particle
[22:02:51] <FunnyLookinHat> Is that... the "mongo" way to do it?
[22:11:39] <kexmex> guys
[22:13:04] <kexmex> so like, due to non-existence of a signed mongodb rpm for centOS, i am trying to build my own. How do I build rpm, after building from source?
[22:13:11] <kexmex> i tried to use the spec in rpm folder
[22:17:31] <francescu> saml: and why not checking the $type? if x is Object and ne to {}
[22:18:15] <francescu> saml: oops sorry my history was too high
[22:21:53] <joannac> kexmex: what? http://docs.mongodb.org/master/tutorial/install-mongodb-on-red-hat-centos-or-fedora-linux/
[22:22:19] <kexmex> joannac: not signed
[22:22:30] <kexmex> and repo is not https
[22:23:00] <cheeser> this? still?
[22:23:25] <kexmex> cheeser: what
[22:23:39] <cheeser> signings and repos
[22:23:44] <kexmex> not valid?
[22:24:09] <cheeser> well, nothing's changed in the last 12 hours on that front.
[22:24:16] <kexmex> i guess you didnt read up
[22:24:23] <cheeser> i did.
[22:24:30] <kexmex> i am building from source
[22:24:33] <kexmex> since git is https :)
[22:24:41] <kexmex> but i want to create rpms
[22:24:42] <kexmex> from the build
[22:28:09] <kexmex> cheeser: what kinda company do you work for?
[22:28:51] <cheeser> mongodb :)
[22:30:18] <joannac> kexmex: the repo has an rpm subdirectory with .spec files, that should be enough
[22:30:55] <kexmex> joannac: tried rpmbuild -ba rpms/mongo-org...spec
[22:31:08] <kexmex> is that the way to do it?
[22:31:17] <kexmex> it complained about some syntax errors, didnt have time to look into it yet
[22:31:25] <kexmex> this is on centos
[22:32:18] <kmaru_p00ps> is there a way to make rs.status() print all status in a line instead of the default pretty() way?
[22:33:22] <kexmex> kmaru_p00ps: ugly()
[22:33:56] <joannac> kmaru_p00ps: don't think so. what's the use case?
[22:34:34] <kmaru_p00ps> joannac: get all info for hosts in one line, like I can do with the other db commands and parse it in a shell script
[22:34:56] <kexmex> isnt it just json
[22:34:58] <kexmex> u can parse it either way
[22:35:32] <kmaru_p00ps> parsing pretty() with just basic shell script? and not using other tools..can be done..but not easy.
[22:35:54] <kexmex> oh
[22:36:18] <kexmex> use jq
[22:36:21] <kexmex> and pipe it in there
[22:36:44] <kexmex> http://xmodulo.com/how-to-parse-json-string-via-command-line-on-linux.html
[22:37:21] <kmaru_p00ps> that's still installing and using other tools...
[22:37:32] <kexmex> well ok :)
[22:38:08] <kmaru_p00ps> sorry...don't want to have to use other tools. why is rs.status() so special that it's default is pretty()? *sigh*
[22:38:20] <kmaru_p00ps> with no other option
[22:38:41] <kexmex> so if they add more elements
[22:38:49] <kexmex> to rs.status() or other commands, doesn't your code break
[22:38:53] <kexmex> better to use a json parser imho
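On kexmex's json-parser point: the shell's pretty-printing is only formatting, and JSON.stringify with no spacing argument emits a single line. A sketch with a stand-in object (the real rs.status() document also contains dates and timestamps; something like `mongo --quiet --eval 'JSON.stringify(rs.status())'` may work, untested here):

```javascript
// Stand-in for (part of) an rs.status() document:
const status = {
  set: 'rs0',
  members: [
    { name: 'host1:27017', stateStr: 'PRIMARY' },
    { name: 'host2:27017', stateStr: 'SECONDARY' },
  ],
};
// No third argument => compact single-line output, easy to pipe to jq/grep:
const oneLine = JSON.stringify(status);
console.log(oneLine.includes('\n')); // false
```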
[22:41:12] <joannac> kexmex: open a SERVER ticket if it bothers you that much
[22:41:22] <joannac> wait, not kexmex. kmaru_p00ps
[22:41:47] <kexmex> joannac: so how ot build rpm? :)
[22:44:21] <joannac> kexmex: no idea. I suggest you find some time to look into the errors you got
[22:44:43] <kexmex> can you look up on your buildserver? :)
[22:47:48] <joannac> kexmex: file a BUILD ticket
[22:48:02] <kexmex> :)
[22:49:14] <joannac> actually, probably a DOCS ticket