PMXBOT Log file Viewer


#mongodb logs for Wednesday the 28th of January, 2015

[00:12:31] <hephaestus_rg> is there a fast way to get the first item added to a mongodb collection? db.articles.first isn't a function
[00:13:22] <hephaestus_rg> articles being the name of the collection i'm concerned with
[00:19:22] <jumpman> hephaestus_rg: what makes this item the 'first' in the collection? first chronologically?
[00:20:08] <jumpman> For standard collections, natural order is not particularly useful because, although the order is often close to insertion order, it is not guaranteed to be.
[00:20:39] <jumpman> for instance: if a document is updated and doesn't fit in its currently allocated space, or if a new document is inserted into one of the gaps that might be left by something like that
[00:21:03] <jumpman> If you want a specific order, you must specify one. Except if you use a capped collection
[00:21:51] <jumpman> unfortunately with capped collections... documents cannot be deleted or moved (resized).
[00:22:31] <jumpman> If your usecase fits a capped collection then that's a fairly easy way to implement it. Otherwise, you'll need to add a timestamp field
[00:22:56] <joannac> Or use the default objectId
[00:23:44] <jumpman> can you use the default objectId to order by insertion? i thought they weren't supposed to be countable? are those the same thing?
[00:26:57] <joannac> jumpman: http://docs.mongodb.org/manual/reference/object-id/
[00:27:11] <joannac> the objectid's first 4 bytes is a timestamp
[00:42:48] <GothAlice> joannac: Can the aggregation pipeline do date-like things with ObjectIds yet?
[00:48:43] <joannac> GothAlice: don't think so
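joannac's tip can be sketched without a server: an ObjectId's first 4 bytes are a big-endian seconds-since-epoch timestamp, so the insertion time falls out of the hex string, and sorting on _id approximates insertion order (which answers hephaestus_rg's original question). The hex value below is invented for illustration.

```javascript
// Recover the embedded timestamp from an ObjectId's hex string.
// Per http://docs.mongodb.org/manual/reference/object-id/ the first
// 8 hex chars (4 bytes) are seconds since the Unix epoch.
function objectIdTimestamp(hex) {
  var seconds = parseInt(hex.substring(0, 8), 16);
  return new Date(seconds * 1000);
}

// In the shell, the "first" document by insertion time is approximately:
//   db.articles.find().sort({ _id: 1 }).limit(1)
var ts = objectIdTimestamp("54c8a287aaaaaaaaaaaaaaaa");
```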
[03:39:57] <xcesariox> where do i put this "results = Book.collection.map_reduce(map, reduce, out: "vr")" command syntax into? into rails console or mongodb console directly? https://gist.github.com/shaunstanislaus/0f8d87939c0ab01ce5d6
[03:42:49] <cheeser> xcesariox: when you asked that yesterday, i told you it wasn't valid shell syntax
[04:05:50] <xcesariox> cheeser: so how does mapreduce work
[04:06:04] <xcesariox> cheeser: cause the book i am following doesn't tell me where to enter these commands/functions
[04:11:01] <cheeser> looks like a ruby driver thing.
[04:32:10] <Freman> "$err" : "BSONObj size: 24852869 (0x17B3985) is invalid. Size must be between 0 and 16793600(16MB) First element: _id: { task_id: \"54add46f370a791400350a68\", failed: false }",
[04:32:16] <Freman> I think there's too much logs
[04:39:55] <Freman> db.log_outputs.aggregate([{$sort: {date: -1}}, {$group : {_id: {task_id: "$task_id", failed: { $ifNull: [ "$failed", false ] }}, logs: {$push: "$_id"}}}], {allowDiskUse: true}).forEach(function(doc) {doc.logs.splice(0, 50); db.log_outputs.remove({_id: {$in : doc.logs}})});
[04:41:22] <joannac> probably
[04:41:31] <joannac> wait, no
[04:42:14] <joannac> wait, yes. I can't count, sorry. yes, your output document looks like it's too long
[04:44:44] <Freman> have to find another way to prune it huh?
[04:45:14] <Freman> (in future this will be run every 60 seconds, so there should never be that many logs, this is just the first "oh crap we're out of disk space" cleanup)
[04:47:39] <joannac> put a $limit in there?
[04:48:05] <Freman> yeh thinking that's the easiest option
[04:53:59] <Freman> lets try a 500,000 limit
[07:12:59] <mah454> Hello
[07:13:22] <mah454> Can share mongodb database on san storage ?
[07:13:44] <mah454> I want to run multiple mongodb instance on one database .
[07:13:55] <joannac> wait what?
[07:14:29] <joannac> mah454: You want to run multiple mongod processes that all share the same dbpath?
[07:14:42] <mah454> joannac: yes
[07:14:45] <joannac> NO
[07:14:47] <mah454> can do that ?
[07:14:50] <joannac> 100% absolutely not
[07:15:02] <joannac> what's the usecase where you need to do that?
[07:15:43] <mah454> joannac: I use docker linux container .
[07:16:40] <mah454> joannac: I try to use single storage for all mongodb processes .
[07:17:22] <joannac> why?
[07:18:20] <joannac> okay, let me elaborate. why do you need more than one mongod process? why can't they all connect to one mongod?
[07:18:28] <mah454> joannac: try to find new idea for data replication .
[07:19:55] <mah454> and HA
[07:20:25] <joannac> okay, well the answer is no
[07:41:21] <cxz> hi, i'm trying to filter an array in an array where the inner array has a key which should match one value and another key which should not match, how can i do this?
[07:43:58] <joannac> cxz: pastebin an example
[07:53:33] <cxz> joannac: http://pastebin.com/LV9teQvq
[07:54:13] <cxz> in that example, i want the first 2 items in the boxes array because it has the 'name' of '1' and an inner name of 'Test' in the 'details' array
[07:54:40] <cxz> It has to match both criteria
[07:56:33] <joannac> why the hell is details an array instead of a document?
[07:57:24] <joannac> cxz: ^^
[07:58:47] <cxz> i dont know
[07:58:49] <cxz> thats just how it is
[07:59:13] <cxz> i would change it if i could
[07:59:14] <cxz> but i cant
[07:59:23] <joannac> why not?
[07:59:28] <cxz> uh, work
[08:02:16] <cxz> yes indeed
[08:03:49] <joannac> Go to your boss / PM and say "we need to change the schema because we cannot query what we need to without scanning every document in the collection"
[08:04:58] <cxz> but i guess we need to scan every document anyway?
[08:05:14] <joannac> okay then
[08:05:21] <cxz> i'm just asking
[08:05:39] <cxz> i mean we have to find that name: 1 and name: Test
[08:05:51] <cxz> which means traversing pretty deep
[08:06:36] <joannac> db.yourcoll.find().forEach(function(o) {<iterate the items array, iterate the boxes array, iterate the details array>})
[08:06:44] <joannac> It will be *really* slow
[08:16:51] <cxz> ok
[08:16:52] <cxz> thanks
[08:17:33] <joannac> you should really fix your schema.
[08:17:56] <joannac> the above is going to perform terribly. I do not recommend you actually do that
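joannac's <iterate the items array, iterate the boxes array, iterate the details array> placeholder at 08:06 could be fleshed out roughly as below, run here against a plain JS document rather than a live collection. The items → boxes → details shape and the "1"/"Test" values are assumptions pieced together from the conversation, not cxz's real schema.

```javascript
// Nested scan over items -> boxes -> details, collecting boxes whose own
// name is "1" AND whose details array contains an element named "Test".
// (Assumed shape; in the shell this body would sit inside
// db.yourcoll.find().forEach(...), which is why every document gets scanned.)
function matchingBoxes(doc) {
  var hits = [];
  (doc.items || []).forEach(function (item) {
    (item.boxes || []).forEach(function (box) {
      var nameOk = box.name === "1";
      var detailOk = (box.details || []).some(function (d) {
        return d.name === "Test";
      });
      if (nameOk && detailOk) hits.push(box);
    });
  });
  return hits;
}
```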
[08:19:24] <cxz> ok
[08:19:31] <cxz> i shall take that into consideration
[08:19:35] <joannac> aha, having looked again
[08:20:19] <joannac> There are 2 ways you can search arrays: return the whole array, or return 1 element in an array
[08:20:38] <joannac> You want 2 elements from the boxes array: not possible
[08:21:50] <joannac> If you ever find yourself wanting multiple elements in an array, your schema is wrong for mongodb
[08:23:39] <cxz> but if boxes was a dict how would that work?
[08:24:11] <joannac> how do you distinguish boxes?
[08:24:22] <cxz> hmm, well they're unique
[08:24:27] <joannac> unique how?
[08:25:00] <joannac> your first 2 elements in the boxes array are identical
[08:25:52] <cxz> Pretend we only have 1 of those elements
[08:26:01] <cxz> I've modified that schema from our real data
[08:26:38] <joannac> can you actually have multiple items?
[08:26:49] <cxz> no i dont think so
[08:27:05] <joannac> what's the difference between an item and a box then?
[08:27:46] <cxz> well we have many boxes
[08:29:26] <joannac> okay, well
[08:29:35] <joannac> each box should be a document. a top level document
[08:29:50] <cxz> i know we should change our schema, but we really cant right now
[08:30:13] <joannac> what does "we can't right now" mean? You can't actually query your database the way you want
[08:30:21] <cxz> i know
[08:30:24] <cxz> we're stuck
[08:31:13] <joannac> If you can't query the database, why have a database at all?
[08:33:01] <cxz> jsut stuck right now, thats all
[08:33:05] <cxz> trying to figure it out
[08:33:24] <joannac> okay
[08:33:37] <joannac> well, once you're ready to change the schema, let me know
[08:33:44] <cxz> ok
[09:06:07] <a|3xx> hi
[09:07:49] <a|3xx> i am trying to compile something and i am getting the following types of errors, would anybody happen to know what could be wrong? /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/../../../../lib64/libmongoclient.a(sasl_client_session.o): In function `mongo::(anonymous namespace)::_mongoInitializerFunction_SaslClientContext(mongo::InitializerContext*)': (.text+0xbd7): undefined reference to `sasl_errstring'
[09:56:58] <Ravenhearty> where can i talk with someone that devs the official mongodb C# driver
[10:46:27] <joannac> Ravenhearty: what's the question?
[10:47:40] <joannac> Ravenhearty: if it's along the lines of "I found a bug / I have a feature request", file a ticket in the CSHARP project at jira.mongodb.org
[11:03:05] <Ravenhearty> well i'd like to ask why the BsonIgnoreExtraElements attribute doesn't work in child classes
[11:03:14] <Ravenhearty> i mean it should work
[11:16:16] <StephenLynx> classes?
[11:25:47] <nyssa> hello everyone. this is kind of a last-resort help call for me, as I am completely stuck with restoring a database after some files got corrupted. after hours of trying to fix it on the go, I finally did a mongodump with repair and then mongorestore to a replicaset - but now any query to one of the databases results in "assertion 13051 tailable cursor requested on non capped collection" - the problem is - the error pops up on a primary in a replicaset, and if
[11:25:49] <nyssa> I move it to a standalone the problem still persists. apparently it has something to do with master-slave replication but this is not the case here and I was wondering if any of you ran into something remotely similar?
[11:53:37] <cxz> joannac: we were able to solve our problem using $and: [{'name': }, {'boxes.details: elemmatch...}]
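cxz's message is truncated, but a query of roughly that shape would pair a plain field match with $elemMatch, which constrains several fields of a single array element at once. The field names here are guesses based on the earlier pastebin discussion, not the real schema.

```javascript
// Guessed shape of the $and query cxz describes (the original message was cut off).
var query = {
  $and: [
    { "boxes.name": "1" },
    { "boxes.details": { $elemMatch: { name: "Test" } } }
  ]
};
// In the shell: db.yourcoll.find(query)
```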
[12:40:54] <salty-horse> Hi. I noticed a regression after upgrading from 2.4 to 2.6. I have count() queries on _id prefix (using "_id": /^prefix/) that don't seem to be indexOnly. They did before. now they end up taking hours. Could not find any documentation about it online. has anyone experienced this?
[12:50:51] <StephenLynx> there was a dude with a problem regarding indexes these days
[12:51:07] <StephenLynx> and it ended up being a bug marked to be solved in 2.9 or so
[12:51:19] <StephenLynx> I don't know if it was the same issue that you are experiencing.
[12:55:10] <salty-horse> StephenLynx, I remember there's a problem with using covered indexes in aggregations. do you mean that?
[12:55:25] <StephenLynx> don't remember.
[12:55:29] <salty-horse> will check. thanks
[12:55:47] <salty-horse> btw, since 2.8 is now 3.0. 2.9 is what? :)
[12:55:49] <StephenLynx> I just remember it was something regarding indexes and caused a huge loss of performance.
[12:56:11] <StephenLynx> I don't know, I just remember it was what it said on jira.
[12:56:32] <StephenLynx> maybe its like 31/02 :v
[13:15:55] <salty-horse> StephenLynx, perhaps this is my issue: http://stackoverflow.com/a/24958834/16081 :(
[13:40:52] <kexmex> hi
[13:42:12] <kexmex> i signed up to MMS and I got an email from what seems to be a Sales person, asking if I need any help. I asked him a couple of simple questions that I need answered, but instead of getting the answer, he asked me if I looked at enterprise offerings. I replied saying that we are not, yet, after which he just told me to f-off by saying "i advise you to use community resources and local meetups"--
[13:42:41] <kexmex> that's nasty
[13:43:36] <StephenLynx> what is mms?
[13:43:38] <cheeser> he didn't tell you fuck off. he's pointing you to the community resources.
[13:43:41] <cheeser> mms.mongodb.com
[13:43:48] <joannac> kexmex: are you comfortable PMing me their name?
[13:43:53] <kexmex> yes
[13:44:00] <kexmex> yea he didnt literally say f-off
[13:44:06] <kexmex> but he pissed me off a lot
[13:44:29] <StephenLynx> lol he was the one that contacted you
[13:44:38] <StephenLynx> right?
[13:45:21] <kexmex> yes
[13:45:24] <kexmex> and asked to help me
[13:45:38] <StephenLynx> yeah, he fucked up.
[13:46:44] <StephenLynx> if hes willing to help only paying customers, he shouldn't just contact anyone that signed up for the free trial.
[13:46:45] <kexmex> like
[13:46:45] <kexmex> if he's offering help
[13:46:45] <kexmex> he should be ready to help regardles
[13:46:49] <kexmex> take the hit or whatever
[13:46:53] <StephenLynx> aye
[13:46:56] <kexmex> call tech buddies and answer my 2 simple questions
[13:47:26] <StephenLynx> welp, I guess new he is at least get called on his bullshit by someone.
[13:47:30] <StephenLynx> now he is*
[13:47:50] <StephenLynx> salespeople are a blight on tech
[13:50:45] <Freman> amen
[13:52:00] <Freman> var f = 1; while (f > 0) { f = 0; db.log_outputs.aggregate([{$sort: {date: -1}}, {$limit: 900000}, {$group : {_id: {task_id: "$task_id", failed: { $ifNull: [ "$failed", false ] }}, logs: {$push: "$_id"}}}, {$project: {_id: 1, logs: 1, count: {$size: '$logs'}}}, {$match: {count: {$gt: 50}}}], {allowDiskUse: true}).forEach(function(doc) {doc.logs.splice(0, 50); db.log_outputs.remove({_id: {$in : doc.logs}}); f = 1}); }
[13:52:00] <Freman> hehehe
[13:52:42] <Freman> ( I know that looks terrible, but originally I was actually using f for counting, before I just went "meh" now it's boolean )
[13:53:35] <Freman> now only 3k log entries stored vs 13457639273
[13:53:47] <Freman> for one process
[13:56:02] <StephenLynx> I just console.log anything :v
[13:56:14] <StephenLynx> and then I read on /var/log/messages
[15:10:20] <RoyK> hi all. I have mongodb 2.6.5 on this RHEL7 server, and after a yum update (to kill the ghost bug), mongod just died. This is what the logs say when I try to start it. http://paste.debian.net/142667/
[15:11:42] <bin> hey guys
[15:12:16] <bin> what is the most convenient way to check if mongod is up from java?
[15:15:18] <kali> what about a command "ping" on the "test" database ?
[15:16:38] <cheeser> or a listCollections call
[15:17:00] <cheeser> i don't think the driver exposes the ping command but you can call getCollectionsName() on DB
[15:18:17] <kali> no, i checked, it's not exposed.
[15:18:32] <kali> but ping will require slightly less work from the server...
[15:19:15] <kali> and you can get it through the generic command method
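The ping command kali mentions is just a one-field document, which is why it can go through the driver's generic command method even without a dedicated helper. A sketch of the check, with the shell form in the comments:

```javascript
// The cheap liveness check kali suggests: send { ping: 1 } and look at "ok".
// From the shell: db.getSiblingDB("test").runCommand({ ping: 1 })
var pingCommand = { ping: 1 };

// A healthy server answers { ok: 1 }; anything else (or no answer) means down.
function isUp(response) {
  return !!response && response.ok === 1;
}
```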
[15:21:28] <bin> cheeser: i call getDatabaseNames()
[15:21:36] <bin> but with 2.12.5
[15:21:47] <bin> There is something that slows the check ...
[15:21:57] <cheeser> ?
[15:22:23] <bin> I have 3 addresses and i check them in a cycle.. every time the second or third host check blocks longer
[15:23:36] <bin> I mean it blocks longer on the address which is down
[15:23:43] <bin> is there any timeout where i can set to the client ?
[15:25:29] <FunnyLookinHat> Anyone have a preferred third-party vendor for hosted databases? I'm looking at objectrocket ( from Rackspace ) and compose.io - but suggestions would be nice
[15:27:33] <bin> done,fixed ... default timeout is 10s woah
[16:11:39] <kexmex> so how to make mms setup a replicaset that communicates via ssl?
[16:27:46] <kali> FunnyLookinHat: mongolab and mongohub are popular choices
[16:28:02] <kali> no. not mongohub
[16:28:07] <kali> mongohq, sorry
[16:28:24] <kali> (which is now compose.io)
[16:28:36] <FunnyLookinHat> Ah right - compose.io - I've seen them too
[16:49:56] <lilgiant> hi all of you, short question: is it a good idea to use mongodb to store images (size not bigger than 1M) or is it better to keep image files in a separate folder?
[16:56:53] <neo_44> lilgiant: s3 bucket with reference in Mongo
[16:56:56] <neo_44> that is how I store them
[17:00:09] <GothAlice> FunnyLookinHat: Be aware that you pay a premium for "cloud" hosted services. If I were to host my own data at compose.io it'd cost me half a million a month.
[17:02:30] <lilgiant> neo_44: thanks, I think i will also just store a reference to the images in mongodb, although i like the idea of having the data centralized in a db.
[17:04:08] <neo_44> lilgiant: images never belong in a database.....only actionable, query-able data. All the metadata for an image should be stored...just not the image. This is just a firm opinion that I have.
[17:04:11] <GothAlice> lilgiant: I take both approaches, my main personal dataset is mostly "attachments" in GridFS, some of my client applications need the images to be more easily web-accessible, so we S3.
[17:04:20] <StephenLynx> is better to keep files off the db because you can stream them from disk easily
[17:04:22] <GothAlice> neo_44: GridFS isn't really "database" per se.
[17:05:24] <GothAlice> Also, running my own iSCSI RAID arrays and 1U boxen to run MongoDB GridFS on top of is actually less expensive than "cloud file" storage like S3, which tend to run around $1000/TiB/yr: http://cl.ly/image/0t1x2Q2L1u0E
[17:05:47] <blizzow> I have a bunch of large (+80GB, 150mil records) collections I'm trying to mongorestore into a sharded replicated 2.6 cluster. I'm getting a whopping 700 inserts/second. How the heck can I speed this thing up?
[17:06:04] <lilgiant> convinced :)
[17:06:42] <FunnyLookinHat> GothAlice, dang - that must be a lot of data... :) I'm starting out with < 200 MB generated data per month
[17:06:52] <GothAlice> FunnyLookinHat: 25, almost 26 TiB.
[17:07:00] <FunnyLookinHat> So at that level I'll take the easy on-boarding and just export my data elsewhere if necessary :)
[17:07:03] <FunnyLookinHat> Oy
[17:07:06] <GothAlice> ;)
[17:07:10] <FunnyLookinHat> How much of that is indexes? ;)
[17:07:43] <GothAlice> Very little; my primary metadata collection explicitly doesn't have indexes on things. There's too much data, not even the indexes would fit in RAM. (It's a write-heavy dataset, and read performance isn't a concern at all.)
[17:08:07] <GothAlice> About 1.2% of the storage size is actually nothing but the field names in each record.
[17:08:31] <FunnyLookinHat> Wow - sounds like a fun project :D
[17:08:39] <GothAlice> "Exocortex" — even has a fun name. :D
[17:11:58] <neo_44> GothAlice: if reading the data isn't a concern...then why are you collecting it in mongo? Why not just to S3?
[17:18:54] <lilgiant> one more question: i read a little bit about the mongo-connector. is it a better approach to use it to keep mongo and solr in sync or is it better to do this in application level?
[17:19:46] <lilgiant> right now i use mongoose post save function
[17:25:12] <GothAlice> neo_44: Because for the cost of S3 I could replace every drive in all three of my arrays every month.
[17:28:37] <GothAlice> neo_44: I also have better service-level reliability than S3, for example, the primary has an uptime of 1048 days… so far. ;)
[17:29:19] <neo_44> doesn't s3 have an uptime forever?
[17:29:29] <neo_44> just have to wait for the data to distribute, right?
[17:30:01] <GothAlice> neo_44: Tell that to the major sites disappearing when AWS has "issues". ;)
[17:30:08] <neo_44> lol
[17:31:00] <GothAlice> The last time I used AWS they locked up three zones worth of EBS volumes which caused cascading cross-zone failures despite SLA guarantees that separating your infrastructure across zones would isolate you from issues. Took three days of reverse-engineering InnoDB structures to get our data back. ¬_¬
[17:33:05] <GothAlice> neo_44: We learned from that incident. Now our application DB hosts don't have, use, or need permanent storage at all. We have a "no moving parts" and "automated recovery from armageddon" philosophy for infrastructure. :)
[17:45:43] <igreer_> There shouldn't be any issue with using mongodb to process transactions (ONLY MONGODB), correct? I am new to the language and wanted to ask this simple question before digging too deep into the code.
[17:46:06] <cheeser> depends on what you mean by transactions
[17:46:06] <GothAlice> igreer_: MongoDB itself has no concept of "transactions".
[17:46:15] <cheeser> GothAlice: not across documents anyway. ;)
[17:46:32] <GothAlice> Atomic operations != transactions.
[17:46:43] <igreer_> But what about two-phase commits?
[17:46:59] <cheeser> GothAlice: they're functionally the same
[17:47:34] <GothAlice> igreer_: Technically possible. Generally I advise people who need transactional safety to use a database that intentionally does transactions. Postgres or Maria. ;)
[17:47:46] <igreer_> I know. I know.
[17:47:58] <igreer_> I'm just having issues with PostgreSQL on my local machine
[17:48:19] <GothAlice> igreer_: http://postgresapp.com if you're on Mac.
[17:48:26] <igreer_> Got that
[17:48:57] <igreer_> the actual problem is with a rails application
[17:52:15] <GothAlice> A ha.
[18:04:13] <StephenLynx> yeah, don't think mongo and nosql is a panacea.
[18:04:25] <StephenLynx> there are clear uses for both relational and nosql dbs.
[18:06:04] <NoOutlet> Hello everyone.
[18:53:04] <macbroadcast> hello all, i need to create a config file for mongodb on ubuntu , what are the correct file permissions ?
[18:53:34] <neo_44> StephenLynx: there are clear uses for s3,relational, document, key value, and other ways to store data. Right tool for the problem I always say :)
[18:53:53] <neo_44> macbroadcast: do you have a mongodb user?
[18:53:58] <macbroadcast> yes
[18:57:39] <neo_44> I believe you can just chown mongodb:mongodb the config file
[18:58:00] <neo_44> macbroadcast: depends on what your config file is doing
[18:58:03] <macbroadcast> the package i want to install just uses mongodb commands and does not specify a config file, so i guess i need to do it myself
[18:58:16] <neo_44> what package?
[18:58:47] <neo_44> macbroadcast: if you 'yum install' mongodb it will create the config file and init script for you
[18:59:01] <neo_44> the config file doesn't need specific permissions...just the init file
[19:00:18] <macbroadcast> i see, sounds easier
[19:10:35] <theRoUS> any mongoid users/mavenin in here?
[21:03:24] <disappeared> don’t know crap about mongo…is there a way i can export a valid json file using the mongoexport utility…i noticed the export is not valid
[21:09:00] <cheeser> not valid how?
[21:10:43] <disappeared> cheeser: it doesn’t wrap the entire result in an array and separate by comma
[21:10:47] <disappeared> into*
[21:11:12] <cheeser> oh, right. one document per line, yes?
[21:11:22] <disappeared> yes
[21:11:24] <roadrunneratwast> in a MEAN stack application, where is the best place to prepopulate the database? I am using meanjs
[21:11:42] <cheeser> disappeared: pass --jsonArray to mongoexport
[21:11:48] <cheeser> also, mongoexport --help
[21:11:52] <disappeared> cheeser: WHaaaaaat…i didn’t even see that
[21:11:55] <disappeared> thanks
[21:11:57] <cheeser> np
[21:12:14] <disappeared> haha here i am looking at unanswered stackoverflow threads
[21:12:16] <disappeared> with same issue
[21:14:36] <cheeser> url?
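The difference cheeser names at 21:11 is worth seeing side by side: mongoexport's default output is one JSON document per line (so the file as a whole is not a single valid JSON value), while --jsonArray wraps everything in brackets. A sketch parsing both shapes, with made-up example data:

```javascript
// Default mongoexport output: one document per line; parse line by line.
var lineDelimited = '{ "a" : 1 }\n{ "a" : 2 }\n';
var docs = lineDelimited.trim().split("\n").map(function (line) {
  return JSON.parse(line);
});

// With --jsonArray the whole file is a single parseable JSON array.
var asArray = '[ { "a" : 1 }, { "a" : 2 } ]';
var docs2 = JSON.parse(asArray);
```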
[21:19:02] <netameta_> Robomongo has an issue with updating records/collections it seems
[21:35:23] <neo_44> netameta_: Robo mongo? use the shell
[21:35:54] <netameta_> yea robomongo is a client
[21:38:13] <NyB__> Hi, is there a document or something that describes the new Java driver API?
[21:40:40] <cheeser> besides the javadoc?
[21:41:23] <NyB__> cheeser: yes, I was hoping for something that would point out the differences, esp. for those like me who have been using custom decoders etc.
[21:41:30] <cheeser> https://github.com/mongodb/mongo-java-driver/tree/3.0.x
[22:02:53] <hahuang61> hmmm i've got a 3 node replica set, 2 of the nodes are unhealthy/unreachable, but I can mongo to them... what can I do to fix this situation?
[22:04:28] <joannac> fix in what sense? have a primary again?
[22:06:45] <hahuang61> joannac: yes, and have the other 2 nodes be reachable in the replica set
[22:06:58] <netameta_> How can i refresh a schema ? i forgot adding unique index to one field - and now it wont let me. i assume there is a way to reset the schema ?
[22:07:16] <hahuang61> netameta_: there's not a schema.
[22:07:25] <netameta_> what you mean ?
[22:07:27] <joannac> hahuang61: you've connected to all 3 nodes successfully? and all3 say "I can't see the other 2"?
[22:07:42] <joannac> netameta_: sounds like you're asking about mongoose?
[22:08:06] <hahuang61> joannac: I can connect to all 3 from all 3, but the rs.status() from each of the 3 say something different. in two of them, the other 2 are unreachable, and in 1, only 1 is unreachable.
[22:08:12] <hahuang61> joannac: kind of a strange situation
[22:08:12] <netameta_> joannac: yea
[22:08:46] <joannac> netameta_: mongoosejs.com, under support "irc: #mongoosejs on freenode"
[22:09:15] <netameta_> will do thanks joannac
[22:10:53] <joannac> hahuang61: log into a server where it thinks 2 are unreachable. connect a mongo shell. type rs.conf() and note what the "host" says
[22:11:09] <joannac> exit the mongo shell, and then "mongo HOST"
[22:11:27] <joannac> where HOST is exactly what the entry in rs.conf() was
[22:12:14] <hahuang61> joannac: which host? There's a host for each of the machines
[22:12:42] <joannac> one of the ones that's not the one you're on :p
[22:13:15] <hahuang61> joannac: you want me to mongo --host `hostname` where hostname is what I got from the conf
[22:13:21] <joannac> yes
[22:13:31] <hahuang61> joannac: yeah, I did all of them from every node to every node
[22:13:35] <hahuang61> joannac: they are all working okay
[22:13:51] <joannac> but rs.status() is saying unreachable?
[22:14:21] <disappeared> I’m looking at a json export of a collection and i see these $date properties. What format are these in? they aren’t in unix time..but it is a long integer of some type?
[22:14:39] <joannac> frombably millis since epoch
[22:14:48] <hahuang61> joannac: yeah, rs.status says unreachable
[22:14:48] <joannac> fromably? probably.
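joannac's guess checks out for the long-integer form disappeared describes: in a mongoexport dump, a {"$date": <long>} value is milliseconds since the Unix epoch (other tool versions may emit an ISO string instead). A sketch, using a made-up value that lands on this log's date:

```javascript
// {"$date": <long>} in extended JSON: milliseconds since the Unix epoch.
// 1422403200000 is an invented example value: 2015-01-28T00:00:00Z.
var exported = { created: { "$date": 1422403200000 } };
var when = new Date(exported.created["$date"]);
```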
[22:14:58] <joannac> hahuang61: pastebin the logs somewhere
[22:15:36] <hahuang61> joannac: UGHHHH hostname doesn't match.
[22:15:51] <hahuang61> joannac: wtf, someone fucked with the config init script...?
[22:16:03] <joannac> ...so when I told you to type the hostname exactly, you didn't? :p
[22:16:43] <hahuang61> joannac: it did
[22:16:52] <hahuang61> joannac: it's talking about the mongod --replSet flag
[22:17:02] <hahuang61> joannac: the hostnames are still the same
[22:17:22] <joannac> oh okay. It's just that you said hostname didn't match
[22:17:37] <hahuang61> joannac: ah sorry, I meant replset name
[22:17:40] <hahuang61> my bad :p
[22:17:43] <joannac> okay, well I presume you know how to fix that
[22:18:24] <joannac> you'd be surprised how often in support, you tell someone to do something in quite clear terms, and they go and do some other thing :p
[22:18:58] <hahuang61> joannac: I believe it :) I've been on both sides of that :D, sorry I mistyped earlier
[22:19:16] <hahuang61> joannac: and yup, I think I'll just have to reapply puppet across these nodes, and restart the mongods
[22:19:29] <hahuang61> joannac: This happened cuz we were applying latest patches to ghost to those machines and rebooted them
[22:19:39] <hahuang61> something must have changed the init.d scripts
[22:19:54] <disappeared> joannac: thank you
[22:20:13] <hahuang61> joannac: thanks for helping me debug also :) really appreciate it
[22:20:18] <joannac> hahuang61: weird. something to investigate later perhaps, after the replset is back up
[22:20:26] <hahuang61> joannac: yeah, definitely
[22:20:29] <joannac> hahuang61: no probs, happy to help. glad it's sorted
[22:24:14] <calmbird> Hi! Do you know any good way to update all array object values? Because db.something.update({}, {$set:{myArray.$.value: 10}, {multiple:true}}) updates only first array element
[22:25:05] <neo_44> calmbird: you have to specify each element of the array
[22:25:13] <neo_44> the '$' says only the first
[22:25:26] <calmbird> any good way?
[22:25:40] <joannac> I will preface this by saying, if you find yourself doing this, your array elements should be documents in their own right
[22:25:43] <calmbird> like $all or something? can't find anything
[22:25:56] <neo_44> possible if you remove the $...if not you have to set for each element
[22:26:34] <calmbird> @joannac why? it's nosql, you can have array subdocuments right?
[22:27:17] <neo_44> calmbird: you should use nested sub documents...not arrays for this....probably
[22:27:25] <neo_44> depends on the access pattern and needs
[22:29:51] <joannac> calmbird: yes, but if you are selecting multiple items in an array, they are important enough to be documents in their own right
[22:30:41] <calmbird> I have user.competitionsData array, and inside every competition I have weekly and monthly
[22:30:53] <calmbird> cups collected
[22:31:20] <calmbird> so every week I have to set this weekly to 0 in all user.competitionsData
[22:32:09] <joannac> ?
[22:32:57] <calmbird> http://gyazo.com/1b536fa544efa5616749e1eb560a12f8
[22:34:37] <joannac> post on GG or SO and the community can design a better schema for you
[22:35:05] <calmbird> you mean create another collection?
[22:36:46] <calmbird> well don't have time for changing schema now, received this server after another dev, and can't find proper query to update all array elements in one query
[22:37:52] <calmbird> just found one answer that might work, use each on cursor to get every document, modify on server app, then update
[22:38:20] <calmbird> but i don't think its good for like 100k + records :P
[22:40:56] <calmbird> btw, mongo should have that feature, otherwise I would have to build relations on server side, its not good well thanx for help will try to figure it out
[22:49:15] <joannac> as a general comment, you're the second person who's come in saying "we can't change our schema".
[22:49:54] <joannac> if the schema doesn't work for your use case, you're just delaying the inevitable
[22:51:20] <calmbird> @joannac yes, because everything is working, but weekly monthly reset, I would have to change modules etc, It would take whole day, and I don't have that time. mby I will try what neo_44 said, to change from array to object
[22:52:43] <calmbird> @joannac or just use, update(... myArray.0.weekly: 0), update(... myArray.1.weekly: 0) etc I know its not good, but can't see any other ways
[22:53:29] <calmbird> well huge operation once per week, I hope mongo db will handle it :P
[22:54:35] <joannac> 100k records is not huge. index it
[22:55:13] <calmbird> or mby just find().each(function() { modify, update }) I don't know :P Will probably just run another process for that reset.
[22:55:48] <calmbird> @joannac ok I will index it, could you please tell me what you think is better: find().each() or update(... myArray.0.weekly: 0) way?
[22:56:01] <joannac> actually, depending on the ratio of find / modify, maybe don't index it
[22:57:10] <calmbird> well this operation will be done only once per week, so... I can sugest changing schema, but they want it working now :P small company you know, don't even have time to write good tests :P
[22:59:35] <calmbird> https://jira.mongodb.org/browse/SERVER-1243 there is ticket for that all documents in array, but low priority bleh ;)
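The client-side workaround calmbird settles on at 22:37 looks roughly like this, shown against plain JS objects rather than a live collection; the competitionsData/weekly names follow the screenshot discussion, not a known schema. (For the record, SERVER-1243 was eventually resolved: MongoDB 3.6 added the all-positional $[] operator, so the whole reset can be one server-side update.)

```javascript
// calmbird's cursor-iterate workaround: zero every element's weekly counter
// client-side. In the shell the loop would be
//   db.users.find().forEach(function (user) { /* mutate */ db.users.update(...) })
// Since 3.6: db.users.update({}, { $set: { "competitionsData.$[].weekly": 0 } }, { multi: true })
function resetWeekly(user) {
  user.competitionsData.forEach(function (c) {
    c.weekly = 0;
  });
  return user;
}
```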
[23:11:24] <hahuang61> joannac: finally all fixed. Somehow the puppet manifests regressed.
[23:16:28] <calmbird> joannac: thank you very much for your help, good night