PMXBOT Log file Viewer


#mongodb logs for Wednesday the 22nd of October, 2014

[01:56:55] <GothAlice> joannac: The reason I ask is that my old Okapi implementation stored the weighted, normalized keywords in a separate collection from the records themselves. Ranking happens on that separate collection, and moving the data in is disadvantageous.
[01:58:06] <GothAlice> Which sucks. T_T
[02:28:49] <GothAlice> My kingdom for an ORDER BY `id` IN […]. This Python generator is getting pretty funky.
[02:31:40] <speaker1234> is it safe to use _id as a universe-wide record identifier?
[02:32:45] <GothAlice> speaker1234: Yes. In fact, MongoDB assumes that it is, automatically providing a unique index across it.
[02:33:30] <GothAlice> speaker1234: ObjectIDs are very carefully designed to avoid the problem Twitter encountered when they became popular: how do you have multiple machines generate unique IDs without collisions? Twitter solved this by creating an entire service platform to do nothing but generate new auto-increment IDs.
[02:33:31] <speaker1234> so if a worker was to change state, all it needs is the _id field
[02:34:30] <GothAlice> MongoDB solved this by providing IDs that combine several different values that are guaranteed to be unique together. ObjectID combines a UNIX timestamp, a per-process auto-increment ID, and some form of host/process identifier.
[02:34:43] <GothAlice> speaker1234: So yes, all you need is the _id. ;)
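A minimal sketch of inspecting those components from Python, using the bson package that ships with PyMongo. The ObjectId is the one quoted later in this conversation; everything else is illustrative:

    from bson import ObjectId

    oid = ObjectId("5446e725124c580d8ee05d39")
    # The leading four bytes are a UNIX timestamp; bson exposes it directly.
    print(oid.generation_time)  # timezone-aware UTC datetime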
[02:35:03] <speaker1234> it's late. I am being really focused and ignoring the names the dishes are calling me
[02:35:34] <GothAlice> If we cared what dishes thought of us, we probably wouldn't have invented them. ;^P
[02:36:52] <speaker1234> in looking at the dump, I see the _id is a dictionary of 1 entry. Do I pass that in as an atomic object?
[02:37:23] <GothAlice> speaker1234: At what point are you seeing that result? After the aggregation?
[02:37:40] <GothAlice> (And what are you $group'ing on?)
[02:37:51] <speaker1234> { "_id" : { "$oid" : "5446e725124c580d8ee05d39" },
[02:38:04] <speaker1234> from mongo export
[02:38:11] <GothAlice> Right, that's a JSON dump.
[02:38:21] <GothAlice> Technically all you need from that is 5446e725124c580d8ee05d39.
[02:38:36] <speaker1234> match on _id
[02:38:49] <GothAlice> *nods*
[02:39:02] <GothAlice> 5446e725124c580d8ee05d39 is the hex representation of the ObjectID.
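The { "$oid": ... } wrapper seen in the export is MongoDB Extended JSON. As a sketch, the bson.json_util module that ships with PyMongo can turn such a dump line back into a native ObjectId:

    from bson.json_util import loads

    doc = loads('{ "_id" : { "$oid" : "5446e725124c580d8ee05d39" } }')
    print(doc["_id"])  # ObjectId('5446e725124c580d8ee05d39')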
[02:39:16] <speaker1234> k I got to get some sleep. chant later
[02:40:18] <speaker1234> I will spend the next two days justifying to one customer why ordinary IT best practices are not up for discussion and why all the efforts I've been making for the past six months have been headed in that direction
[02:40:35] <GothAlice> speaker1234: You generated that ObjectID on 2014-10-21 at 23:07:17 :)
[02:40:55] <GothAlice> speaker1234: Good luck with that.
[02:41:03] <GothAlice> It's an argument I have with my managers on a regular basis. ;)
[02:41:22] <speaker1234> it's crazy. I've been struggling for six months trying to get the server room switched into hot/cold aisles to gain a little bit more efficiency because they won't install any additional air-conditioning capacity
[02:41:36] <GothAlice> …
[02:41:39] <speaker1234> only now, I'm told that we don't need to bother because it's wintertime and the servers will not overheat
[02:41:45] <GothAlice> … … …
[02:41:56] <GothAlice> That's right. An ellipsis of ellipses.
[02:42:18] <GothAlice> It's going to require effort at *some point* to correct that…
[02:42:21] <speaker1234> their 10 Gb network is flaky because it was installed wrong. I get a vendor in, it would cost $10,000 to do it right, so no, instead we are getting Chinese-made Cat 6A cables and stringing them over the ceiling
[02:42:32] <GothAlice> T_T
[02:42:46] <GothAlice> Okay. I'm about to start phoning up Domokun for a hit job on some kittens.
[02:42:52] <speaker1234> if I didn't need the money I would've left a long time ago. I really need a new client
[02:43:09] <GothAlice> Being able to fire a client is a luxury most of us cannot afford, sadly.
[02:43:30] <speaker1234> I have a second client, the one I'm doing this work for which may be better but I really don't know. They're probably just crazy in a different way
[02:43:45] <speaker1234> on the plus side, the CTO is very understanding of what I'm fighting for, he has the same problems and is making the same amount of headway
[02:43:50] <GothAlice> All clients are crazy. Period. ;^)
[02:44:11] <speaker1234> that's why consultants charge so much: to put up with their insanity
[02:44:37] <GothAlice> 99% of my consulting work is convincing the client of the Right Solution™. There's usually an insane dichotomy between what they *want* and what they actually *need*—it's always an uphill battle.
[02:45:34] <GothAlice> (I.e. at work it's taken nearly a year to convince them that the scope of the current project was too small. At least now my managers are super-excited for what we will be able to offer the clients in the near future in accordance with my original vision, not theirs. ;)
[02:45:35] <speaker1234> I have been accused of trying to build a Michelin one star restaurant of an IT shop. In reality, I'm just trying to keep people from getting food poisoning
[02:45:46] <GothAlice> lol
[02:45:49] <GothAlice> That's a great way of putting it.
[02:46:01] <speaker1234> that's going up on Google plus
[02:46:07] <GothAlice> Damn straight.
[02:47:36] <speaker1234> okay, posted. :-)
[02:47:52] <GothAlice> #talesfromit
[02:48:23] <speaker1234> irc? twitter hash tag?
[02:48:29] <GothAlice> Hashtag.
[02:48:57] <GothAlice> Somewhat related to http://www.reddit.com/r/talesfromtechsupport ;)
[02:49:10] <speaker1234> added the hashtag
[02:50:01] <GothAlice> ^_^
[02:50:23] <GothAlice> This is what #mongodb (IRC channel) has become. Discussing social media postings about IT/consulting irks. ;)
[02:51:24] <speaker1234> it's late. I'm sharing divorce recovery stories with a friend by texting and the dishes are still calling me nasty names
[02:52:28] <speaker1234> then I will let you plug away. I want to get up early so I can get a head start on traffic and the excrement storm over the next three days (two days documenting, Friday morning final execution)
[02:52:43] <GothAlice> For some reason this week has gotten off to an insanely productive start with the release of the updated (and jumping from <100 to >600 tests) marrow.schema. Of course I've gotten to bed at 5am the last three days. T_T
[02:53:06] <speaker1234> if I can get this second customer up and running, then I'll have enough money coming in to tell them to go find a trained monkey
[02:53:49] <speaker1234> nite
[02:53:52] <GothAlice> Have a great one!
[03:20:54] <darkblue_b> hi all - is there a current FAQ for setting up mongodb for distributed data analysis ?
[03:30:29] <darkblue_b> I see this https://www.digitalocean.com/community/tutorials/how-to-set-up-a-scalable-mongodb-database
[03:31:37] <shoerain> GothAlice: late reply, I guess, I just wouldn't mind modelling my migration script after someone else's well designed one
[03:33:20] <GothAlice> shoerain: All migration scripts boil down to "execute this series of linear steps to upgrade" and "execute these other series of linear steps to downgrade". At work I don't even use a migration framework, they're just bare functions at the module level in Python. (Though I do have a package of nothing but migration modules with incrementing integer prefixes to track them.)
[03:34:06] <GothAlice> from m001_bootstrap import upgrade; upgrade() # in a production shell
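A minimal sketch of what such a migration module might look like. The module name comes from the line above; the database, collection, and field names are hypothetical:

    # m001_bootstrap.py -- bare functions at module level, no framework.
    from pymongo import MongoClient

    db = MongoClient()["myapp"]  # hypothetical database

    def upgrade():
        # Linear steps to move the schema forward.
        db.users.update({}, {"$set": {"active": True}}, multi=True)

    def downgrade():
        # Linear steps to roll it back.
        db.users.update({}, {"$unset": {"active": ""}}, multi=True)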
[03:34:20] <joannac> darkblue_b: I don't know what that means. you either want a replica set, or a sharded cluster
[03:34:32] <darkblue_b> sharded cluster
[03:34:51] <joannac> http://docs.mongodb.org/manual/tutorial/deploy-shard-cluster/
[03:34:55] <GothAlice> darkblue_b: sharded cluster, for sure. Sharding allows for easier parallelization of your queries.
[03:35:17] <darkblue_b> a client wants to throw genomic data at it and do cluster analysis.. I assume they have thought about it.. I am new to this but can admin
[03:36:49] <GothAlice> shoerain: And for the *vast majority of changes* MongoDB requires no migrations whatsoever. Obsoleted attributes eventually decay (if deletes are regular), new attributes automatically get assigned on next update, etc. (This just *forces* you to handle both foo: {exists: false} and foo: '' cases on a regular basis, for example.)
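A sketch of a query covering both of those cases, with the field name "foo" taken from the example above and the database/collection names hypothetical:

    from pymongo import MongoClient

    db = MongoClient()["myapp"]
    # Match documents where "foo" was never written OR was written empty.
    missing_or_empty = {"$or": [{"foo": {"$exists": False}}, {"foo": ""}]}
    for doc in db.things.find(missing_or_empty):
        print(doc["_id"])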
[03:37:35] <joannac> darkblue_b: "I assume they have thought about it" is a bad assumption to make
[03:37:40] <joannac> figure out what they want
[03:37:48] <darkblue_b> heh
[03:37:55] <GothAlice> darkblue_b: I never assume that, and have yet to be proven wrong with my particular set of clients.
[03:38:12] <GothAlice> ;^)
[03:45:42] <darkblue_b> .. and this https://bugsnag.com/blog/mongo-sharding
[04:01:51] <darkblue_b> if I make a master mongodb node using say Debian/Ubuntu.. then put names like sh1.myweb.com sh2.myweb.com sh3.myweb.com in the /etc/hosts file.. is that enough ?
[04:02:06] <darkblue_b> or do I have to have the shard nodes resolve from the outside
[04:07:19] <darkblue_b> biab
[04:29:03] <doug1> ?quit
[05:11:42] <darkblue_b> any thoughts on this ? http://rockmongo.com/
[05:12:05] <darkblue_b> dns server config for a collection of VMs ?
[05:12:43] <darkblue_b> NFS to mount disks to distribute data ?
[05:27:37] <Boomtime> darkblue_b: what do you mean by "NFS to mount disks to distribute data"
[05:28:11] <Boomtime> you can use replica-sets to distribute complete copies of data and achieve high-availability
[07:28:11] <josias> Hi, i have a problem with the mongodb-basic-php driver (http://docs.mongodb.org/ecosystem/drivers/php/) (System: windows7, WAMP with php 5.5.15). I did everything as written, but an error occurs at apache startup: PHP Warning: PHP Startup: in Unknown on line 0
[07:28:44] <josias> and: PHP Fatal error: Class 'MongoClient' not found in [...]
[07:38:41] <josias> no one here?
[07:41:46] <josias> is this a dead chat? with so many Zombies?
[08:04:45] <josias> \quit ################### this channel is dead #################
[08:04:49] <baconsau> hi, I'm trying to read the oplog.rs from another Repl and after that I apply those entries to the local Repl
[08:05:06] <baconsau> I do it with nodejs
[08:05:40] <baconsau> everything is ok except when the connection to the remote Repl is broken
[08:05:53] <baconsau> I must restart the code
[08:06:14] <baconsau> and I don't know how to find the last 'ts'
[08:06:34] <baconsau> I don't want to write it to a file or put it in another db
[08:06:49] <baconsau> can you give me a suggestion
[10:43:26] <ut2k3> hi guys our mongodb brings this error: Assertion: 10334:BSONObj size: -286331154 (0xEEEEEEEE) is invalid. Size must be between 0 and 16793600(16MB) First element: _id: ObjectId('5439ac05a0e4b6b32bb923c9')
[10:43:30] <ut2k3> how can we fix this?
[10:45:19] <ut2k3> should db.repairDatabase() solve the thing?
[12:07:15] <johey_> Is the following possible? I have documents containing a 'nodes' key with a list of values, for instance {nodes: ['a', 'b', 'c', 'd']}, {nodes: ['c', 'a']} and {nodes: ['b', 'd']}. Now I want to find all documents containing two given nodes, but only those where the nodes come in the same order as in the query. For instance ('a','c') would match the first document but not the second, as the second is in the wrong order, and ('b','d') would match the first and last documents
[12:37:39] <ut2k3> db.repairDatabase() fails with Assertion: 10334:BSONObj size: -286331154 (0xEEEEEEEE) is invalid. Size must be between 0 and 16793600(16MB) First element: _id:. Is there a way to fix it?
[13:49:07] <edrocks> is there any way to get a field from the item you remove with $pull?
[14:41:23] <izolate> how do you restart the mongod process?
[14:45:13] <jmfurlott> I am working with another developer, about to build a site using Mongo for data, and we were wondering about the best way to share it with each other whenever we pull/push code. Does anyone have any suggestions?
[14:45:34] <Forest> Hello. Can anyone tell me how I calculate the size of documents I want to insert in batch into mongodb? I am using Node js.
[14:46:45] <Forest> My problem is that I either create arrays that are too small, so the import process takes a lot of time, or batches that are too large, so only some of the documents actually get inserted.
[14:47:54] <izolate> everybody's too busy to help here it seems
[14:48:36] <Forest> izolate: do you know the answer?
[14:50:21] <izolate> Sorry, no. I have very little experience with the tool
[14:51:10] <izolate> you may be better off asking stackoverflow
[14:57:38] <GothAlice> izolate: Your question will depend on the platform and distribution. On most Linuxes, sudo /etc/init.d/mongodb restart
[15:00:50] <Forest> so no one actually knows how to calculate the size of that BSON?
[15:01:12] <GothAlice> Forest: http://bsonspec.org/implementations.html — grab a library, figure it out. ;)
[15:01:22] <Forest> this issue drives me crazy, MongoDB has a 16 MB limit and I just can't send an array that long because I don't know its size, it's ridiculous
[15:01:58] <Forest> GothAlice: trying that for an hour, can you help me please?
[15:02:07] <GothAlice> Forest: Search on this page for "Array": http://bsonspec.org/spec.html
[15:02:37] <GothAlice> Simple arrays are sent as mappings of incrementing integer indexes to the respective array elements.
[15:03:55] <Forest> GothAlice: jesus, I don't understand that. I have an element like {"id":item.id,"loc" : [item.lon,item.lat],tags:item.tags} where tags is another dictionary of key:value pairs
[15:05:55] <izolate> thanks GothAlice :)
[15:05:58] <GothAlice> item.id is an ObjectID?
[15:06:03] <GothAlice> Forest: I can work that out for you.
[15:06:05] <pithagora> guys, any idea why on debian 7 64 bit, even if i do aptitude install mongodb-org=2.6.1, i get 2.6.5 installed?
[15:06:27] <Forest> GothAlice: yes,it is
[15:06:49] <GothAlice> Thus the storage required for an array of two integer elements representing lat/long is: "\x04" + cstring() + int32(2) + "\x100\x00" + int32(lat) + "\x101\x00" + int32(long) + "\x00"
[15:07:11] <GothAlice> The cstring() there would be "loc" in your case.
[15:07:40] <GothAlice> That makes the grand total: 1 + 4 + 4 + 3 + 4 + 3 + 4 + 1 = 24 bytes for the lat/long by itself.
[15:08:01] <GothAlice> http://bsonspec.org/spec.html < you can do these calculations yourself.
[15:08:02] <GothAlice> :)
[15:10:07] <GothAlice> Basically [foo, bar] turns into {0: foo, 1: bar} when stored. BSON cheats. ;)
[15:11:30] <GothAlice> pithagora: That may be a question better asked in the debian chat; seems like confusion around package management.
[15:13:24] <Forest> GothAlice: I don't understand how you are counting it, can you explain again please? I still don't understand how I would calculate for tags, because it can be empty or contain a variable number of string:string pairs
[15:14:11] <GothAlice> Forest: Are you familiar with C structures at all? That'll determine the approach I take in breaking down what I previously wrote.
[15:16:17] <Forest> GothAlice: no, unfortunately I am not :(
[15:16:35] <GothAlice> Forest: Cool. Time to get the learning hats on. :) http://bsonspec.org/spec.html is what is referred to as BNF (Backus–Naur Form) notation. It describes the valid ways low-level chunks of data can be combined.
[15:17:17] <GothAlice> At the top it describes how large certain named types are: byte is one byte (obviously), int32 is 32 bits or 4 bytes, etc., etc.
[15:18:50] <GothAlice> The first thing in the BNF (document ::=) is the top level of any BSON "document". The first four bytes are a number (int32) telling you how long the whole BSON document is. (This is put up front so that networks can read four bytes quickly to figure out how much more data to expect.) Then there's an "e_list" of additional "element"s and a terminating null byte (\x00).
[15:19:18] <GothAlice> The smallest possible BSON document (an empty one) is then: int32(5) + "\x00" or five bytes.
[15:20:46] <Forest> GothAlice: can you help me get just the first four bytes? i am desperate
[15:20:54] <GothAlice> Forest: Language?
[15:21:06] <Forest> GothAlice: javascript
[15:23:03] <GothAlice> Balls. I'm not familiar with how to do binary encoded integer conversions in JS.
[15:23:13] <GothAlice> .substr() to get the first four bytes, though.
[15:23:42] <stefandxm> there are proper integer packages for javascript
[15:23:49] <stefandxm> i think it might be usefull
[15:24:46] <Forest> stefandxm: Can you help me determine the size of BSON object there? Me and my friend are brain-fried already. Please have mercy!
[15:25:08] <stefandxm> bson is a trivial format
[15:25:24] <stefandxm> i am sorry to say that if you cant figure that out you wont get far ;)
[15:26:35] <Forest> stefandxm: jesus, I need it for my bachelor thesis where I solve different kinds of things, and I just can't proceed if you don't help me. Even if I could calculate it manually, the number of key:tags in tags can vary, so it wouldn't help me much
[15:26:37] <stefandxm> but, maybe there is a js driver that can do this for you
[15:26:48] <stefandxm> what school?
[15:27:11] <stefandxm> i mean like really. how can you go for a bachelor and not know how to parse a file format / basic programming
[15:27:23] <stefandxm> sorry for being arrogant. but you really need to sit down and think this through ;)
[15:28:06] <Forest> stefandxm: why are you such an asshole and won't help me? I just don't get this, sorry :(
[15:28:17] <GothAlice> Forest: Because it varies, there is no way I can give you anything approaching a "correct" answer, or any kind of answer to the question "how big is this data".
[15:28:31] <GothAlice> Forest: Your question is fundamentally flawed.
[15:29:17] <GothAlice> However, I am in the process of figuring out how to *get* the answer in Python, which can be a start for using other BSON libraries to work out your answer.
[15:29:38] <stefandxm> the bson library in c++ is trivial
[15:29:49] <stefandxm> aswell
[15:30:04] <stefandxm> but if you dont have a library for bson you need to just think of it logically
[15:30:06] <stefandxm> its not too hard
[15:31:33] <GothAlice> Understanding BNF is critical when trying to examine low-level data structures. Almost everything uses this notation, even whole programming languages. (Python's syntax is defined in BNF, which it uses when compiling the interpreter.)
[15:35:41] <GothAlice> {'id': ObjectId('5447cc8d6f692b0b641ecd85'), 'loc': [42.7263, -12.2634], 'tags': ['foo', 'bar', 'baz']} -> 97 bytes.
[15:36:10] <GothAlice> import bson; print(len(bson.BSON.encode(bson.SON({'id': bson.ObjectId('5447cc8d6f692b0b641ecd85'), 'loc': [42.7263, -12.2634], 'tags': ['foo', 'bar', 'baz']}))))
[15:36:41] <GothAlice> That was harder than I expected. The bson lib in Python is incredibly non-Pythonic. XD
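Building on that one-liner, a sketch of the batching Forest was after: accumulate documents until the encoded size approaches the limit, then flush. The 15MB threshold is an assumption (a safety margin under the 16MB figure discussed above), and the function name is illustrative:

    import bson

    MAX_BATCH_BYTES = 15 * 1024 * 1024  # assumed margin under the 16MB cap

    def batched(docs):
        """Yield lists of documents whose total encoded size stays under the cap."""
        batch, size = [], 0
        for doc in docs:
            doc_size = len(bson.BSON.encode(doc))
            if batch and size + doc_size > MAX_BATCH_BYTES:
                yield batch
                batch, size = [], 0
            batch.append(doc)
            size += doc_size
        if batch:
            yield batch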
[15:37:03] <stefandxm> in c++ the bson library is rather ok
[15:37:08] <stefandxm> but documentation is a joke
[15:37:14] <stefandxm> also they have made "tutorial macros"
[15:37:17] <GothAlice> …
[15:37:28] <stefandxm> so there is a tutorial where they do Or(a,b) etc
[15:37:32] <Forest> GothAlice: I would be grateful if you sacrifice yourself and figure it out in node please .
[15:37:41] <stefandxm> but in reality they have just made just that Or(a,b,c,d,e,f,g,h)
[15:37:42] <stefandxm> no more
[15:37:42] <GothAlice> Forest: Alas, I'm not being paid to do that. ;P
[15:37:48] <stefandxm> and no And()
[15:37:50] <stefandxm> etc
[15:37:52] <stefandxm> all for the tutorial
[15:38:05] <stefandxm> so tutorial code compiles, but its not how you can use it :D
[15:38:14] <GothAlice> stefandxm: That's terrible.
[15:38:19] <stefandxm> yes it is
[15:38:26] <stefandxm> ive mailed mongodb about it but no response at all
[15:38:34] <stefandxm> they are very happy CALLING me tho
[15:38:37] <stefandxm> about everything else
[15:38:54] <stefandxm> but actually helping with that the c++ driver is insane is not on their agenda ;-)
[15:39:02] <stefandxm> better add transactions to mongodb instead
[15:39:12] <GothAlice> I think I've only ever interacted with MongoDB, Inc. folks at conferences and online. ^_^;
[15:39:38] <GothAlice> stefandxm: Wait, MongoDB doesn't have transactions?! /facetious ;^P
[15:44:28] <stefandxm> GothAlice: yeah. went to mongodb world in new york
[15:44:36] <stefandxm> GothAlice: after that I was a very interesting person to them
[15:44:36] <nand0p> hey yall
[15:44:45] <stefandxm> Goopyo: as long as i dont want help with difficult stuff ;-)
[15:44:58] <Goopyo> lol
[15:44:58] <darkblue_b> hi all - day2 of the mongo adventure.. got a turnkey linux and booted it to poke around
[15:45:03] <nand0p> anyone see this on mms agent ?
[15:45:04] <nand0p> [.doom] [cm/main/cm.go:main:312] [15:35:26.152] Repanicking goroutine panic: runtime error: index out of range
[15:50:03] <GothAlice> darkblue_b: Argh. I figured out why my exocortex was inaccessible yesterday, but completely forgot to fix it.
[15:50:11] <darkblue_b> heh
[15:50:21] <GothAlice> darkblue_b: No cluster automation script for you! ;P
[15:51:10] <GothAlice> Eh; I'll have lunch in an hour. I'll pop home and fix it then. *melts in the face of cuteness*
[15:54:10] <GothAlice> darkblue_b: (Came down to hostname mismatches screwing up rDNS-SD. My computer was "Lucifer (6)" for a while there! ;)
[15:54:31] <darkblue_b> ah - I was wondering about the DNS angle...
[15:54:41] <GothAlice> Yeah. DNS is the bane of my existence. ;)
[15:54:52] <darkblue_b> not shocking...
[15:55:13] <darkblue_b> I have managed to get by without my own DNS.. I suspect those days are numbered
[15:55:49] <GothAlice> I have to call one company to update our external DNS, and another company to update the internal DNS. My first deployment at work went great, except that nobody in-house could connect. XD
[15:56:16] <docdoak> How can i do self reference to a field? I need to a increase a specific value by 10%
[15:56:19] <darkblue_b> "networking is easy - it works every time.. except the first time"
[15:57:11] <docdoak> in other languages it would be salary *= 1.1
[15:57:16] <docdoak> is there an equivalent?
[15:57:51] <GothAlice> docdoak: http://docs.mongodb.org/manual/reference/operator/update/mul/#up._S_mul
[15:57:53] <GothAlice> Yes there is. :)
[15:58:03] <docdoak> fantastic GothAlice
[15:58:04] <docdoak> thanks
[15:58:09] <GothAlice> It never hurts to help.
[15:58:12] <docdoak> my google searches weren't getting me anywhere
[15:58:59] <docdoak> db.employees.update( {name: "Tonja Baldner"}, {$mul: {salary: 1.1}} ) does that look correct?
[15:59:22] <GothAlice> {salary: {$mul: 1.1}}
[15:59:26] <GothAlice> I suppose either might work.
[15:59:31] <docdoak> ah ok thanks
[15:59:39] <GothAlice> (I typically stick to field: {operator: value}.)
[15:59:55] <docdoak> yeah, the docs say the other way I think
[16:00:07] <docdoak> but I like yours better
[16:00:58] <GothAlice> Hmm; seems like I've been bitten by my abstractions again. {operator: {field: value, …}, …} is the correct form.
[16:01:55] <GothAlice> Yup; the abstraction I'm using pivots the update actions.
[16:03:09] <GothAlice> docdoak: ^^
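For reference, a sketch of the corrected form from Python; the collection, name, and field come from docdoak's example above, while the database name is hypothetical:

    from pymongo import MongoClient

    db = MongoClient()["test"]
    # {$mul: {field: factor}} -- the {operator: {field: value}} shape.
    db.employees.update({"name": "Tonja Baldner"}, {"$mul": {"salary": 1.1}})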
[16:13:07] <circ-user-9MhZT> hi circ-user-54jZm
[16:22:15] <doug1> How do I remove a shard?
[16:22:26] <doug1> Or, do I have to build the entire cluster from scratch for the Nth time?
[16:23:45] <GothAlice> AFAIK you demote the shard, let it rebalance, then remove it. http://docs.mongodb.org/manual/tutorial/remove-shards-from-cluster/
[16:23:54] <GothAlice> doug1: ^
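A sketch of that drain-then-remove flow driven from Python against a mongos; the host and shard name here are hypothetical:

    from pymongo import MongoClient

    mongos = MongoClient("mongos.example.com", 27017)
    # Issuing removeShard starts draining; re-issuing the same command
    # polls progress until "state" reports "completed".
    result = mongos.admin.command("removeShard", "shard0000")
    print(result["state"], result.get("remaining"))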
[16:24:33] <doug1> GothAlice: That's the doc I was following. I get "{ "ok" : 0, "errmsg" : "Can't remove last shard" }"
[16:24:43] <GothAlice> … if it's your last shard, no, you can't remove it.
[16:24:49] <doug1> seriously?
[16:25:00] <doug1> so i have to blow the whole cluster away and start from scratch?
[16:25:22] <GothAlice> Why are you even trying to do that? If there are no other shards, you're effectively running non-sharded, and can build up from there.
[16:25:47] <doug1> Because I need to get back to a clean state, so I can test the automation of adding shards
[16:26:44] <GothAlice> Wanting "clean slate" and not wanting to "build the entire cluster from scratch" are mutually exclusive.
[16:26:59] <doug1> If there's a shard there, it's not clean
[16:27:06] <GothAlice> No.
[16:27:10] <doug1> it started without a shard
[16:27:18] <doug1> therefore, no shard == clean
[16:27:28] <GothAlice> No. No shard = dead, waiting for the first shard.
[16:27:42] <doug1> Ok, that's fine. How do I get back to dead state then?
[16:27:53] <GothAlice> Trying to preserve the mongos setup while removing all mongod instances backing it is not how you produce a clean slate. You nuke the cluster and start again.
[16:28:05] <doug1> GothAlice: That's insane
[16:28:14] <GothAlice> doug1: Nuking it from orbit is the only way to be sure.
[16:28:55] <doug1> No wonder I've been at this for 4 months. I should email my boss and tell him the only way to reset my test is to rebuild all 10 instances from scratch in an attempt to explain why this is taking so long
[16:29:05] <GothAlice> And with proper automation, it's not insane at all. Spinning up a new cluster for me takes about 30 seconds to reach an operational state. (Admittedly my OS boot times to all-services-running is about two seconds…)
[16:29:26] <Dioxy> Hi all
[16:29:30] <GothAlice> Dioxy: Howdy!
[16:29:37] <Dioxy> How's it going?
[16:29:44] <doug1> if you're using gold images, it may take 30s, but you've just moved the effort into the image maintenance
[16:29:54] <GothAlice> doug1: You don't have to nuke the VMs, just the data stores for the mongo[s/d] processes. You can fire them right back up afterwards.
[16:30:16] <doug1> GothAlice: You mean blow away /var/lib/mongodb/* ?
[16:30:51] <doug1> Where tho? My data store on the config server is empty
[16:31:04] <Dioxy> So DBs in Mongo are basically flat JSON files, with MONGO handling functions to parse the JSON?
[16:31:29] <Dioxy> I'm coming from an SQL background
[16:31:42] <doug1> Or, do I have to blow away /var/lib/mongodb/* on all 6 data nodes ?
[16:32:05] <GothAlice> Dioxy: It's a bit more complicated than that. Flat is not a word I'd use to describe Mongo—it's far more appropriate to the "spreadsheet" style of SQL. Mongo also uses BSON, a binary form of JSON that avoids much of the JavaScript legacy (like 53-bit integer accuracy).
[16:32:48] <GothAlice> doug1: The error you gave says all of those other nodes (except one) have already been removed. Nuking those data directories resets MongoDB to a "clean slate".
[16:33:07] <GothAlice> doug1: Just make sure you have backups of your data. ;)
[16:33:46] <GothAlice> doug1: Also, the config server must be storing its data somewhere. Check the mongos config file and command-line arguments to determine the data directory.
[16:35:17] <GothAlice> Dioxy: MongoDB also provides extremely rich querying of nested structures, and the ability to manipulate nested data, too. (I.e. you can atomically append to a list, retrieve only a range of elements or elements that match a query from a list, etc.)
[16:39:20] <Dioxy> GothAlice - I assume you build query strings like any other DB, fire the command at Mongo, iterate the results, then write back to the DB?
[16:40:13] <GothAlice> Dioxy: You build query mappings / dictionaries. I.e. {age: {$gt: 18, $lt: 40}, occupation: {$in: ['IT', 'Accounting']}}
[16:40:57] <GothAlice> Dioxy: And like SQL databases, your MongoDB client driver will expose a cursor to you, which you can limit/skip/iterate/etc. on.
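A sketch of both points together in Python, reusing the query mapping from above; the database and collection names are hypothetical:

    from pymongo import MongoClient

    db = MongoClient()["example"]
    query = {"age": {"$gt": 18, "$lt": 40},
             "occupation": {"$in": ["IT", "Accounting"]}}
    # find() returns a cursor; skip/limit/sort chain onto it lazily.
    for person in db.people.find(query).skip(10).limit(5):
        print(person["age"], person["occupation"])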
[16:41:24] <Dioxy> GothAlice Interesting
[16:41:28] <GothAlice> Dioxy: http://docs.mongodb.org/manual/reference/sql-comparison/
[16:42:47] <GothAlice> Dioxy: MongoDB also includes map/reduce support and something called "aggregation pipelines". These let you build some pretty wild queries that can be easily parallelized across a cluster.
[16:43:53] <darkblue_b> hmm
[16:43:56] <GothAlice> Dioxy: https://gist.github.com/amcgregor/7bb6f20d2b454753f4f7 is a comparison between two approaches to generate the same results. (One aggregate, one map/reduce.)
[16:44:45] <GothAlice> (ignore the "ohgods.py" file on that; we abstracted aggregate queries for storage within MongoDB here at work. ;)
[16:46:30] <docdoak> I want to copy one "row" from a database into another database (removing the initial)
[16:46:34] <docdoak> how would I do that?
[16:47:04] <GothAlice> You would find() the original record, insert() it into the other database, check for success (people often forget to do this ;), then remove the original record.
[16:47:26] <docdoak> but can't I do that in one step? db.pastemployees.insert( {db.employees.find( {name: "Raoul Dewan"} ) } ) isn't working for me
[16:47:51] <docdoak> brand new to mongodb obviously
[16:48:02] <GothAlice> db.employees.find() will return a cursor, not the first object. You want findOne in that instance, I believe. (And you don't need to wrap it in {}… the result will already be a dictionary.)
[16:48:46] <GothAlice> You'll still want to check for success, there. And using a temporary variable will allow you to catch other errors more easily (and understandably), such as the original record not existing.
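A sketch of that find/insert/check/remove sequence in Python, using the collection and record names from the conversation; the database name is hypothetical:

    from pymongo import MongoClient

    db = MongoClient()["test"]

    doc = db.employees.find_one({"name": "Raoul Dewan"})
    if doc is None:
        raise LookupError("original record not found")

    db.pastemployees.insert(doc)  # raises on failure with acknowledged writes
    db.employees.remove({"_id": doc["_id"]})  # only reached if the insert succeeded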
[16:49:31] <docdoak> thanks
[16:49:35] <GothAlice> No worries. :)
[16:50:24] <GothAlice> docdoak: https://github.com/marrow/marrow.github.com/wiki/Zen#the-zen-of-python
[16:50:53] <docdoak> heh, nice
[16:51:30] <Dioxy> GothAlice is there a PDF of the manual?
[16:51:59] <GothAlice> Dioxy: Click the "options" bar in the bottom left, then click PDF. :)
[16:53:01] <Dioxy> GothAlice Thanks :)
[16:53:21] <doug1> What does this mean? Could not connect to database: 'localhost:27017', reason Failed to connect to a master node at localhost:27017
[16:54:12] <ejb> A $geoNear aggregate returns a doc with 'result' (array of docs) and 'ok' attributes. I want to $project 'result' and do further operations but the projection only includes _ids. What gives?
[16:54:42] <GothAlice> doug1: Sounds like you're connecting to a disconnected slave. (I.e. the slave can't find its master and is unwilling or unable to promote itself to a new master.)
[16:55:10] <GothAlice> ejb: You'll need to include the fields you wish to preserve in each $project operation. {fieldname: 1} is enough to say "hey, keep that!"
[16:55:11] <ejb> I've also tried to unwind result
[16:55:40] <ejb> GothAlice: yeah, so shouldn't {$project: {result: 1}} include everything in the array?
[16:55:44] <GothAlice> ejb: (excluding _id, which is kept by default) The result of an aggregate (when not $out'ed to a collection) is to return a single record with a nested "results" list of the actual output of the aggregate.
[16:56:07] <GothAlice> (So basically you can ignore the top level of the returned document; MongoDB creates that automatically and it's not accessible from within the pipeline.)
[16:56:08] <tehpwnz> if i do a project that uses mongodb as the db, do i have to draw things such as DFD's and ERD's?
[16:56:18] <ejb> GothAlice: oh, ok. So I shouldn't try to unwind result
[16:56:31] <GothAlice> ejb: Nope; "results" is a lie. Much like the cake.
[16:56:36] <ejb> heh
[16:56:38] <ejb> ten four
[16:56:46] <ejb> << noob
[16:57:06] <GothAlice> tehpwnz: I'm not sure what you mean; I haven't *had* to "draw" anything in six years. ;)
[16:57:33] <ejb> GothAlice: can I pass the result of $map to $sum or $add?
[16:57:57] <tehpwnz> GothAlice: i mean, I have to write documentation and the like. I'm a college student. Aren't DFDs and ERDs for schema-based DBs?
[16:59:14] <GothAlice> ejb: $map returns the results. I.e. {$project: {foo: {$map: {...}}}} will result in a new field named "foo" being added to each record.
[16:59:54] <ejb> GothAlice: Yeah, I want to project something like { $subtract: [ { $add: tagScoresFromMapOp }, distanceScoreFromGeoNear ] }
[16:59:59] <GothAlice> tehpwnz: Ah, I haven't been in uni even longer than that. They may be, but I'm not familiar with those acronyms. For the most part I use schema abstractions on top of MongoDB that can produce diagrams for me automatically. ;)
[17:01:38] <GothAlice> ejb: You need to $unwind on "tagScoresFromMapOp" (whatever you assign that field to) then $group and $sum them, not try to abuse $add (which won't work the way you've got it).
[17:02:00] <GothAlice> $group on _id, $sum as part of the aggregate.
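A sketch of that pipeline fragment in Python; the field name comes from ejb's description above, while the database and collection names are hypothetical:

    from pymongo import MongoClient

    db = MongoClient()["geo"]
    pipeline = [
        {"$unwind": "$tagScoresFromMapOp"},  # one document per array element
        {"$group": {
            "_id": "$_id",  # regroup by the original document
            "tagScore": {"$sum": "$tagScoresFromMapOp"},
        }},
    ]
    results = db.places.aggregate(pipeline)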
[17:02:40] <GothAlice> tehpwnz: like http://f.cl.ly/items/0Y422V0v281b0w0v0I2q/model.pdf (cower in fear at that model)
[17:02:42] <Dioxy> Thanks GothAlice
[17:02:44] <Dioxy> T'ra
[17:03:09] <ejb> GothAlice: is there a way to pass all fields through project and group?
[17:03:18] <ejb> GothAlice: wait, nvm nvm
[17:03:50] <GothAlice> ejb: ^_^
[17:04:14] <tehpwnz> GothAlice: :O
[17:04:17] <GothAlice> (You almost never actually want to do that for performance reasons.)
[17:05:02] <GothAlice> tehpwnz: It gets worse. http://f.cl.ly/items/003g0F212R2t3N1p3D1x/match-new.pdf is the call graph processing data on that model.
[17:05:20] <GothAlice> tehpwnz: "it's wider than my mom"
[17:06:17] <docdoak> 2014-10-22T13:04:23.116-0400 findAndModifyFailed failed: { "ok" : 0, "errmsg" : "remove and returnNew can't co-exist" } at src/mongo/shell/collection.js:614
[17:06:21] <docdoak> anything I can do about that?
[17:06:29] <docdoak> I wanted to update it before i returned it
[17:06:31] <docdoak> but also remove it
[17:06:46] <docdoak> seems like ill just have to remove it after, right?
[17:06:51] <GothAlice> Very much so.
[17:07:16] <GothAlice> I'm not actually quite sure how you're getting that error message. What's the line you were trying to execute?
[17:08:48] <ejb> GothAlice: I have a dict of tag -> weight. How do I assign a variable inside of a $let vars block to that weight? vars: { tagWeight: '$$tagWeights[$$tag]' } ?
[17:09:07] <ut2k3_> Hi guys. How is it possible to select and insert documents from a collection of DB1 to another Collection Named DB2?
[17:09:32] <GothAlice> ejb: {vars: {weights: yourWeightsDict}}
[17:09:49] <GothAlice> ejb: You don't need anything fancy to pass in a dictionary mapping strings to integers. ;)
[17:10:00] <docdoak> GothAlice: <ut2k3_> Hi guys. How is it possible to select and insert documents from a collection of DB1 to another Collection Named DB
[17:10:02] <ejb> GothAlice: kk, thanks
[17:10:03] <docdoak> whoops
[17:10:22] <kali> ut2k3_: db.db1.find(...).forEach(function(doc) {db.db2.save(doc)})
[17:10:23] <GothAlice> ut2k3_: Open two connections, find() on a collection from one, insert() on a collection in the other.
[17:10:28] <docdoak> db.pastemployees.insert( db.employees.findAndModify( { query: {name: "Raoul Dewan"} , update: { $set: {departyear: 2014}}, new: true, remove: true} ) )
[17:10:33] <GothAlice> kali: Or that from the shell. ;)
[17:10:45] <GothAlice> docdoak: wut
[17:11:12] <docdoak> haha I was trying to remove it and update it in one line
[17:11:18] <docdoak> because I wanted to return it
[17:11:27] <docdoak> the insert statement isnt there, but thats part of it too
[17:11:39] <GothAlice> docdoak: Yeah. Stop that. ;)
[17:12:08] <docdoak> haha ok
[17:12:13] <docdoak> figured I'd give it a whirl
[17:12:14] <GothAlice> Or, y'know, you could use a temporary variable, no need to update and create a new record in the old collection…
[17:12:33] <ut2k3_> kali: So this copies only inside a single database? I need to copy from db1.collection_x to db2.collection_z? Is that possible?
[17:12:35] <GothAlice> foo = db.employees.findOne({…}) ; foo.departyear = 2014; db.pastemployees.insert(foo)
[17:13:14] <GothAlice> docdoak: From the Zen: simple is better than complex, sparse is better than dense, and readability counts. ;)
[17:13:24] <kali> ut2k3_: yes. it can even work cross-server actually (with a little bit more work)
[17:13:40] <kali> ut2k3_: just use db.getSiblingDB["otherdb"]
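From a driver the same copy needs no shell helpers at all; a sketch in Python, with the database and collection names taken from ut2k3_'s question:

    from pymongo import MongoClient

    client = MongoClient()
    src = client["db1"]["collection_x"]
    dst = client["db2"]["collection_z"]
    for doc in src.find():
        dst.insert(doc)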
[17:13:54] <ut2k3_> thank you
[17:14:07] <ejb> GothAlice: I'm getting: invalid operator $let. Is it a fairly new op?
[17:15:07] <GothAlice> ejb: Are you trying to use it as a top-level aggregate stage, or as part of a $project?
[17:15:17] <ejb> GothAlice: part of project
[17:15:54] <GothAlice> Huh; http://docs.mongodb.org/v2.6/reference/operator/aggregation/let/ doesn't actually mention the version it was added.
[17:17:50] <docdoak> it was multi line and commented, so i didnt think it was too complex
[17:18:03] <docdoak> but I take your point
[17:19:09] <docdoak> One more question, is there a good way to comment a text file of mongo commands? I notice the hash tag doesnt work
[17:19:30] <docdoak> I saw that if you wanted to comment really complex things you can use $comment, but I'd love to just add a word or two
[17:19:36] <docdoak> but still have the code executable
[17:22:27] <kali> docdoak: you need to rely on the language you're using to write the commands
[17:22:51] <docdoak> well I was just hoping to make it easy for my prof to copy and paste it
[17:23:09] <docdoak> hrm
[17:23:35] <docdoak> oh, so mongo uses javascript as default
[17:23:38] <docdoak> which is //, right?
[17:23:45] <kali> docdoak: if we assume you're using javascript intended to be copy/pasted in the shell, then /* */ or // will work
[17:23:56] <docdoak> thanks for the insight kali
[17:24:00] <docdoak> I didn't even think about it like that
[17:24:29] <ut2k3_> kali: The problem is => TypeError: Cannot read property 'log' of undefined http://nopaste.info/d7055ef423.html
[17:26:17] <kali> getSisterDB
[17:26:28] <kali> ut2k3_: no longer getSiblingDB
[17:26:41] <kali> ut2k3_: and it's () not []
[17:26:49] <ut2k3_> ok thanks
[17:26:54] <kali> ut2k3_: db.help() will help
[17:29:49] <doug1> So... where does one add the admin user? config server? data node? router? where?
[17:32:06] <ut2k3_> thank you kali it works! i appreciate your help
[17:33:33] <GothAlice> doug1: Config server and the primary, AFAIK. Primary will replicate it to the secondaries but the config server doesn't get synchronized updates of credentials.
[17:42:27] <doug1> GothAlice: Oh. Someone... Joanna last night said mongos and mongod.
[17:42:52] <doug1> Is this documented somewhere?
[17:43:43] <GothAlice> doug1: mongos would be the "config server" and mongod would be the "primary" in my own description. ;)
[17:44:11] <GothAlice> doug1: http://docs.mongodb.org/manual/tutorial/enable-authentication-in-sharded-cluster/
[17:44:36] <GothAlice> (I assume replication, thus the 'primary' bit.)
[17:45:04] <doug1> oh for the love of god. trying to add admin user "not master"
[17:45:35] <GothAlice> doug1: I'm suggesting adding the admin user to the "master" (primary) of the replica set. God doesn't need to enter into it.
[17:45:54] <doug1> GothAlice: well that doc says "On each mongos and mongod " ... but that failed with 'not master' when I tried to add the admin user on an arbitrary mongod
[17:46:24] <GothAlice> doug1: Yes, because you *also* have replication. You only need to add the user to the *primary* mongod instance, not the replication secondaries/slaves.
[17:46:28] <doug1> There is no master yet. I need to add the admin user before configuring the replicaset, so that automation, which authenticates as the admin user CAN set up the replicaset
[17:46:42] <doug1> GothAlice: there are no secondaries yet. There isn't even a master
[17:47:17] <mike_edmr> GothAlice: do you think there's any credibility to a higher default write concern or changes in write concern behavior from 2.4->2.6 resulting in not just slower query performance, but much lower concurrency / higher write lock percentage ?
[17:47:30] <GothAlice> doug1: The error message you are receiving indicates that the mongod process you were trying to use *thinks* it's configured in a replica set, but hasn't successfully found its master yet.
[17:47:42] <doug1> let me check then
[17:48:00] <mike_edmr> it doesnt seem to follow that changing the write concern would make locks be held for longer.. but maybe i am misunderstanding their relationship
[17:48:28] <GothAlice> mike_edmr: It'd certainly result in an overall decrease in performance and throughput, and depending on the speed of your disks could affect write lock percentage.
[17:49:00] <mike_edmr> is the write lock held until the write concern is met?
[17:49:18] <mike_edmr> as opposed to being held for the same time, regardless of write concern?
[17:49:53] <GothAlice> I.e. instead of inserting a record in memory and marking the page as dirty (for eventual reclamation to disk, with a write lock surrounding that periodic task) the higher default write concern may be forcing an immediate sync() to disk of dirty pages, which can be slow.
[17:50:01] <doug1> GothAlice: Logged into all 3 data nodes. None have a replicaset configured.
[17:50:27] <doug1> So it appears that you can't add the admin user until you have master. This isn't documented as far as I can see
[17:50:47] <GothAlice> doug1: Are you able to 'mongo test' and 'db.foo.insert({})' successfully on those servers?
[17:51:01] <GothAlice> (When there's a problem, it's a good idea to simplify down to the bare essentials to help identify what's really going on.)
[17:51:13] <doug1> GothAlice: I'd rather not.... because then I'll have to blow away and start from scratch again, won't I?
[17:51:29] <GothAlice> doug1: No, not really. The 'test' database is meant for things like this.
[17:51:33] <mike_edmr> GothAlice: would that be true going from {w: 1} to {w: majority}, or does it strictly pertain to the fsync/journal options
[17:51:36] <doug1> ok hang on
[17:51:43] <mike_edmr> 'majority'
[17:51:44] <GothAlice> doug1: If you care, you can db.dropDatabase() when you're done. ;)
[17:52:23] <doug1> The insert gets me "WriteResult({ "writeError" : { "code" : undefined, "errmsg" : "not master" } })"
[17:52:51] <GothAlice> mike_edmr: I believe write lock is about fsync/journalling. Upgrading to replication concerns (i.e. majority) would have a substantial impact on performance (network roundtrip latency and waiting on remote fsync()), but not locking.
[17:53:23] <GothAlice> doug1: I now suspect you have supplied command-line or configuration file options to your mongod processes that are convincing them they are in a replica set. --keyFile?
[17:53:28] <mike_edmr> GothAlice: thanks, that's helpful. I need to read more about the behavior around writing to disk.
[17:53:52] <doug1> GothAlice: if I connect to the console of them and do "rs.status()" I get nothing
[17:54:18] <GothAlice> doug1: The set might not be configured, but mongod knows you want it. Could you pastebin your mongod.conf and mongod command-line?
[17:55:08] <doug1> GothAlice: http://pastebin.com/QTtT2Hzn
[17:55:26] <GothAlice> Yup, keyFile in the config.
[17:55:32] <GothAlice> It knows you're trying to set up a replica set.
[17:55:43] <GothAlice> Also replSet
[17:55:47] <doug1> of course...
[17:56:07] <doug1> but an rs.status() returns nothing
[17:56:17] <doug1> bottom line is... how do I add the &^%%^*()) admin user?
[17:58:24] <GothAlice> Either add the user to whichever server will become the primary before adding any keyFile/replSet options (i.e. in standalone mode) or enable replication *first*, then add the admin user to the primary.
[17:58:45] <doug1> oh good grief
[17:59:13] <GothAlice> You can automate this by spinning up mongod manually (i.e. with a bootstrap config file that omits the bad rules), issuing the appropriate commands to populate /var/lib/mongodb, then starting up the real system-wide daemon after shutting your bootstrap one down.
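A sketch of the user-creation step of that bootstrap, via the localhost exception; the credentials are placeholders:

    from pymongo import MongoClient

    # Connect to the bootstrap mongod (started without replSet/keyFile).
    client = MongoClient("localhost", 27017)
    client["admin"].command(
        "createUser", "admin",
        pwd="change-me",  # placeholder credentials
        roles=[{"role": "root", "db": "admin"}],
    )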
[18:02:28] <doug1> I can't fathom how people got the chef cookbook to work. Even the folks from mongo who are around the corner came out and said 'oh yeah we know people that are using it'. FFS How?
[18:02:44] <Thomas__> Hi! Is it possible to rename a database by just renaming all files and the folder (directoryperdb is on)?
[18:02:44] <GothAlice> I can't fathom people using chef. ;)
[18:02:59] <doug1> I can't fathom deploying config by hand
[18:03:52] <GothAlice> Thomas__: No. Don't do that. Use db.copyDatabase('old', 'new') followed by use old_database; db.dropDatabase()
[18:04:48] <GothAlice> doug1: I use templates and my automation is predominantly BASH scripts executed as Git hooks or RPC.
[18:05:12] <doug1> GothAlice: I have dozens of categories of servers to look after, not just mongo
[18:05:38] <Thomas__> GothAlice: so its not possible? Because the database size is about 500gb?
[18:06:08] <Thomas__> It also contains some corruption that cannot be fixed by db.repairDatabase()
[18:06:09] <GothAlice> Thomas__: There would very likely be hanging internal references to the old name contained within the files.
[18:06:09] <doug1> GothAlice: As far as your automation suggestion, I'll defer because I'm bound to hit some other corner case
[18:07:18] <doug1> lunch and valium
[18:07:40] <Thomas__> hm damn, the problem is db.repairDatabase is not working and mongodump dies after a certain amount of records
[18:07:57] <GothAlice> Thomas__: Have you tried spinning up mongod with --repair set?
[18:08:23] <GothAlice> I.e. "offline repair" mode?
[18:08:50] <Thomas__> GothAlice: --repair means that mongodb tries to repair _all_ db right? The Problem is that there is another (alright) Database containing about 2TB of data
[18:08:52] <doug1> hang on hang on.... why can't I add the admin user to mongos? isn't that the point?
[18:09:54] <doug1> I would tend to think that not being able to add users via the router would be a serious architectural flaw
[18:10:08] <GothAlice> Thomas__: In situations like that I rsync the data to a staging machine, shut down, re-rsync (to catch up minor differences more quickly), start it back up, then work in staging to not disrupt other things. Then you'd be able to delete the other databases and --repair just the one you want.
[18:10:20] <doug1> even the docs say "While connected to a mongos, add t"
[18:10:32] <doug1> so... I will try and add to the mongos...
[18:10:34] <GothAlice> doug1: You misunderstand how the router routes things.
[18:10:44] <Thomas__> Ok
[18:10:45] <doug1> which I am sure will fail for yet to be determined reason
[18:10:52] <doug1> GothAlice: That's what the docs say
[18:10:54] <GothAlice> doug1: If you `mongo` into the router, the system collections are router-local, not part of the rest of the cluster.
[18:11:04] <doug1> "While connected to a mongos, add the first administrative user and then add subsequent users. See Create a User Administrator."
[18:11:29] <doug1> seems pretty clear
[18:11:47] <doug1> I think when i tried that yesterday it complained I didn't have a database yet
[18:11:52] <GothAlice> DB-local users will propagate, yes. System-level will not.
[18:12:05] <doug1> That's not what the docs say
[18:13:23] <doug1> biab
[18:17:58] <GothAlice> doug1: http://www.irclogger.com/.mongodb/2014-10-21#1413934651 — joannac was helpful yesterday, and the answer hasn't changed since then.
[18:18:08] <GothAlice> (Yay for IRC loggers reducing duplication of effort. ;)
[18:33:40] <doug1> ok, how do I make a given node in a replicaset primary?
[18:33:48] <doug1> s/make/pin|force/
[18:35:13] <mike_edmr> doug1: idk but i'm not sure you should need to
[18:35:27] <mike_edmr> let it elect a primary
[18:35:29] <doug1> mike_edmr: well, i need to add the admin user to the primary only apparently.
[18:35:48] <mike_edmr> rs.status() to find the primary
[18:36:00] <doug1> mike_edmr: i need to pin it before hand
[18:50:18] <doug1> How do I add the admin user to the mongos if the auth option isn't allowed in the config file?
[18:50:50] <GothAlice> doug1: http://www.irclogger.com/.mongodb/2014-10-21#1413933356
[18:52:54] <doug1> Isn't that just used for internal com between nodes?
[18:53:03] <doug1> How's that relate to the admin user?
[18:53:12] <doug1> I thought the localhost exception had nothing to do with that?
[18:53:29] <GothAlice> doug1: It relates to authentication. It's how authenticated mongos/mongod servers securely communicate, and enabling it enables auth. (That's what it's for.)
[18:53:45] <doug1> so it's not related at all to the admin user?
[18:53:45] <GothAlice> The localhost exception is to allow you to connect w/o authentication if you are a server-local user. (I.e. have SSH access.)
[18:54:06] <GothAlice> User? No. Auth? Yes.
[18:55:10] <doug1> I'd like to know why the cookbook automatically adds the auth option to the config file then when you set the keyfile.
[18:55:47] <GothAlice> Because it's broken.
[18:55:56] <doug1> Sound plausible
[18:55:59] <GothAlice> ;)
[18:56:11] <doug1> I'd like to know why the mongodb folks recommended it then
[18:57:00] <GothAlice> Oh, because you're looking at a mongod config, not a mongos one. (Unless you can point me at the URL giving bad info.)
[18:57:34] <doug1> The cookbook does the same on both
[18:58:24] <GothAlice> Then that sounds like a bug in the docs, right there.
[18:58:43] <doug1> Docs are terrible
[18:59:48] <GothAlice> Well, no. These docs are actually remarkably good. uWSGI's docs are terrible.
[19:00:03] <doug1> I'm talking about the chef cookbook docs
[19:00:11] <GothAlice> Ah, oh yeah. Chef.
[19:01:03] <GothAlice> http://uwsgi-docs.readthedocs.org/en/latest/Zerg.html < for multiple years they never documented what --zerg did. --help would list it as a valid option, but it would omit a description for it.
[19:02:01] <doug1> heh
[19:02:11] <doug1> sound like a prank..
[19:02:14] <GothAlice> (Turns out to be extremely, extremely useful, actually.)
[19:02:43] <doug1> used to be if you booted solaris and had no network, system wouldn't boot. Was like that for a long time too
[19:03:12] <GothAlice> At least you don't have to enter time on CMOS-cleared machines in microfortnights any more. ;)
[19:04:01] <GothAlice> (https://en.wikipedia.org/wiki/FFF_system#Notable_multiples_and_derived_units)
[19:16:06] <axitkhurana> My mongodb log files are about 5GB for a day, but most of it is ^@^@^@^@^@^@^@^@^@^ characters, what can the issue be?
[19:16:28] <GothAlice> axitkhurana: Those are null characters.
[19:16:46] <GothAlice> axitkhurana: Which makes me ask: which log file are you talking about? (Path?)
[19:17:08] <axitkhurana> GothAlice: /var/log/mongodb.log
[19:17:25] <axitkhurana> * /var/log/mongodb/mongodb.log
[19:18:07] <GothAlice> That file should not contain nulls. It might potentially contain nulls if you have slow query logging enabled and your query contains nulls.
[19:20:25] <GothAlice> axitkhurana: My production MongoDB logs average about 50MB per day.
[19:20:42] <GothAlice> (With peaks of maybe 200-300MB.)
[19:20:47] <axitkhurana> GothAlice: The first line has a lot of nulls (that increases the size of the file) and the rest of the file has the actual logs.
[19:21:13] <GothAlice> Could you pastebin your mongod.conf and mongod command line?
[19:22:25] <riso> I want to make an update to add a new string variable to each document in mongo, how can I do this?
[19:23:19] <axitkhurana> GothAlice: mongod.conf http://pastebin.com/2gkWi3XY
[19:25:06] <axitkhurana> GothAlice: Would mongod commandline run if it's already running? (this is my production system)
[19:25:58] <GothAlice> axitkhurana: Uhm, do a "ps aux | grep mongod", find the process ID, and "cat /proc/<pid>/cmdline"
[19:26:41] <axitkhurana> GothAlice: /usr/bin/mongod --config /etc/mongod.conf
[19:26:51] <GothAlice> axitkhurana: Hmm. No logappend, so mongod is actually emitting those nulls on each startup? That's weeeeird, Jerry.
[19:27:03] <axitkhurana> GothAlice: Appreciate your help and patience here.
[19:27:26] <GothAlice> axitkhurana: Apologies, but at this point I'm stumped. I have no idea what might be padding that file to such extremes.
[19:27:40] <axitkhurana> GothAlice: no problem, thanks for looking into it.
[19:31:19] <GothAlice> Failing to solve a problem makes me sad. :( You might try enabling logappend, manually clearing the old logfile, and restarting the service at some opportune time that'll impact fewer users. You *really* should set up logrotate on that file to keep it to sane sizes, with manageable history.
[19:31:54] <GothAlice> (logrotate being a third-party utility on Linux, not a configuration option to mongod)
[19:34:31] <Streemo> I plan on using mongoDB and nodejs. Does anyone know any good *Book* or in depth guide that goes over using mongodb and how it works
[19:35:05] <axitkhurana> GothAlice: we're using logrotate to compress daily log files, that's why we hadn't noticed the huge log files till now, the compressed ones were very small.
[19:35:14] <daidoji> Streemo: the docs are pretty good for Mongo. A document store is a pretty simple device too
[19:35:33] <daidoji> Streemo: just think of it as a giant key-value store with BSON limitations for the keys and values
[19:35:45] <axitkhurana> GothAlice: I should mention we use replica sets (master slave like config), if it can affect the log files in some way
[19:36:15] <GothAlice> riso: Oops, your question got buried a bit there. db.collection.update({}, {$set: {newvar: somevalue}})
[19:36:29] <riso> GothAlice: thanks
[19:37:47] <Streemo> daidoji the docs do look pretty good, im liking the visual aids. do you recommend reading the entire thing?
[19:37:53] <GothAlice> Streemo: The documentation available on mongodb.org and mongodb.com include both high-level overviews in addition to the technical details, tutorials, whitepapers, links to blog posts, etc.
[19:37:59] <riso> GothAlice: what is the false and true here? db.Collection.update({}, { $set : { "myfield" : "x" } }, false, true)
[19:38:48] <GothAlice> riso: db.somecollection.help() in the interactive mongo shell. (somecollection should exist)
[19:39:03] <daidoji> Streemo: I read the entire thing cause thats kinda what I do, but you can skip a lot of the admin and sharding parts probably until you need fancy things like that
[19:39:10] <GothAlice> upsert=false, multi=true
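In other words, riso's positional shell arguments map to named options. The same update from Python, as a sketch (collection and field names from the example, database name hypothetical):

    from pymongo import MongoClient

    db = MongoClient()["test"]
    # The trailing shell args (false, true) are upsert and multi, respectively.
    db.Collection.update({}, {"$set": {"myfield": "x"}}, upsert=False, multi=True)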
[19:39:15] <Streemo> http://docs.mongodb.org/manual/ this
[19:39:24] <daidoji> Streemo: thats the one
[19:39:49] <Streemo> difference between .com and .org
[19:39:50] <Streemo> ?
[19:40:20] <GothAlice> Streemo: .com is commercial, .org is the open-source side
[19:40:30] <Streemo> but the docs are the same
[19:40:30] <GothAlice> The .com has whitepapers and other things about how people are using MongoDB.
[19:40:36] <Streemo> oh ok
[19:41:16] <GothAlice> Streemo: I appreciate the "consume all available material" approach to learning. I once read (and memorized) the 600 page "HTTP: The Definitive Guide" in one night and wrote a HTTP/1.1 server (in 171 Python opcodes) the next day. :D
[19:41:36] <GothAlice> (Had to do it while the info was still fresh…)
[19:41:58] <Streemo> heh, im at about a fifth of the pace
[19:42:20] <Streemo> 100 pages in a night is plenty enough for me to digest X_X
[19:42:37] <daidoji> GothAlice: do you have a photographic memory?
[19:42:48] <GothAlice> daidoji: When I choose to apply it, yes.
[19:43:06] <daidoji> GothAlice: ahhh, lucky.
[19:43:14] <GothAlice> daidoji: I've spent an obtuse amount of time training myself to forget things, actually.
[19:43:23] <daidoji> ahhh
[19:43:42] <Streemo> i try to get the big picture, cause usually i can just find the details when i need them.
[19:44:22] <GothAlice> Streemo: These days it's really about how quickly you can find information, not how much information you can retain. (The internet has had a measurable impact on the structure and function of our memories.)
[19:44:36] <Streemo> yeah exactly
[19:44:46] <Streemo> might as well use a bike
[19:44:48] <Streemo> if its around
[19:45:51] <GothAlice> … reading at 1200 WPM is kinda nuts.
[19:46:14] <Streemo> yeah thats a bit extreme, dunno how you managed that
[19:46:38] <GothAlice> Three words at a time. ^_^
[19:46:54] <GothAlice> When going that fast, yeah, I read in associated triplets.
[19:47:18] <Streemo> hmm thats a good idea actually
[19:47:37] <GothAlice> Streemo: http://www.spreeder.com for practice
[19:50:02] <Streemo> interesting
[19:51:47] <Streemo> i think the first four chapters of the docs should be enough for me
[19:53:14] <Streemo> and yeah i don't think speed reading is for me
[19:53:47] <Streemo> i like to read logical chunks of info and then integrate them into my current understanding chunk by chunk. this is slower, but helps me in the end xD
[20:06:25] <annakarin> hi, I'm a beginner trying to learn. is it possible to pass an "if true, else" argument on .find() ? example: .find( { a: "4", b: "3"} ), if true function(), else .find( {a:"3"} )
[20:15:12] <GothAlice> annakarin: Certainly: .find(condition ? {…true criteria…} : {… false criteria …}) — that'll only really work in the interactive shell, though. Each language has its own way of doing ternary statements (the name for those).
[20:15:45] <GothAlice> annakarin: However, the condition (say "function()") will only be evaluated once, for the entire query.
[20:34:26] <annakarin> GothAlice: thx !
[21:50:15] <Streemo> would mongodb be effective for sites in which users can store pictures, or perhaps via a chat client send pictures/video/media files?
[21:50:31] <daidoji> Streemo: depends on how you use it
[21:52:32] <Streemo> what would be an efficient way to do that? storing links or paths to files?
[21:53:40] <ejb> How can I pass values through $group?
[21:53:46] <ejb> (unchanged)
[22:02:23] <daidoji> Streemo: I would store paths to files personally. However, I think GridFS or whatever it's called allows one to store binary data in Mongo documents
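A sketch of the store-a-path approach daidoji favours; every name here is hypothetical, and the binary file itself stays on disk (or in object storage) while the document only records where it lives:

    // Hypothetical media-reference document; the file is NOT in MongoDB
    var userId = ObjectId();                 // stand-in for the uploading user's _id
    db.media.insert({
        owner: userId,
        path: "/srv/uploads/photo.png",      // hypothetical on-disk location
        mime: "image/png",
        uploadedAt: new Date()
    })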
[22:02:30] <daidoji> ejb: add them to the key
[22:03:31] <daidoji> $group: { _id: { ky1: "$key1", ky2: "$key2", ky3: "$key3" }, cnt: {$sum: 1}} etc...
[22:05:49] <ejb> daidoji: ok. weird
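A runnable version of daidoji's sketch against a hypothetical orders collection; whatever fields you fold into the $group _id ride through the stage unchanged, which is the "pass values through" trick ejb asked about:

    db.orders.aggregate([
        { $group: {
            _id: { ky1: "$key1", ky2: "$key2", ky3: "$key3" },  // passed-through values
            cnt: { $sum: 1 }                                    // documents per key combination
        } }
    ])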
[22:10:04] <darkblue_b> I see mention of
[22:10:07] <darkblue_b> "mongo c driver to version 0.8.1."
[22:10:18] <darkblue_b> how does that relate to mongodb version numbers..
[22:11:04] <doug1> Well, this sucks.... "Error: couldn't add user: not master "
[22:11:48] <doug1> if I create the replica set first, then add the admin user, that's fine.... until we need to add a new replica... then it'll fail
[22:11:54] <darkblue_b> Ubuntu 14.04 stock repo has v2.4.9 .. but I see refs to 2.6 elsewhere
[22:13:05] <darkblue_b> taking a look at a 10gen repo now
[22:13:58] <joannac> doug1: you can't write to a non-primary.
[22:14:14] <joannac> doug1: also, add a new replica set member, or a new replica set?
[22:14:33] <darkblue_b> 2.4-5-6-7
[22:14:39] <darkblue_b> hm messy
[22:14:50] <darkblue_b> what is what.. this isn't clear...
[22:15:11] <doug1> joannac: Sigh. So... I'm automating installation of a sharded replica set. The config has replset = foo, but since it has that option, I can't add the admin user until after there's a rs with a master...
[22:15:45] <doug1> if I create the rs first, then I have to have a way to find the master so THEN i can add the admin user...
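One bootstrap order that sidesteps doug1's chicken-and-egg problem, sketched under the assumption of a 2.6-era shell and hypothetical credentials: initiate the set, wait for a primary, then create the admin user via the localhost exception:

    // Run in a mongo shell on one node, before auth is locked down
    rs.initiate()                                    // minimal config; rs.add() the other members after
    while (!db.isMaster().ismaster) { sleep(1000) }  // wait until this node is elected primary
    db.getSiblingDB("admin").createUser({            // db.addUser() on 2.4 shells
        user: "admin",
        pwd: "changeme",                             // hypothetical credentials
        roles: ["userAdminAnyDatabase", "clusterAdmin"]
    })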
[22:16:03] <joannac> I don't understand what you are doing
[22:16:08] <joannac> add users through the MMS UI
[22:16:18] <darkblue_b> docs.mongodb.org says.. 2.6 == current
[22:16:21] <doug1> joannac: I'm not using MMS. (?!?!)
[22:16:37] <joannac> doug1: oh sorry, i heard "automation" and assumed
[22:16:46] <doug1> joannac: I have three nodes... each with replset = foo in the config
[22:16:51] <joannac> doug1: okay... and the problem is?
[22:16:55] <doug1> joannac: not sure I'd call MMS automated. hardly
[22:17:10] <joannac> you have to find the primary, it's the only place you can issue writes
[22:17:27] <doug1> joannac: yeah, and how would I do that via the cli?
[22:17:39] <joannac> connect to one, rs.status(), find the primary
[22:17:54] <doug1> and then parse json. is there a less sucky way?
[22:18:59] <joannac> not without parsing json if you want to do it through the CLI
[22:19:05] <doug1> sigh
[22:19:12] <joannac> db.isMaster().primary
[22:19:26] <joannac> there you go
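joannac's one-liner also works non-interactively, which avoids parsing rs.status() by hand; --quiet and --eval are standard mongo shell flags, and isMaster runs without authentication (host name hypothetical):

    $ mongo --quiet node1.example.com:27017/test --eval 'print(db.isMaster().primary)'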
[22:19:48] <doug1> the MMS... don't suppose you know if I can set the identity of the agent via a config file?
[22:20:07] <joannac> doug1: if all your hosts have the same username/pass, you can do it in the agent config
[22:20:58] <doug1> joannac: you mean I can bring up an instance, use <insert-config-management-tool-of-choice> to actually set the identity/role (ie config server or data node) via a config file in the agent?
[22:21:12] <doug1> ... rather than via the gui?
[22:21:21] <joannac> what?
[22:21:32] <joannac> doug1: you and I are using different terminology
[22:21:35] <doug1> joannac: lets try again...
[22:21:55] <doug1> joannac: i don't want to be telling the GUI to install nodes... that's not automated...
[22:22:08] <doug1> joannac: I want to tell the agent instead of the GUI. That can be automated
[22:22:30] <joannac> doug1: define what you mean by "Agent"
[22:22:38] <doug1> joannac: The MMS agent
[22:23:27] <joannac> which one. MMS has 3 agents
[22:23:34] <doug1> it does? checking
[22:23:37] <joannac> are you talking about the automation agent?
[22:23:54] <doug1> joannac: I'm talking about the MMS agent that installs mongo software on an instance for me
[22:24:28] <joannac> that's the automation agent
[22:24:34] <doug1> oki doki...
[22:24:44] <joannac> you cannot talk to the automation agent except via the MMS UI
[22:24:49] <doug1> oh sigh
[22:25:01] <doug1> yet another failed implementation
[22:25:28] <doug1> oh well. i have a call with mongodb at 4:45pm. I'll just have to give them some feedback from the real world
[22:26:11] <joannac> PDT?
[22:26:15] <doug1> Yes
[22:26:30] <joannac> mind PMing me who you're calling?
[22:27:10] <doug1> joannac: Not sure how... first names... Mike H
[22:27:29] <doug1> + an engineer
[22:29:20] <joannac> okay, cool
[22:29:39] <doug1> :-\
[22:29:46] <darkblue_b> any recommendation on which MongoDB version to start a project with on Ubuntu 14.04 ?
[22:31:29] <joannac> doug1: have you used mongodb before?
[22:31:58] <joannac> oops, not doug1, darkblue_b
[22:32:02] <joannac> sorry, mis-hilight
[22:32:17] <darkblue_b> joannac: no
[22:32:33] <joannac> 2.6.5
[22:32:35] <joannac> latest stable
[22:32:46] <darkblue_b> I will most certainly want to use a Postgres Foreign Data Wrapper, since i use Pg every day..
[22:33:04] <darkblue_b> I see in the Pg FDW page they want "mongo c driver to version 0.8.1."
[22:33:21] <darkblue_b> that version is not at all indicated by these other numbers
[22:33:43] <darkblue_b> so, a bit confusing to a newcomer
[22:33:58] <darkblue_b> mongodb docs say ... 2.6 is "current"
[22:34:07] <darkblue_b> safe to go with 2.6 you think ?
[22:34:15] <doug1> ^ wish I hadn't used mongo... :(
[22:34:19] <joannac> yes
[22:34:23] <darkblue_b> thx
[22:35:07] <darkblue_b> doug1 I have seen expert people get bitten by automated deploy issues.. I can safely say it is not unique to mongodb
[22:35:19] <darkblue_b> "a pebble can stop the parade"
[22:35:38] <doug1> darkblue_b: you wanna come over here and tell my boss that? I'm looking pretty silly after being at it for a few months. :(
[22:36:01] <darkblue_b> thats a more complex situation than just the tech
[22:36:09] <doug1> mongodb is about the hardest thing I've had to automate in my 16 years
[22:36:17] <darkblue_b> !!
[22:36:22] <doug1> harder than... oracle, microstrategy
[22:36:40] <darkblue_b> guess what - I was just contacted to setup somebody's system !!
[22:36:44] <doug1> and microstrategy is designed for windows
[22:36:44] <darkblue_b> never done it before !
[22:36:52] <darkblue_b> thats why I am here..
[22:37:03] <darkblue_b> you know .. crawl before walk before run
[22:37:21] <darkblue_b> its a cautionary statement from you, but I am not entirely dissuaded :-/
[22:38:24] <darkblue_b> I failed horribly after a month of SQLAlchemy in the past.. so I know the feeling..
[22:42:18] <doug1> ugh now I gotta learn javascript...
[22:43:45] <doug1> is there a way to print from a json file passed to the shell?
[22:44:11] <doug1> print
[22:44:40] <joannac> printjson
[22:44:48] <doug1> thanks
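A minimal example of printjson from a script file, since it behaves the same there as at the prompt (file name hypothetical):

    // test.js, run as: mongo localhost/test test.js
    printjson(db.isMaster())        // pretty-prints the whole document
    print(db.isMaster().primary)    // plain print for a single field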
[22:49:15] <GothAlice> When I return, I'll see if I can dig up that automation script of mine (finally) and sanitize it to Gist for y'all.
[22:49:23] <GothAlice> Seems to be a frequent issue over the last few days I've hung out here.
[22:55:41] <darkblue_b> doug1: this is what I have sitting in /usr/local/bin/prettyJSON.py http://paste.debian.net/128223/
[22:57:03] <doug1> darkblue_b: Thanks. I was trying to write some js to work out if I'm on the master and if I am, then check if the admin user exists, and if not, try and create it. Not simple
[22:57:25] <doug1> This works manually... if ( db.system.users.find({user:'admin'}).count() < 1 ) { but not from a script...
[23:02:39] <doug1> Argh! "errmsg" : "not authorized on admin to execute command { count: \"system.users\", query: { user: \"admin\" }, fields: {} }",
[23:02:55] <GothAlice> You're trying to execute that remotely, aren't you?
[23:03:10] <GothAlice> (I.e. you aren't benefitting from the localhost exception to authentication.)
[23:03:29] <doug1> GothAlice: Locally. Works when I do it manually. Fails when I do it from a js file (which has conn = new Mongo(); db = conn.getDB("admin"); in it)
[23:03:52] <GothAlice> doug1: How are you running that JS file?
[23:03:59] <doug1> i thought the localhost exception to auth went away after I added the admin user
[23:04:27] <GothAlice> … if you've already added the user, then you'll need to actually authenticate using it to perform that query.
[23:04:32] <doug1> Not sure... the cli is a bit confusing... one variation is mongo -u admin -p changeme test.js
[23:05:16] <doug1> ah wait this works ... mongo -u admin -p changeme admin
[23:05:21] <GothAlice> It would.
[23:05:23] <GothAlice> (conn = new Mongo(); … — that's a new, external connection, without authentication, that happens to default to 127.0.0.1. ;)
[23:05:24] <doug1> jeez this is confusing
[23:05:31] <doug1> so... mongo -u admin -p changeme admin test.js should work...
[23:05:36] <GothAlice> No.
[23:05:41] <GothAlice> the conn = new Mongo() at the top...
[23:05:53] <doug1> well it does... until the count request...
[23:07:20] <doug1> if I keep typing, like monkeys on a typewriter, I may get it eventually
[23:07:28] <GothAlice> Did you give your admin user the correct permissions? (My db-local admin users can't, for example, query the system collections in that database even though they are admins, which is intentional.)
[23:07:40] <doug1> ['dbAdminAnyDatabase', 'readWriteAnyDatabase', 'userAdminAnyDatabase', 'clusterAdmin' ]
[23:07:54] <doug1> but it works when I authenticate manually
[23:08:23] <doug1> I can run db.system.users.find({user:'admin'}).count() fine
[23:08:29] <doug1> returns 1
[23:09:05] <GothAlice> Looks like I've got the same set: ["readWriteAnyDatabase", "userAdminAnyDatabase", "dbAdminAnyDatabase", "clusterAdmin"]
[23:09:24] <doug1> how would I call the cli then?
[23:09:31] <doug1> not mongo -u admin -p changeme admin test.js ?
[23:10:01] <doug1> i don't understand why I have to pass admin on the cli when it's in the js
[23:10:26] <GothAlice> Where did you get the idea to "new Mongo" in your JS?
[23:10:41] <doug1> GothAlice here http://docs.mongodb.org/manual/tutorial/write-scripts-for-the-mongo-shell/
[23:11:27] <Boomtime> "mongo -u admin -p changeme admin test.js" <- your host name is "admin"
[23:11:28] <doug1> b = connect("localhost:27017/admin"); fails anyway with "mongo -u admin -p changeme test.js" lalalalala
[23:11:29] <Boomtime> ?
[23:11:30] <GothAlice> Ah, yes. My automation avoided using discrete .js files, instead echoing lines (via a pipe) into the shell.
[23:11:44] <doug1> my host name is admin?
[23:11:49] <doug1> no .... ...
[23:11:54] <Boomtime> "usage: mongo [options] [db address] [file names (ending in .js)]"
[23:12:11] <Boomtime> (that is the help from mongo cli)
[23:12:36] <doug1> well, this fails... "mongo -u admin -p changeme localhost"
[23:12:48] <Boomtime> fails how?
[23:12:55] <doug1> 'exception: login failed'
[23:13:00] <Boomtime> heh
[23:13:09] <Boomtime> you are authenticating against 'test'
[23:13:11] <GothAlice> "db address" is interpreted as a database name on localhost if a / is not present.
[23:13:21] <Boomtime> you need "--authenticationDatabase admin"
[23:13:26] <GothAlice> mongo -u foo -p bar localhost/admin
[23:13:46] <Boomtime> or that will work too
[23:13:46] <doug1> ok, that works... so...
[23:13:57] <doug1> mongo -u admin -p changeme localhost/admin test.js .... correct?
[23:14:06] <Boomtime> that should work
[23:14:14] <doug1> yes. god I have no idea why
[23:14:40] <Boomtime> mongo cli lets you use the uri format or options to specify the database to authenticate against
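The two equivalent spellings Boomtime describes, sketched with the hypothetical credentials from the discussion; the URI form names the auth database directly, while the flag form authenticates against admin while connecting elsewhere:

    $ mongo -u admin -p changeme localhost/admin test.js
    $ mongo -u admin -p changeme --authenticationDatabase admin localhost/test test.js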
[23:14:42] <doug1> although strangely it does print two lines
[23:14:50] <GothAlice> doug1: Localhost. Without it the default is likely actually '127.0.0.1', forcing a TCP connection instead of using a local on-disk socket. (Which is how most DBs behave.)
[23:14:50] <doug1> connecting to: localhost/admin and connecting to: localhost:27017/admin
[23:15:03] <Boomtime> the second line is your script
[23:15:07] <GothAlice> doug1: *Because you re-connect!* new Mongo()!
[23:15:11] <Boomtime> remove the script and see what it says
[23:15:15] <doug1> oic, ok
[23:15:35] <Boomtime> btw, it's rare to need "new Mongo()" in a script
[23:15:40] <GothAlice> Indeed!
[23:15:43] <Boomtime> you can just use the globals
[23:15:46] <doug1> back to this again now... ""errmsg" : "not authorized on admin to execute command { count: \"system.users\", query: { user: \"admin\" }, fields: {} }","
[23:15:50] <GothAlice> That's why I've been pointing at that line and screaming for half an hour. ;)
[23:16:07] <doug1> hm
[23:16:19] <GothAlice> doug1: You re-connect but don't *authenticate* the new connection.
[23:16:23] <GothAlice> It's not logged in after new Mongo()
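If a script genuinely needs its own connection, that connection has to authenticate itself; a minimal sketch using the hypothetical credentials from the discussion:

    // A new connection starts out unauthenticated, whatever the CLI flags were
    var conn = new Mongo();                 // defaults to localhost:27017
    var admin = conn.getDB("admin");
    admin.auth("admin", "changeme");        // authenticate THIS connection
    print(admin.system.users.find({ user: "admin" }).count());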
[23:16:52] <doug1> hm... so.... if I remove the new line... the error goes away
[23:17:07] <GothAlice> Of course it does. Suddenly it's authenticated because you authenticated on the CLI.
[23:17:17] <doug1> jeeeesus. Ok, thanks again
[23:17:19] <GothAlice> (And are actually *using* that connection.)
[23:17:33] <doug1> gotcha
[23:18:16] <doug1> suppose the js option is my best chance of automating this
[23:18:31] <doug1> ie http://pastebin.com/d28KjwX0
[23:22:28] <GothAlice> http://cl.ly/image/1h0r1X0h2233
[23:22:29] <GothAlice> :P
[23:22:54] <doug1> yep, that's me
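Putting the pieces together, a hedged sketch of the guarded admin-user script doug1 is after, relying on the shell's already-connected global db instead of new Mongo(); note that under the localhost exception the system.users count query itself may be refused, so attempting the create and tolerating "already exists" is the more robust guard (file name and credentials hypothetical):

    // create-admin.js, e.g.: mongo localhost/admin create-admin.js
    var admin = db.getSiblingDB("admin");
    try {
        admin.createUser({
            user: "admin",
            pwd: "changeme",                // hypothetical credentials
            roles: ["userAdminAnyDatabase", "clusterAdmin"]
        });
    } catch (e) {
        print("createUser failed (possibly already exists): " + e);
    }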
[23:39:32] <morenoh149> what does 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: shpe.events.$FB_event_id_1 dup key: { : null } (MongoError)' mean?
[23:40:03] <morenoh149> I don't see that column in my mongo viewer
[23:41:08] <joannac> do you have an index in the collection shpe.events, on FB_event_id ?
[23:41:56] <joannac> what fields do you have on that collection?
[23:41:59] <morenoh149> I don't have any indexes as far as I know. and I don't have that column
[23:42:12] <morenoh149> I have FBEventId: { type: Types.Number, required: true, initial: true, unique: true },
[23:42:42] <morenoh149> don't know what $FB_event_id_1 is
[23:43:00] <joannac> go into a mongo shell , into that database, then db.events.getIndexes()
[23:43:04] <joannac> and pastebin the result
[23:47:56] <morenoh149> https://gist.github.com/c6cb5ec9cff50567797f
[23:50:34] <joannac> that seems to suggest you have an index on FB_event_id
[23:50:38] <joannac> look at the second entry
[23:51:43] <morenoh149> right. so that must have been built by keystonejs or mongoose, not me as far as I can tell
[23:53:02] <morenoh149> updated the gist with my app code if it helps
[23:54:13] <morenoh149> joannac:
[23:56:25] <joannac> morenoh149: your app code is not relevant. you have a unique index, and your inserts are causing duplicate key exceptions
[23:56:59] <joannac> figure out where the index came from, and remove it if you think it's not useful
[23:57:25] <joannac> otherwise, you could modify your app code to catch the duplicate key exception and figure out what you want to do with it
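A sketch of joannac's advice, using the index name from morenoh149's error message; the sparse variant is an assumption worth noting, because "dup key: { : null }" usually means documents missing the field all index as null and so collide with each other:

    // Option 1: the index isn't useful, drop it
    db.events.dropIndex("FB_event_id_1")

    // Option 2 (assumption): keep uniqueness but skip documents missing the field
    db.events.dropIndex("FB_event_id_1")
    db.events.ensureIndex({ FB_event_id: 1 }, { unique: true, sparse: true })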
[23:58:28] <morenoh149> I think I'll just add a name field. Doesn't make sense for this model but that's what keystone is assuming in its code. Find a name field, use it as the unique id, and build an index with that.