[00:00:15] <landongn> anyone at all? just looking for some kind of stepping off point to dig, so far i've come up with squat.
[00:00:51] <landongn> it doesn't seem to affect performance much, if at all, but it's just something I can't explain and it shouldn't show that sort of behavior (given that none of the systems use any kind of NoCursorOption or set limits/batch_size())
[00:18:29] <landongn> can anyone tell me if closeAllDatabases is safe to run on a primary in a replica set?
[01:16:53] <jamiel> Hi guys, having a bit of trouble debugging an issue using the PHP Mongo driver and was hoping someone could help. I have a unique index on an id column which is in fact a unique numerical string, i.e. "123456", and in the local environment everything is dandy. On staging we have sharding enabled and the shard key is this id + created_at timestamp. Identical documents insert fine in the local environment but not staging, and I can't seem to get any error out of the driver
[01:43:01] <giessel> dgottlieb: so you know, i had to change 2 spots--- the one in builder.h (to 512mb), and another in dur.h. (increased UncommittedBytesLimit to the max unsigned int value- 4294967295)
[01:43:23] <giessel> dgottlieb: it compiled, but i haven't run anything. this was just on my laptop, i'll do it on my workstation tomorrow AM and see if it works
[03:06:50] <jamiel> Do I need to pass my shard key with the upsert? what is the parameter? have tried _shardKey?
[03:07:44] <jamiel> Found an error in my log now: User Assertion: 8012:can't upsert something without valid shard key
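A minimal mongo-shell sketch of the fix implied by that assertion: on a collection sharded on id + created_at, an upsert's query has to contain the full shard key. The collection name and values here are assumptions.

```
// hypothetical collection sharded on { id: 1, created_at: 1 }
// fails with "can't upsert something without valid shard key":
db.items.update({ id: "123456" }, { $set: { status: "ok" } }, true);

// works: the query carries the complete shard key
db.items.update(
    { id: "123456", created_at: ISODate("2012-06-27T00:00:00Z") },
    { $set: { status: "ok" } },
    true  // upsert
);
```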
[05:03:56] <tystr> is anyone here using 10gen's monitoring service?
[08:38:51] <wereHamster> question about data modelling. I have a model, it has 'spells'. In the application spells is a map from name -> Spell (names are unique, thus a map). In mongodb I can model it as an object, or as an array of {name, spell} pairs (and use a unique index to ensure names are unique).
[08:40:41] <wereHamster> when reading posts on the internet, the community seems to lean towards an array (because mongodb has better support for operations on arrays).
[08:58:16] <wereHamster> oh, yes. should I use an array or an object? :P
[09:00:08] <NodeX> they are treated the same in terms of being able to query/index
[09:03:02] <kobold> wereHamster: I personally use arrays, because I prefer the {spells.name: "something"} query syntax over the computed key name "spells.something"
[09:04:07] <kobold> wereHamster: the latter is a bit faster if I recall correctly, though; I benchmarked the two options a couple of months ago, but the difference is very small
[09:04:52] <wereHamster> it's not about speed. The data will be loaded from the database into the app once.
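For reference, a mongo-shell sketch of the two layouts and query styles being weighed here; the collection and field names are made up.

```
// Option A: array of { name, ... } sub-documents
db.characters.insert({ _id: 1, spells: [ { name: "fireball", level: 3 },
                                          { name: "heal",     level: 1 } ] });
db.characters.ensureIndex({ "spells.name": 1 });          // multikey index
db.characters.find({ "spells.name": "fireball" });        // query by value

// Option B: object keyed by spell name (the name becomes part of the path)
db.characters.insert({ _id: 2, spells: { fireball: { level: 3 },
                                         heal:     { level: 1 } } });
db.characters.find({ "spells.fireball": { $exists: true } });  // computed key name
```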
[09:40:02] <_johnny> i'm looking at http://www.mongodb.org/display/DOCS/Replica+Set+Tutorial but there's one thing i don't understand about elections: there's an example with a replica set of 3, where during failover the two remaining members vote. one votes for itself, the other for the first => the first is elected. how the first makes that decision (other than being the first to vote) i'm not sure, but my question: later in the docs it says running with two no
[10:04:18] <lackdrop> Does mongodb keep a query log? I want to see what queries a program is sending to it. I'm tailing /usr/local/var/log/mongodb/output.log but they're not there.
[10:05:51] <sirpengi> lackdrop: http://www.mongodb.org/display/DOCS/Database+Profiler does that help?
[10:09:50] <lackdrop> sirpengi: I saw that, but thought that it would just be profiling commands you ran through the mongo shell. Have I got that wrong? In any event, I can't even get the profile info for shell commands for some reason, though I'll keep at it.
[10:11:59] <lackdrop> sirpengi: Oh no, you're right, that does just what I need. Thanks.
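A minimal sketch of the profiler workflow being pointed at (database name assumed); level 2 records all operations, not just the ones typed into the shell.

```
use mydb
db.setProfilingLevel(2);             // profile every operation
// ... run the program against the database, then inspect:
db.system.profile.find().sort({ ts: -1 }).limit(10).pretty();
db.setProfilingLevel(0);             // turn it back off when done
```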
[10:16:42] <cclever> Hi there, I have a problem with $slice. on one machine (2.0.6) my query returns only the sliced field (and not the other properties of the document), and on the other machine (2.0.4) it returns the complete document with the correct part of the sliced property. Any idea how that can be?
[10:19:55] <NodeX> can you pastebin the query and the results from both machines
[10:25:29] <cclever> please find the query on top, then the result from machine A, and from line 2383 the result from machine B
[10:29:11] <NodeX> I can't even read the data properly - it's very long lol
[10:30:12] <lackdrop> Em, does update not support inserting a document if it does not exist or $inc a counter if it does? I'm getting "Modifiers and non-modifiers cannot be mixed"...
[10:30:58] <cclever> NodeX, correct: It is long. let me paste you a better example
[10:31:27] <NodeX> you can $inc lackdrop but not on the upsert field
[10:35:18] <lackdrop> NodeX: But this isn't valid? db.people.update( { name:"Joe" }, { x: 1, $inc: { y:1 } }, true ), meaning if Joe doesn't exist create him with x and y = 1, otherwise just inc y.
[10:35:57] <lackdrop> Oh, I suppose I could just $set x and that would work?
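A sketch of that $set idea, keeping the whole update document made of modifiers so the upsert is legal:

```
db.people.update(
    { name: "Joe" },
    { $set: { x: 1 }, $inc: { y: 1 } },
    true   // upsert: inserts { name: "Joe", x: 1, y: 1 } if Joe is missing;
           // otherwise re-sets x to 1 and increments y
);
```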
[10:38:39] <cclever> NodeX: better example http://pastebin.com/DuKEPRPh
[11:03:42] <cclever> nodex: yes, it is. the documentation says "Filtering with $slice does not affect other fields inclusion/exclusion. It only applies within the array being sliced."
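A small sketch of the documented behaviour being quoted, with made-up data: $slice trims the array but, by itself, leaves the other fields in the result.

```
db.posts.insert({ title: "hello", author: "x", tags: ["a", "b", "c", "d"] });

db.posts.find({}, { tags: { $slice: 2 } });
// expected: { _id: ..., title: "hello", author: "x", tags: ["a", "b"] }

// other fields only disappear if the projection says so explicitly:
db.posts.find({}, { tags: { $slice: 2 }, title: 1 });
```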
[11:53:20] <pnh7> Hi... can you give me more information about benchrun() ? I'm not quite sure how to analyze the output of benchrun. I couldn't find more information on this except http://www.mongodb.org/display/DOCS/JS+Benchmarking+Harness
[12:07:48] <lackjax> In an update, suppose I use the criteria { "_id": 1234, "somefield1": 1, ..., "somefieldn": n } where the somefields aren't indexed but the _id obviously is. That's not going to have any measurable performance impact, is it? I mean, the _id will ensure that the record to be updated is found in O(1) right?
[12:16:28] <wereHamster> lackjax: why do you include the other fields? The id is guaranteed to be unique
[12:31:01] <souza> Hello guys, i have a question: if i don't have a field in a collection and i use $set on that non-existing field, will it be created?
[12:43:34] <kali> souza: i just think i should have warned you against discovering C and mongo at the same time, and advise to start studying mongo with an easier to use language... maybe it's not too late
[12:46:43] <souza> kali: I had no choice, i had to learn C and MongoDB at the same time for a research project i'm working on.
[12:56:00] <algernon> there's nothing wrong with learning mongo and C at the same time. the problem is that the official driver doesn't make that easy :P
[12:56:46] <augustl> what are typical strategies for adding/maintaining indices on a db? I'm thinking I'll write a script that applies the indices to a given db.
[13:03:48] <kali> augustl: here we add a static method on the model classes, and we have a scriptlet that iterates over the classes and runs the method if it exists, which is plugged into the deploy scripts
[13:33:06] <icedstitch> ok... now, for another question.
[13:33:52] <icedstitch> How does one find the release versions for .deb files for ubuntu? Is there a webpage for it?
[13:42:08] <jY> icedstitch: version is in the filename
[13:42:29] <jY> for the mongo apt repo.. it will always be the latest stable
[13:44:17] <icedstitch> jY: awesome. What kind of turnaround should I expect for the .deb version to be ready when 2.2 stable is released: same day, 3 days, or two+ weeks?
[13:47:10] <icedstitch> i'm presuming the tar.gz will be the first to be released with the package managers coming later.
[13:47:18] <boll> I can't seem to figure out to what extent mongod can take advantage of multiple cores and hyperthreading. Is there any official documentation available on the subject?
[13:47:55] <jY> icedstitch: my experience is same time
[13:50:59] <icedstitch> boll: it's the same issues mysql faces too.
[13:51:15] <icedstitch> i know little of it, except via a blog post i read
[13:51:24] <icedstitch> that's all i can help with....
[13:51:39] <boll> I never really did any serious work on mysql, so I really have no idea how it works :)
[13:51:46] <icedstitch> give me a moment or two and I "should" be able to post you the link
[13:52:10] <icedstitch> i have, and I never ran into the issues with it that I had with mongodb on my x64 linux boxes
[13:52:21] <lackjax> Can this be done? I want to either insert { "_id": "1", name: "Bobo", "counter": 100 } or update the counter by 1 if it exists. My best effort is as follows, but doesn't work (inserts counter 101). db.tests.update({ "_id": "1", "counter": 100 }, { "$set": { "name": "Bobo" }, "$inc": { "counter": 1 } }, true)
[13:54:41] <icedstitch> boll: http://blog.manjusri.org/2011/03/17/administrating-mongodb/ read this, as it's pretty informative imo (i do wish to be corrected by the "experts" in this room if i should be); a little down the page you'll read about numactl
[14:05:59] <NodeX> that will set or increment "number" by/to 42 for the document bar = 1
[14:07:53] <lackjax> NodeX: Right. But I need to set to one number (arbitrary) on inserts but increment by another number (1) on updates. Sounds like I need the stuff talked about in 340. There's no way to do that at the moment?
[14:13:52] <lackjax> It seems odd to me that if you put counter: 100 in the criteria and $inc => counter: 1 in the update you get 101. Running the modifiers on something that's just been created seems strange to me, but I presume that's intended behaviour?
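A sketch of the behaviour being described: on an upsert that inserts, the equality fields of the query seed the new document and the modifiers are then applied on top of them. (Later server versions add $setOnInsert for insert-only values, but that isn't available in this era.)

```
db.tests.update(
    { _id: "1", counter: 100 },                        // no match -> insert
    { $set: { name: "Bobo" }, $inc: { counter: 1 } },
    true
);
db.tests.findOne({ _id: "1" });
// { _id: "1", counter: 101, name: "Bobo" }   // 100 from the query, +1 from $inc
```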
[14:24:42] <spillere> can anyone help me with a query? My db has, for example, {username: "spillere", photos: [{photo: "name", url: "url"}, {photo: "name2", url: "url2"}]}
[14:25:05] <spillere> if I have name2, how can I get all the information in that {}?
[14:43:41] <spillere> NodeX thanks, but I can't quite understand it all
[14:44:25] <spillere> when I do db.dataz.find({'username':'dansku'},{'photos.filename':'zc2gjq0vfi.jpg'}), it gets all photos.filenames, and not just the jpg I want
[14:46:57] <spillere> for a given photos.filename, how do I get just that {} element?
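A hedged sketch: on MongoDB 2.2+ the positional $ projection can return just the matching array element; on older servers the filtering has to happen in application code after fetching the array.

```
db.dataz.find(
    { username: "dansku", "photos.filename": "zc2gjq0vfi.jpg" },
    { "photos.$": 1 }     // only the photo sub-document that matched
);
```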
[14:47:29] <augustl> will ensureIndex({name: 1, userId: 1}, {unique: true}) make the name uniqueness constraint scoped to userId? So that the uniqueness only applies to other documents with the same userId?
[14:49:42] <NodeX> augustl : from what I remember it will make name and userId unique
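A small sketch of what that compound unique index enforces: uniqueness of the (name, userId) pair, so the same name can repeat under different userIds (collection name assumed).

```
db.things.ensureIndex({ name: 1, userId: 1 }, { unique: true });

db.things.insert({ name: "foo", userId: 1 });   // ok
db.things.insert({ name: "foo", userId: 2 });   // ok, different pair
db.things.insert({ name: "foo", userId: 1 });   // E11000 duplicate key error
```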
[14:55:14] <algernon> DETROIT_POWER: only as much as a duck is a hawk.
[14:55:16] <augustl> DETROIT_POWER: sounds like something you should do in your application. I don't see why a document database should have this in its core
[14:55:39] <NodeX> this is an app-side problem which most people use some sort of ORM to deal with
[14:55:58] <spillere> NodeX I understand, thanks! :)
[14:56:04] <augustl> without knowing much about internals, perhaps write/read performance could improve by knowing beforehand the size of the documents?
[14:56:25] <DETROIT_POWER> Well, can I still do joins?
[14:56:54] <augustl> I'm not aware of any document databases with joins
[14:57:17] <DETROIT_POWER> NodeX: Where can I read about embedded documents?
[14:57:28] <augustl> DETROIT_POWER: the docs have some pages on schema design
[14:57:41] <NodeX> as I said - forget RDBMS and everything you know about it because it will hinder your development with mongo; you cannot think of it in terms of anything relational
[14:57:56] <NodeX> your head will hurt, you have to re-think how you store / access data
[14:58:23] <DETROIT_POWER> Mr. Codd's DB2 ruined me
[14:58:25] <NodeX> DETROIT_POWER : Do you know much about Json ?
[14:58:43] <DETROIT_POWER> NodeX: I am familiar with it, yes.
[14:59:01] <NodeX> you can think of Mongo as searchable Json really
[15:02:00] <NodeX> it will cluster and shard chunks of data in a nice scalable and fast way
[15:02:16] <modcure> NodeX, db.posts.find({"photos.photo":"name2"},{ username: 0, _id: 0 }, {photos:{$slice:[1,1]}}); <-- any reason why the $slice does not work in this case?
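One likely explanation, offered as an assumption about this case: the shell's find() takes the projection as its second argument, so a $slice passed in a third argument is never treated as a projection. Folding it into the second argument should behave:

```
db.posts.find(
    { "photos.photo": "name2" },
    { username: 0, _id: 0, photos: { $slice: [1, 1] } }
);
```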
[15:06:46] <DETROIT_POWER> I am using the shell now and I just locked it up when I tried to get the Global Object with: var m = (function() { return this; })()
[15:06:52] <giessel> i'm about this close || to finishing my patch
[15:07:14] <DETROIT_POWER> Is Spidermonkey really good now?
[15:07:28] <DETROIT_POWER> Is the shell spidermonkey too?
[15:09:39] <spillere> like have a collection for users and a collection for pictures?
[15:09:50] <DETROIT_POWER> NodeX: Looks like it is. I can run jQuery from the console :)
[15:10:15] <spillere> NodeX so every picture has its own ID then?
[15:10:22] <NodeX> DETROIT_POWER : I moved from RDBMS about 2-ish years ago and have not found one thing (apart from aggregation) that I cannot do with Mongo, and it's extremely fast and mundo scalable
[15:10:27] <NodeX> I cannot recommend it highly enough
[15:10:40] <NodeX> spillere : that's what I did and it worked better for me
[15:11:45] <DETROIT_POWER> NodeX: How do you go about ensuring you don't suffer data loss? My boss thinks Postgresql is much safer for storing our CRM's data.
[15:12:07] <NodeX> data loss in transit or failures
[15:13:13] <NodeX> for transit there are options to make sure the data is "fsynced" to disk before the function returns an "ok", for redundancy there is what's called "replicaSets" which are master/slave
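A sketch of that era's mechanics from the shell (collection name made up); drivers expose the same thing through their "safe"/write-concern options, with names that vary per driver.

```
db.orders.insert({ _id: 1, total: 9.99 });
db.runCommand({ getLastError: 1, j: true });   // wait until journaled on disk
db.runCommand({ getLastError: 1, w: 2 });      // wait until a secondary has it too
```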
[15:17:09] <DETROIT_POWER> NodeX: Why do you prefer Mongo over SQL? And do you use an ORM?
[15:17:45] <NodeX> I prefer the way it stores / handles data, I don't like the quirks of *SQL, I don't like the lack of security of SQL, I don't use an ORM
[15:17:46] <dgottlieb> giessel: I don't know anything about those files, but I am curious what you're looking at in there
[15:17:54] <NodeX> I built my own wrapper to a driver as a helper
[15:25:14] <multiHYP> i mean, is anyone making use of mongodb's date facilities, or is date handling done in the app layer?
[15:25:17] <augustl> DETROIT_POWER: typically, full text engines are horrible at searching for "first name NOT contains 'foo'" for example, due to the way the indexing works
[15:25:38] <dgottlieb> giessel: awesome! I saw the half gig BSON size and just thought there's no way this is going to work if the data is actually that big
[15:25:47] <NodeX> I realised after a lot of messing about that it's far better to let SOLR do my searching
[15:26:03] <multiHYP> especially with custom formats.
[15:26:18] <giessel> dgottlieb: generally i think our largest objects will be around 150mb. but it remains to be seen.
[15:26:18] <kali> multiHYP: custom formats ? that's not a model thing
[15:26:19] <dgottlieb> giessel: I think 168MB is pretty much just as large as half a gig :)
[15:26:33] <NodeX> give or take a few hundred meg lol
[15:26:41] <DETROIT_POWER> So, it's just really slow to find a substring in Mongodb compared to SQL where you have LIKE / ILIKE?
[15:26:44] <multiHYP> kali: so how do you use mongodb's date?
[15:26:48] <giessel> dgottlieb: we're a really unique case- we're not pushing stuff over the web, this will be for local storage and analysis. our data doesn't break up well
[15:26:53] <NodeX> anything that's not prefix-anchored, yes
[15:26:55] <augustl> DETROIT_POWER: no, LIKE is similar (ish)
[15:27:03] <augustl> DETROIT_POWER: solr is much faster than LIKE in SQL
[15:27:03] <kali> multiHYP: to store date, in a canonical form
[15:27:05] <dgottlieb> multiHYP: most drivers will convert their driver specific date into mongo's date type
[15:27:07] <NodeX> as I say it has regex .... but Mongo is not designed for this kind of thing
[15:27:08] <kali> multiHYP: the rest is application land
[15:27:13] <giessel> dgottlieb: it's crazy because every time i've logged in here i see people complaining about the 16mb limit heh
[15:27:28] <multiHYP> kali: can you give a query example of that?
[15:27:30] <dgottlieb> multiHYP: sorry, most drivers will convert their language's standard date objects into mongo dates
[15:27:50] <NodeX> giessel : doesn't gridfs split it for you - I thought that was the point of it
[15:28:26] <algernon> not only is SOLR much faster than LIKE, it is also more powerful.
[15:30:35] <augustl> DETROIT_POWER: that depends on what you index in solr, you can have it return the indexed data, or the ids and then do a second query to mongo and get actual data from there
[15:30:36] <NodeX> DETROIT_POWER : no, mongo would be your datastore and db for main searching; if you have things you need to use lucene on, you turn to solr
[15:30:38] <kali> multiHYP: storing dates as string is usually asking for trouble :)
[15:30:46] <giessel> had to build a meta layer of code to deal with metadata and such
[15:30:51] <NodeX> for example, I index my "jobs" collection, but not my "users"
[15:32:22] <kali> multiHYP: year month and day is equivalent to year month and day at midnight..
[15:32:22] <giessel> NodeX: it does work, it's just inconvenient and takes a lot of consideration to get it right
[15:32:27] <remonvv> No it isn't. This discussion comes up every other day. What date you're storing and how it's visualized are not related. If you want to store only a year/month/day, simply store a clamped date and visualize just the year, month and day
[15:33:04] <giessel> NodeX: we had to search through our dictionary, looking for data of a given type, then encode those data, insert them in the gridfs and replace them in our dictionary with a gridfs path
[15:33:17] <DETROIT_POWER> NodeX: So if I wanted to do a full text search *and* handle another criterion like a price range, I would take the document IDs output by SOLR and supply them to Mongodb along with my price range condition?
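A sketch of that two-step pattern with made-up names: SOLR answers the full-text part and hands back ids, mongo applies the structured criteria (the price range) and returns the documents.

```
var idsFromSolr = ["4fe000000000000000000001",
                   "4fe000000000000000000002",
                   "4fe000000000000000000003"];   // from the SOLR response

db.products.find({
    _id:   { $in: idsFromSolr.map(function (id) { return ObjectId(id); }) },
    price: { $gte: 10, $lte: 50 }
});
```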
[15:33:41] <multiHYP> i know, but it's bigger in size and unnecessarily verbose
[15:33:54] <remonvv> you either store a date or a non-date value that happens to look like part of a date.
[15:34:05] <multiHYP> it's like saving too much useless repetitive date information.
[15:34:08] <NodeX> DETROIT_POWER : one of my apps is pure SOLR for searching, but it's because of my need for facets and FTM. If I didn't need it I would do it all in Mongo as it's less data to handle
[15:34:31] <multiHYP> remonvv: so that's what I'm asking. is it possible to have a custom date stored in mongodb?
[15:34:35] <remonvv> multiHYP. there are VERY few valid use cases for storing just the year, month and day that are not related to visualisation
[15:34:58] <multiHYP> again, i don't care about visualisation either.
[15:35:04] <remonvv> yes, convert it to something you're more comfortable with and store that. It just won't be a date (and date-specific functionality will therefore not work)
[15:35:05] <NodeX> DETROIT_POWER : in SOLR you can store/index as much as you want... I spit out a lot of data to a lot of users so I prefer not to hit mongo every time I need to do that, and I store enough for a "quick view" of a job for example
[15:35:53] <multiHYP> as binary blob? how would that be in java?
[15:35:55] <remonvv> no it doesn't. All mongo range checking is binary based so if you store it binary correctly (e.g. store year, month, day in that order) it'll work
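A sketch of that ordering point: if the parts are stored most-significant-first (e.g. a YYYYMMDD string), plain comparison orders the same way the dates do, so range queries still behave. Collection name is made up.

```
db.events.insert({ day: "20120627" });
db.events.insert({ day: "20120701" });

db.events.find({ day: { $gte: "20120601", $lt: "20120701" } });
// matches "20120627": string order == chronological order for this layout
```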
[15:36:05] <remonvv> in Java. Hm, think it has a wrapper type
[15:36:09] <DETROIT_POWER> I see. But if you have a complex query combined with a full text search, you'd still need to hit up Mongo? For example, searching within a date range.
[15:37:00] <remonvv> also sortable but much bigger than just storing a date
[15:37:03] <NodeX> DETROIT_POWER : one thing I will recommend you bear in mind with Mongo is that values are strictly typed. in SQL you can do SELECT foo FROM bar WHERE foo=1; .... SELECT foo FROM bar WHERE foo='1' - both being the same (in mysql at least)
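A tiny sketch of that point: a number and the string of the same digits are different values in mongo, so the two queries below match different documents.

```
db.bar.insert({ foo: 1 });     // number
db.bar.insert({ foo: "1" });   // string

db.bar.find({ foo: 1 });       // matches only the numeric document
db.bar.find({ foo: "1" });     // matches only the string document
```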
[15:38:54] <multiHYP> how big would the string "20120627" be?
[15:38:55] <NodeX> you can do upserts - which is one of the best things ever invented
[15:39:08] <multiHYP> well key, value: {"date":"20120627"}
[15:39:22] <kali> 4 bytes for size, 8 for data, 1 for final 0 -> 13, I win
[15:39:29] <remonvv> multiHYP, 64 bits. Smallest amount of bits to get year/month/day in is probably years since 1970 plus month plus day so say 7 + 5 + 6 = 18 bits
[15:42:20] <multiHYP> kali: it's not binary: it's {"date":"20120627"} as opposed to ISODate("2012-06-27T15:39:24.886Z")
[15:42:23] <remonvv> long story short, unless you're storing 100 dates per document and those extra bytes start to become an issue, i'd just use dates
[15:42:40] <multiHYP> now which one looks cleaner?
[15:42:42] <kali> multiHYP: your date as a string consumes 13 bytes.
[15:42:47] <NodeX> DETROIT_POWER: some of my solr stats: a 250,000 (ish) document database; a full geospatial bounding box search with 4 varying keywords has a search time of 19ms
[15:58:58] <multiHYP> I know what I will do, in db i use ISODate for everything, but where I need the date, I label it as "date" and where the time matters, I put it under "time" label.
[15:59:19] <NodeX> I normally break the data into an object
[16:20:11] <southbay> Can anyone recommend an article which discusses the syntax for reaching into an array of documents to pull out specific fields of a document which sits in the array? I'm not sure if I must use a for loop or if dot notation makes it possible.
[16:26:13] <multiHYP> southbay: you need to fetch the entire array and iterate through it.
[16:33:00] <southbay> Yeah, I was hoping to write some test code through the mongo cli, but it looks like I'll have to use LINQ to grab the info. : \. Pardon the noob request, I'm looking to take the turn away from MS SQL and use Mongo instead.
[16:40:03] <southbay> multiHyp: I realized my syntax was wrong. You can use dot notation to reference fields within documents which reside in an array structure.
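A quick sketch of that dot-notation form, with made-up collection and field names:

```
db.orders.find(
    { "items.sku": "ABC-123" },            // match inside the embedded array
    { "items.sku": 1, "items.qty": 1 }     // project just those fields
);
```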
[16:51:56] <igor47> so, i'm building a test environment; can i have a replica set with just one machine in it?
[16:55:02] <igor47> my one node became primary, but php is upset and says "couldn't determine master"
[17:21:17] <southbay> igor47: I have not delved into the replication piece of mongo yet, but is there any issue with the port that's bound or IP that's bound?
[17:21:35] <southbay> binding conflict is what I'm getting at.
[17:31:02] <_johnny> is it possible to use geospatial to check an input coordinate against a polygon? so instead of looking through the db to find items with a location within the poly, rather check if a coordinate is within it -- or should i rather implement an algorithm specifically for that?
[17:31:27] <_johnny> -- that's essentially what i'm trying to avoid. i'd like to use mongo's engine for it :)
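For reference, a sketch of what the geo support of this era offers: finding stored points inside a polygon supplied in the query. The reverse check (stored polygons, query point) isn't available here and would need application code or a later server version.

```
db.places.ensureIndex({ loc: "2d" });
db.places.insert({ name: "a", loc: [2, 2] });

db.places.find({
    loc: { $within: { $polygon: [[0, 0], [0, 5], [5, 5], [5, 0]] } }
});
```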
[18:20:19] <jenner> guys, I'm getting lots of these http://pastie.org/private/okokf2vs2clckg3fj10qwg does anyone know what they mean?
[18:39:49] <mids> Loch_: you can't just return an inner object
[18:40:56] <Loch_> Alright, so I can do the findOne on the users, then return the whole object, but then I cannot use the MongoDB methods to find a particular game in that object, correct?
[18:57:58] <tubbo> i'm looking to convert some data OFF of mongo and onto PostgreSQL. i want to do this by exporting CSV and importing those CSV files into PG directly with the COPY command
[19:00:58] <tubbo> mids: yes, they map to relations in the database
[19:01:41] <tubbo> mids: basically, in order to preserve associations i need to convert all BSON objectids into the numerical ID assigned when the row is inserted into postgres
[19:28:27] <erick2red> mids, what I want to know is some impressions from the people who use it. I'm considering using it for some project, and I want to know first what you people think of it
[19:28:44] <mids> erick2red: you need to be more specific.
[19:28:54] <tubbo> erick2red: you're asking #mongodb what they think of mongodb? that sounds COMPLETELY unbiased lol
[19:30:34] <tubbo> mids: i'm actually running this through a Rake task so that's why it was escaping :)
[19:31:14] <erick2red> tubbo, I know the answers can be biased, but I can manage that; actually the impressions I heard before are biased as well, along the lines of "Oracle DB FTW" and stuff
[19:31:19] <mids> tubbo: if you need to turn these into numeric IDs anyway, you can do that in your Ruby code in one pass
[19:31:32] <mids> no need to use obscure tools like sed :)
[19:32:17] <mids> and storing 400mb files in Oracle doesn't sound heavy?
[19:32:19] <tubbo> mids: sorta. this is safer because i can create temp tables. it's more annoying to do that totally within ruby
[19:32:51] <tubbo> my idea is to extract everything to temp tables, then create the new schema, then i just import the relevant data, converting the shit i need to convert along the way
[19:32:59] <tubbo> but this way nothing gets lost accidentally, and we can always roll back
[19:37:57] <bosky101> hi, my mongo shell queries on a replica set are returning in under a second, while the native mongo driver is taking 10 minutes on a server with no load (the final number of docs is > 1 million rows). any idea what i can do to tweak this or why it's taking so long?
[19:56:09] <planrich> so the whole document has an id when i insert, but the measurements do not have an id. basically i would like to identify each of my measurements by some kind of id. is the array index a good choice?
[19:59:08] <planrich> might be helpful: http://www.mongodb.org/display/DOCS/Full+Text+Search+in+Mongo
[19:59:43] <mids> planrich: what do you need to do with the measurements?
[20:00:18] <planrich> mids: delete/update the values, so i need some kind of identification of each measurement
[20:02:39] <planrich> and of course i can use the index to find the measurement again (i suppose mongodb will not reorder arrays unless i tell it to), but my question is whether the array index is a good idea, or should i try to generate an object id for each of my measurements?
[20:59:44] <jasonframe> hey guys, have a question about replica set setup. We have a set with two servers running primary / secondary and an arbiter running on an app server
[20:59:49] <jasonframe> we are going to add another app server under a load balancer, what is the best practice with the arbiter
[21:04:58] <linsys> jasonframe: you want an odd number of votes so currently you have 3 votes, don't add another arbiter
[21:06:10] <jasonframe> I was thinking of keeping the single arbiter on app server 1, and none on app 2; just needed a second opinion for sanity
[21:06:42] <linsys> jasonframe: get rid of the arbiter and deploy 3rd mongodb replica set member...
[21:06:48] <linsys> that is really the only other option as I see it
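A sketch of that option, run from the current primary; hostnames are placeholders.

```
rs.add("mongo3.example.com:27017");      // new data-bearing member
rs.remove("app1.example.com:27018");     // drop the old arbiter
rs.status();                             // votes stay at an odd number (3)
```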
[21:15:30] <tystr> is anyone here using 10gen's monitoring service ?
[21:20:27] <BurtyB> tystr, I am and I imagine a lot of others are too
[21:21:55] <tystr> what's the best practice for running the agent? should I run it on each node in my replica set?
[21:47:39] <godawful> I've got a degraded replica set: 2 nodes up, one node down. I'm replacing the borked node with a new one from backups.
[21:48:08] <godawful> Just wondering about the best procedure to add it without disturbing the surviving nodes: Remove the dead one first, then add? Or just add it?
[21:48:34] <godawful> I'm worried that the primary's going to unelect itself if the voting changes or something...
[21:49:08] <ranman> godawful: just add it then remove the old one is my suggestion
[21:49:21] <ranman> or you can add it as the old one if you have it on ec2 or something
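A sketch of the add-then-remove order being suggested, run on the primary; hostnames are placeholders.

```
rs.add("newnode.example.com:27017");
rs.status();                               // wait for the new member to start syncing
rs.remove("deadnode.example.com:27017");   // then drop the dead one
```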
[23:10:10] <benop> anyone able to help me with a weird connection error?
[23:10:34] <benop> MongoDB.Driver.MongoConnectionException: Unable to connect to server: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full
[23:10:44] <benop> the server in question has only 24 open connections
[23:12:25] <dstorrs> so, with update I can use $inc to atomically update one element, even an array item. Is there a way to say "here's an array of values, atomically add them to the corresponding array values at 'key'"?
[23:13:53] <dstorrs> so, if the doc is { views_by_day : [ 0, 1, 2, 3 ] } and I add the array '[3, 4, 5, 6]', I should be left with a doc that says { views_by_day : [ 3, 5, 7, 9 ] }
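One way to get that result, sketched here under the assumption that the array length is known, is to address the slots by index and $inc them all in a single update, which is atomic per document; the $inc document would be built from the input array in application code.

```
db.stats.update(
    { _id: 1 },   // the document holding { views_by_day: [0, 1, 2, 3] }
    { $inc: { "views_by_day.0": 3, "views_by_day.1": 4,
              "views_by_day.2": 5, "views_by_day.3": 6 } }
);
// result: { views_by_day: [3, 5, 7, 9] }
```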
[23:34:52] <dstorrs> with update(), how do the '$set' and '$inc' operators interact with upsert : 1 ?
[23:35:29] <dstorrs> if the record does not exist, does it get init'd, then '$inc' treats unset values as 0 ?
[23:36:03] <dstorrs> what if the record exists, but has a slightly different structure than what you're updating? (keys missing / extra keys added)
[23:40:40] <ferrouswheel> for your previous question you might be able to use the positional modifier: http://www.mongodb.org/display/DOCS/Updating#Updating-The%24positionaloperator
[23:40:52] <ferrouswheel> but I don't know how $inc and the $ positional interact
[23:41:19] <ferrouswheel> upserts for inc assume 0
[23:41:58] <dstorrs> ferrouswheel: *cough* "The positional operator cannot be combined with an upsert since it requires a matching array element. If your update results in an insert then the "$" will literally be used as the field name."
[23:42:05] <dstorrs> from the link you just pasted. :>
[23:42:42] <ferrouswheel> okay, I was referring to your previous question where you didn't specify you needed an upsert.
[23:44:07] <russfrank> is there a better way to extract frequency information on a particular field than some group query?
[23:46:19] <halcyon918> is there a way (or even a need) to use the repset name when connecting to mongo via the java driver? (our DBA told me "from the app, you can use repset name" but I don't see anything to that effect in the online docs)
[23:51:27] <dstorrs> ferrouswheel: ah, ok. my bad.