[00:15:56] <livinded> joannac: cool. I disabled auth and it's working fine for the second node now. But the arbiter is still unknown and the last heartbeat is "still initializing"
[00:19:02] <joannac> How did you start the arbiter? How did you add it?
[00:20:07] <livinded> I started it like any other mongo instance and then did rs.addArb(<host>)
[02:22:11] <favadi> Yes. MongoDB keeps all of the most recently used data in RAM. If you have created indexes for your queries and your working data set fits in RAM, MongoDB serves all queries from memory.
[02:22:41] <favadi> So I wonder: if my data set is small enough, all data files will be served from RAM, right?
[02:23:25] <favadi> Then there will be no point in running mongodb in tmpfs like http://edgystuff.tumblr.com/post/49304254688/how-to-use-mongodb-as-a-pure-in-memory-db-redis-style
[02:52:37] <safani> I am trying to sort documents by an embedded document value
[02:52:51] <safani> for example {title:"title", name: "name", embed: [{id:4,sort:1},{id:5,sort:2},{id:6,sort:3},{id:7,sort:4}]} I want to sort based on the embed sort number
[03:11:22] <jackh> Hi, Derick, joannac, mstearn, Number6: I'm from IBM. Since IBM Power Servers are big endian, we made some modifications to make mongodb work on our servers. Now lots of customers want our trial source code supported by the mongodb community. How do we get it merged into the mongodb mainline?
[03:14:33] <safani> i want to be able to return products in a category and then sort them based on the categories.sort number determined by the categories.id
[03:20:22] <safani> joannac there are several subdocs in the categories key
[03:20:37] <safani> i want to sort by only one each time
[03:21:07] <safani> that's how you set a value, so i figured..
[03:21:12] <joannac> When I said actual documents, I mean actual documents. Not a schema
[03:22:39] <safani> heres a sample document http://pastebin.com/WZtDEWct
[03:23:08] <joannac> And what do you want the result to be?
[03:23:41] <safani> i cannot sort on a specific categories.id using categories.sort
[03:24:15] <safani> I want to go into a specific category and pass that id as the parent. I then find the children of the category and all products contained
[03:25:10] <safani> i want to sort the products based on one of the subdocuments in "categories" the one i will sort on is the one that matches the categories.id with the current parent category id i am in
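The matching-subdocument sort safani describes can be sketched in plain Python. Field names (`categories`, `id`, `sort`) are taken from the sample document; in MongoDB itself this would need an aggregation pipeline, and the sample data below is made up just to show the sort-key logic:

```python
# Hypothetical sketch: order the products in a category by the `sort`
# value of the embedded category entry whose `id` matches the current
# parent category id.

def sort_key(product, parent_id):
    """Return the sort number of the category entry matching parent_id."""
    for cat in product.get("categories", []):
        if cat.get("id") == parent_id:
            return cat.get("sort", 0)
    return float("inf")  # products without the category go last

products = [
    {"title": "B", "categories": [{"id": 4, "sort": 2}]},
    {"title": "A", "categories": [{"id": 4, "sort": 1}, {"id": 5, "sort": 9}]},
    {"title": "C", "categories": [{"id": 5, "sort": 1}]},
]

# filter to the parent category, then sort by the matching subdocument
in_category_4 = [p for p in products if any(c["id"] == 4 for c in p["categories"])]
ordered = sorted(in_category_4, key=lambda p: sort_key(p, 4))
print([p["title"] for p in ordered])  # ['A', 'B']
```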
[09:51:47] <NodeX> gotta investigate a mongodb problem :/
[09:51:56] <sinclair|work> NodeX: i was banned too
[09:52:11] <NodeX> a weird installation which won't start properly is somehow causing a segmentation fault in php
[09:52:13] <sinclair|work> NodeX: i didn't even do anything
[09:52:49] <NodeX> sinclair|work : I told an OP to die slowly because she PM'd me without asking after going mad at me for telling someone to not PM me without asking because it's rude
[10:38:30] <NodeX> when IE get more on board with standards they will get supported
[10:38:52] <NodeX> if more developers acted this way then they would have no choice but to be more standardised
[10:39:23] <NodeX> + it's not like the old days where it was a choice between IE and Firefox - there are now a lot more choices and reasons to NOT use IE at all
[10:41:12] <Zelest> just take something as "simple" as a regular AJAX call in IE..
[10:41:21] <Zelest> you have to hack your JS to support IE :S
[10:41:40] <Zelest> and if the JS fails, IE just says "error" and gives you no debug information whatsoever.. -_-
[10:43:17] <NodeX> Microsoft have a certain pre 2008 arrogance about them where they think they rule the software world
[10:43:53] <NodeX> it shows with Winblows 8 where they think they can just change core things (that make winblows usable) and people -must- put up with it
[10:46:28] <Zelest> well, to be fair, so did Apple long before they became mainstream :)
[10:46:43] <Zelest> but then again, people liked it and embraced it.. that's not the case for most Microsoft changes.
[10:46:54] <Zelest> they just change things for the sake of being different.. not improving their products.
[10:47:16] <Zelest> Windows is good for gaming, that's it.
[10:47:28] <Zelest> If you're not a gamer, you have no reason whatsoever to run Windows.
[11:13:52] <sooraj> anyone knows how to recover mongodb data if repair fails ?
[13:06:10] <angasulino> http://pastebin.ca/2472476 <-- is this the right way to do a @Document if I want to access the id?
[13:07:01] <cheeser> is which part right? the getter?
[13:09:24] <angasulino> cheeser, getter and setter for the id
[13:09:32] <angasulino> I'm getting the data right, except for the id
[13:09:57] <cheeser> that's how you do it with morphia (with which i'm familiar). i don't see how spring-data would be any different
[13:10:30] <angasulino> cheeser, thanks, I'll try a couple more things, I just wanted to check if there was something obviously wrong
[13:11:13] <cheeser> not that I can see. you might have a timing issue in terms of when you're trying to get it vs when spring data sets that.
[13:59:04] <angasulino> solved, I just avoided specifying the collection, since I wasn't specifying it quite right everywhere, and the default is fine.
[14:13:24] <eldub> Is there a script that I can run to test my replicaset? something that will create/write to a db and I can test node failures
[14:13:41] <ebragg> I'm looking for a way to query an Array of objects for any of the sub-fields. Is there a way to have MongoDB flatten the Array into a string via something to the effect of: array.toString()
[14:15:01] <ebragg> I know I can specify {"array.field":{$regex:'.*searchterm.*'}} and get that to work for a specific field, but I'm looking to do the same thing on anything contained within the array
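MongoDB has no `array.toString()` query operator, so the usual options are listing each field explicitly (as in the `array.field` regex above) or a slow `$where`. The "flatten and search" idea itself can be sketched client-side in plain Python; the documents and field names below are made up for illustration:

```python
import re

# Hypothetical client-side sketch: join every value in each subdocument
# into one string and run the regex over that, matching documents where
# ANY field inside the array contains the search term.

docs = [
    {"_id": 1, "array": [{"field": "red apple"}, {"note": "ripe"}]},
    {"_id": 2, "array": [{"field": "banana"}]},
]

def matches(doc, pattern):
    flat = " ".join(str(v) for sub in doc["array"] for v in sub.values())
    return re.search(pattern, flat) is not None

hits = [d["_id"] for d in docs if matches(d, "apple")]
print(hits)  # [1]
```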
[14:48:50] <tute666> hi. does anyone know what happened to analytica, the BI solution based on mongo?
[14:48:50] <dbasaurus> Greetings… I was wondering if anyone knew how sharding in MongoDB affects indexes. For example, when a chunk is moved to a new server does it have to rebuild the entire index?
[18:22:04] <bjori> eldub: the drivers ping the servers regularly and maintain a full overview of the replica set status
[18:22:20] <bjori> when the primary becomes unavailable the driver will detect that and kill its connection to it
[18:22:37] <bjori> then the replicaset members themselves will call for an election and vote on a new primary
[18:22:51] <bjori> which the driver will then soon discover, and route all writes to that server
[18:23:26] <bjori> eldub: do note: during the election process (typically a 15-60 second window) no writes will be allowed, and if you do try to write you'll get an exception thrown
[18:23:44] <bjori> there is no "write-queue during election", your application needs to handle that
[18:25:34] <eldub> bjori, is this something that's set up on the replica set side or from the client?
[18:26:17] <eldub> I know the replica set votes on a new primary, just wondering if there's any configuration that needs to be done for the drivers to ask 'who is primary'
[18:26:44] <cheeser> the drivers should discover master automatically given a list of servers to connect to.
[18:28:55] <bjori> eldub: when you want to get down and dirty, there are some intervals and timeouts that you can fine tune.. but in general there is no need
[18:35:22] <astropirate> I have a doc structure like this: { subDoc: {rating: 55 } } I have a collection of documents with this structure. how can I sort by the "subDoc.rating" field?
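Dot notation in the sort spec answers this: in the shell, db.coll.find().sort({"subDoc.rating": -1}) reaches into the embedded document. A plain-Python sketch of the same ordering, with made-up collection contents:

```python
# Sketch of sorting by an embedded field, mirroring a descending
# sort on "subDoc.rating".

docs = [
    {"subDoc": {"rating": 55}},
    {"subDoc": {"rating": 90}},
    {"subDoc": {"rating": 10}},
]

by_rating = sorted(docs, key=lambda d: d["subDoc"]["rating"], reverse=True)
print([d["subDoc"]["rating"] for d in by_rating])  # [90, 55, 10]
```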
[19:27:57] <dbasaurus> I am looking at the change log and I noticed that the balancer is spending 163,500 ms trying to contact the two shards. What could be causing this? Also, I noticed that the balancer is spending 165,000 ms replaying events that happened to these documents during the copy. Do these times seem reasonable?
[19:45:20] <edude03> Hey guys, I accidentally inserted a bunch of documents with the ID as a string instead of an ObjectId, is there a way to fix this?
[19:45:42] <cheeser> are those IDs referenced anywhere?
[19:46:24] <edude03> Yeah I have something like Posts hasmany (non embedded) Comments, and the comment IDs are screwed up
[19:46:44] <cheeser> so other documents are referencing those ID values, then.
[19:50:08] <cheeser> well, fixing those documents is easy enough.
[19:50:26] <cheeser> you'll have to delete those with the bum IDs and rewrite them using ObjectIDs
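The delete-and-rewrite fix cheeser describes can be sketched as a remapping pass. Everything below is hypothetical: collection and field names (`comments`, `posts`, `comment_ids`) are assumptions, and `make_object_id` stands in for `bson.ObjectId()`; with pymongo the loop bodies would become insert/delete/update calls:

```python
import uuid

def make_object_id():
    # stand-in for bson.ObjectId(); any unique 24-char hex id works here
    return uuid.uuid4().hex[:24]

comments = [{"_id": "bad-id-1", "text": "hi"}]
posts = [{"_id": 1, "comment_ids": ["bad-id-1"]}]

# 1) give every string-id comment a fresh id, remembering old -> new
id_map = {}
for comment in comments:
    if isinstance(comment["_id"], str) and comment["_id"] in {"bad-id-1"} or True:
        new_id = make_object_id()
        id_map[comment["_id"]] = new_id
        comment["_id"] = new_id

# 2) rewrite every reference in the parent documents
for post in posts:
    post["comment_ids"] = [id_map.get(c, c) for c in post["comment_ids"]]

print(posts[0]["comment_ids"] == [comments[0]["_id"]])  # True
```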
[20:08:28] <ccmonster> how do I reverse the query result set order, so that I am getting the last items put into the db back ... first?
[20:20:51] <joannac> If you're using normal ObjectIDs, they encode a timestamp, so you can sort({_id:-1})
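The first 4 bytes of an ObjectId are a big-endian Unix timestamp, which is why sorting on {_id: -1} returns documents newest-first. A small sketch pulling the timestamp out of a raw 24-char hex ObjectId (the id below is a made-up example):

```python
from datetime import datetime, timezone

def objectid_time(oid_hex):
    seconds = int(oid_hex[:8], 16)  # first 8 hex chars = 4 timestamp bytes
    return datetime.fromtimestamp(seconds, timezone.utc)

print(objectid_time("51f18c66e4b0c7e5b1a00000"))  # a mid-2013 timestamp
```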
[20:22:22] <OftenBob> I have a bit of a mongo shell vs. line protocol question: I'm trying to perform a map/reduce across "invoice" records and automatically include details from other collections in the result.
[20:23:08] <cheeser> m/r works on one collection only
[20:23:24] <OftenBob> I.e. for one value I want the title, reference ID, and actual ID of a "Job" (db.Jobs.findOne({_id: value.j}, {_id: 1, t: 1, r: 1})) to be returned with the invoice. Works when map/reducing from the interactive shell, not so much over pymongo.
[20:24:19] <OftenBob> So, no way to do that? Wanted to save some roundtrips. :(
[20:30:09] <OftenBob> Ah well, time for three additional roundtrips per map/reduced result. :/
[20:32:01] <flatr0ze> a QQ: I'm storing files in Mongo... the binary blob's being sent to the server via HTML5 file uploading mechanism. I'm then sending it back to the clients who want to see / download that file, they encode it to base64 using window.btoa (or a fallback js function for old browsers)... What would be better resource-wise: convert to base64 _before_ sending/storing the file or each time the client gets the file? I really don't
[20:32:44] <flatr0ze> Oh, and I'm using data-URI's for both images and download links.
[20:39:25] <OftenBob> If the data is most often used in a base64 format, storing it that way may be beneficial if processing power is less abundant than storage space.
[20:40:18] <OftenBob> If you're serving your BLOB data directly over HTTP (i.e. as a result of a user clicking a link or an AJAX call) then you can store it as gzip-compressed base64-encoded data and serve the gzip-compressed data directly.
[20:40:35] <OftenBob> (Which saves the additional overhead of your front-end web server dynamically recompressing the same thing over and over.)
[20:41:09] <cheeser> you just have to make sure the webserver doesn't try to regzip it.
[20:41:15] <OftenBob> Line compression will take more processing power than base64 anyway. ;P
[20:42:35] <OftenBob> You'll also need to handle the possibility of 'gzip' not being in the request's Accept-Encoding, and manually uncompress it before delivery for browsers that suck that badly.
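The store-once strategy discussed above can be sketched with the standard library: base64-encode the blob once, gzip the encoded text, and store the gzipped bytes so they can be served as-is to clients sending Accept-Encoding: gzip, with a decompress fallback for the rest. The blob below is a stand-in for an uploaded file:

```python
import base64
import gzip

blob = bytes(range(256)) * 64  # stand-in for an uploaded file

encoded = base64.b64encode(blob)  # what browsers consume via data-URIs
stored = gzip.compress(encoded)   # what actually goes in the database

def serve(accepts_gzip):
    # gzip-capable clients get the stored bytes untouched; others get
    # the decompressed base64 text
    return stored if accepts_gzip else gzip.decompress(stored)

assert base64.b64decode(serve(False)) == blob  # round-trips cleanly
print(len(blob), len(encoded), len(stored))
```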
[21:13:53] <MarkAValdez> Anyone know why you cannot create a scope over a field like SKU using mongoid that can search over purely numeric data values? It works if at least one char is a letter, even if you explicitly define the field as a String?
[21:37:30] <Ramone> hey all. I've got a big suite of automated tests for my app and they hit mongo pretty hard. My connection drops part way through, and I'm trying to figure out why. Can anyone give me an idea on how I might further debug?
[21:38:10] <Ramone> weirdly they run fine on macos but not ubuntu. but maybe that's just a coincidence, because I only tested with 2 of each
[21:38:31] <cheeser> run mongostat and see what's going on up to the point of disconnection
[23:42:44] <Ramone> hey, can anyone tell me why my # of connections would top out at ~800 and then drop to 1? ulimit is set to 25000. anything else to check?