PMXBOT Log file Viewer


#mongodb logs for Tuesday the 29th of October, 2013

[00:00:01] <livinded> I used whatever the init script did
[00:03:11] <joannac> Okay, back a step. Failed authentication when connecting from primary to secondary?
[00:04:06] <livinded> joannac: yes
[00:06:41] <livinded> joannac: and yes, they are all started with auth. It's in the config file
[00:10:08] <joannac> Then you need a keyFile
[00:10:20] <joannac> http://docs.mongodb.org/manual/core/inter-process-authentication/#replica-set-security
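For reference, the inter-process authentication joannac links to needs the same key file on every member, including the arbiter. A minimal sketch (the key path and set name are assumptions, in 2.4-era INI config style):

```ini
# /etc/mongodb.conf on each replica set member (and the arbiter)
keyFile = /etc/mongodb-keyfile   ; the same file everywhere, permissions 600
replSet = rs0
auth = true
```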
[00:15:56] <livinded> joannac: cool. I disabled auth and it's working fine for the second node now. But the arbiter is still unknown and the last heartbeat is "still initializing"
[00:19:02] <joannac> How did you start the arbiter? How did you add it?
[00:20:07] <livinded> I started it like any other mongo instance and then did rs.addArb(<host>)
[00:20:12] <livinded> from the primary
[00:21:29] <livinded> joannac: I basically followed http://docs.mongodb.org/manual/tutorial/add-replica-set-arbiter/
[00:23:29] <joannac> What do the mongod logs say about it?
[00:24:45] <livinded> joannac: I don't see anything in there
[00:26:42] <livinded> joannac: ah, fixed it. Networking issue
[00:26:43] <livinded> thanks
[00:31:18] <joannac> cool
[02:18:30] <LuckySMack> how does mongo handle it when a document has many records and pulling records is more than the max allowed memory?
[02:22:10] <favadi> Hi, from faq
[02:22:11] <favadi> Yes. MongoDB keeps all of the most recently used data in RAM. If you have created indexes for your queries and your working data set fits in RAM, MongoDB serves all queries from memory.
[02:22:41] <favadi> So I wonder, if my data set is small enough, all data files will be served from RAM, right?
[02:23:25] <favadi> Then there would be no point in running mongodb in tmpfs like http://edgystuff.tumblr.com/post/49304254688/how-to-use-mongodb-as-a-pure-in-memory-db-redis-style
[02:52:17] <safani> hello all
[02:52:26] <safani> i have a sorting question
[02:52:37] <safani> I am trying to sort documents by an embedded document value
[02:52:51] <safani> for example {title:"title", name: "name", embed: [{id:4,sort:1},{id:5,sort:2},{id:6,sort:3},{id:7,sort:4}]} I want to sort based on embed sort number
[02:53:44] <safani> anyone home
[02:58:24] <safani> someone anyone just a word on this
[02:58:41] <safani> my head is hurting
[03:08:50] <joannac> Um, just sort on it?
[03:09:00] <safani> it doesn't work
[03:09:06] <joannac> db.coll.find().sort({"embed.sort":1})
[03:09:23] <joannac> Wait
[03:09:30] <joannac> What do you want to see?
[03:09:38] <joannac> What's your desired result?
[03:09:40] <safani> Product.find({'categories.id': req.params.id}).sort("categories.$.sort")
[03:10:16] <safani> a product can have many different categories.. each category can have a sort order
[03:10:39] <joannac> I suggest you pastebin some sample documents, and what your desired output is
[03:10:53] <safani> ok hold on one sec
[03:11:22] <jackh> Hi, Derick, joannac, mstearn, Number6: I'm from IBM. Since IBM Power servers are big endian, we made some modifications to make mongodb work on our servers. Lots of customers now want our trial source code supported by the mongodb community, so how do we get it merged into the mongodb mainstream?
[03:13:13] <safani> joannac: http://pastebin.com/kxXD9Aj6
[03:14:33] <safani> i want to be able to return products in a category and then sort them based on the categories.sort number determined by the categories.id
[03:20:22] <safani> joannac there are several subdocs in the categories key
[03:20:37] <safani> i want to sort by only one each time
[03:20:42] <safani> not all of them
[03:20:45] <joannac> What does sort("categories.$.sort") mean?
[03:20:53] <safani> i don't know....
[03:21:07] <safani> that's how you set a value so i figured..
[03:21:12] <joannac> When I said actual documents, I mean actual documents. Not a schema
[03:22:39] <safani> here's a sample document http://pastebin.com/WZtDEWct
[03:23:08] <joannac> And what do you want the result to be?
[03:23:41] <safani> i cannot sort on a specific categories.id using categories.sort
[03:24:15] <safani> I want to go into a specific category and pass that id as the parent. I then find the children of the category and all products contained
[03:25:10] <safani> i want to sort the products based on one of the subdocuments in "categories": the one whose categories.id matches the current parent category id i am in
[03:35:30] <safani> any thoughts
[03:36:41] <safani> i figured it out, im just going to sort it with js after i get my results from mongo
[03:37:06] <safani> let me know if you know a way to sort the on the subdoc value!
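The client-side sort safani settles on can be sketched in Python. The field names follow the sample document in the conversation; the helper function and product data are hypothetical:

```python
def category_sort_key(product, parent_id):
    """Return the sort value of the category subdocument matching parent_id."""
    for cat in product.get("categories", []):
        if cat["id"] == parent_id:
            return cat["sort"]
    return float("inf")  # products missing the category go last

# products as they might come back from a find() on categories.id
products = [
    {"title": "second", "categories": [{"id": 4, "sort": 2}]},
    {"title": "first", "categories": [{"id": 4, "sort": 1}, {"id": 5, "sort": 9}]},
]
ordered = sorted(products, key=lambda p: category_sort_key(p, parent_id=4))
# [p["title"] for p in ordered] == ["first", "second"]
```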
[04:19:03] <tjj> Has anyone had any good / bad experiences with TokuMX? Seems to add a lot of good features to mongo
[08:32:11] <[AD]Turbo> hola
[09:30:57] <sinclair|work> is anyone here familiar with redis?
[09:31:25] <Zelest> that's like joining #mysql and ask if anyone has heard of PostgreSQL :)
[09:31:40] <sinclair|work> well, there doesn't seem to be a redis channel on here
[09:31:51] <sinclair|work> also, most people who deal with mongo might know of redis
[09:31:59] <sinclair|work> but yeah
[09:32:05] <Zelest> that doesn't change my statement though. :)
[09:32:08] <sinclair|work> Zelest: do you know about sqlite?
[09:32:13] <Zelest> I do not. :/
[09:33:01] <sinclair|work> Zelest: and i know RAM
[09:33:43] <sinclair|work> Zelest: are replication sets manadatory when working with Mongo?
[09:34:04] <Zelest> nope
[09:41:32] <Nodex> sinclair|work what's wrogn with the official redis channel on freenode called "#redis" ?
[09:41:59] <sinclair|work> Nodex: oh
[09:42:04] <sinclair|work> Nodex: yes, i see it
[09:42:23] <sinclair|work> Nodex: took a while for the user list to appeat
[09:42:25] <sinclair|work> *appear
[09:42:37] <Nodex> :)
[09:42:42] <Zelest> Nodex, morning :)
[09:42:46] <sinclair|work> Nodex: how goes the node?
[09:43:35] <Nodex> morning and not bad :)
[09:46:04] <Zelest> also, the daylight saving bug seems gone :D
[09:46:08] <Zelest> my TTL index worked this time :D
[09:46:48] <Nodex> nice
[09:46:59] <sinclair|work> Nodex: done much with web sockets?
[09:47:21] <Nodex> yeh
[09:47:34] <sinclair|work> Nodex: scaling them?
[09:47:42] <Nodex> the "node" in my name doesn't mean I work with nodejs a lot fyi
[09:47:56] <sinclair|work> Nodex: fair enough, but ive seen you in nodejs
[09:48:12] <Nodex> not since I was banned for life ;)
[09:49:24] <kali> "NotNodeX" ?
[09:49:32] <Zelest> I think someone should add websockets to linuxjs :(
[09:49:38] <Nodex> f*** the idiots in that chan, they're all arrogant hipsters
[09:49:44] <Number6> ¬Nodex :-P
[09:49:50] <Nodex> !Nodex
[09:50:05] <Nodex> ./[^Nodex]/$
[09:50:17] <Nodex> it was capitalised, not sure how that changed
[09:50:24] <Nodex> I did reinstall my BNC
[09:50:27] <Zelest> hehe
[09:50:55] <Zelest> :D
[09:51:46] <sinclair|work> NodeX: haha
[09:51:47] <NodeX> gotta investigate a mongodb problem :/
[09:51:56] <sinclair|work> NodeX: i was banned too
[09:52:11] <NodeX> a weird installation which won't start properly is somehow causing a segmentation fault in php
[09:52:13] <sinclair|work> NodeX: i didn't even do anything
[09:52:49] <NodeX> sinclair|work : I told an OP to die slowly because she PM'd me without asking, after going mad at me for telling someone not to PM me without asking because it's rude
[09:53:06] <sinclair|work> NodeX: Nexxy?
[09:53:10] <NodeX> yeh
[09:53:49] <sinclair|work> NodeX: i told bnoordhuis to not PM me, and made a snarky remark, BANNED FOR LIFE!
[09:53:59] <NodeX> it's not a very helpful channel anyway, people just want to show off their l337 js skills and prove they're better than you
[09:54:16] <sinclair|work> well, there were a couple of people in there i associated with
[09:54:23] <sinclair|work> not the ops,
[09:54:43] <NodeX> tbh I quite like Node, it mixes nicely with mongo, redis, solr
[09:54:58] <sinclair|work> NodeX: im using the ws socket library
[09:55:02] <NodeX> made a pretty sweet API from it for suggestive lookups
[09:55:06] <sinclair|work> NodeX: just writing a redis backplane thing
[09:55:25] <NodeX> I use socket.io mainly in my app for some realtime stuff
[09:55:26] <sinclair|work> im a bit new at redis, but i think i found the answer
[09:55:35] <NodeX> redis is pretty cool
[09:55:51] <NodeX> somewhere between memcached and mongodb
[09:55:53] <sinclair|work> NodeX: yeah, the pub/sub stuff is pretty slick
[09:56:04] <NodeX> pretty fast, I use it as a cache
[09:56:16] <sinclair|work> nice
[09:56:43] <sinclair|work> i might write a .net clone of it, seeing as they don't want to release an official windows version of it
[09:57:32] <sinclair|work> NodeX: actually, ive been working out webrtc of late
[09:57:45] <sinclair|work> NodeX: peer to peer in the browser
[09:57:59] <NodeX> webrtc looks promising I must say
[09:58:54] <sinclair|work> NodeX: its a bit new, and the learning curve is a little steep
[09:59:06] <sinclair|work> and the browser implementations don't currently agree
[09:59:16] <sinclair|work> but if you are targeting a single client, it's awesome
[10:28:33] <NodeX> sinclair|work : i don't have the luxury of targeting a single client unfortunately
[10:28:42] <sinclair|work> NodeX: yeah
[10:28:53] <sinclair|work> still, would be nice to fluff around on a demo
[10:29:02] <NodeX> I do however have the luxury of not supporting IE at all :D
[10:33:04] <Zelest> no one should support IE tbh
[10:33:21] <NodeX> ++
[10:33:23] <Zelest> the browser should support the web.. the web shouldn't support the browsers, it should follow the standards.
[10:33:40] <NodeX> correctamundo
[10:37:15] <sinclair|work> NodeX: oh c'mon, IE has webgl now
[10:37:22] <sinclair|work> NodeX: doesn't have webrtc tho
[10:37:30] <sinclair|work> but webgl is ALL g
[10:38:30] <NodeX> when IE gets more on board with standards it will get supported
[10:38:52] <NodeX> if more developers acted this way then they would have no choice but to be more standardised
[10:39:23] <NodeX> + it's not like the old days where it was a choice between IE and Firefox - there is now a lot more choice and reason to NOT use IE at all
[10:41:12] <Zelest> just take something as "simple" as a regular AJAX call in IE..
[10:41:21] <Zelest> you have to hack your JS to support IE :S
[10:41:40] <Zelest> and if the JS fails, IE just says "error" and gives you no debug information whatsoever.. -_-
[10:41:43] <Zelest> </3 IE
[10:43:17] <NodeX> Microsoft have a certain pre 2008 arrogance about them where they think they rule the software world
[10:43:53] <NodeX> it shows with Winblows 8 where they think they can just change core things (that make winblows usable) and people -must- put up with it
[10:46:28] <Zelest> well, to be fair, so did Apple do long before they became mainstream :)
[10:46:43] <Zelest> but then again, people liked it and embraced it.. that's not the case for most Microsoft changes.
[10:46:54] <Zelest> they just change things for the sake of being different.. not improving their products.
[10:47:16] <Zelest> Windows is good for gaming, that's it.
[10:47:28] <Zelest> If you're not a gamer, you have no reason whatsoever to run Windows.
[10:47:39] <kali> Zelest: mmmm... office ? :P
[10:47:48] <Zelest> iworks? libreoffice?
[10:47:50] <Zelest> vim? ;)
[10:48:40] <Zelest> also, isn't that even available on the web? like, web-based office
[10:48:59] <kali> Zelest: excel has more than a few features that no other spreadsheet competes with
[10:49:06] <kali> Zelest: and i'm not even talking about performance
[10:49:16] <kali> Zelest: and don't believe i'm a m$ fan :)
[10:49:39] <Zelest> yeah, sure it does
[10:49:51] <NodeX> excel is a pretty cool piece of kit
[10:50:01] <Zelest> wow
[10:50:07] <Zelest> google docs have improved a LOT
[10:51:12] <Zelest> just checking "word" though
[10:53:09] <kali> yeah, the "word" ersatz is ok
[10:53:19] <kali> the spreadsheet is something else though
[10:53:56] <kali> you should see what our accountant makes excel do, it's frightening
[10:55:25] <Zelest> hehe
[10:55:52] <Zelest> all I do is make "forms" where I can fill out shit and have it sum and calc stuff for me..
[10:55:58] <Zelest> and the online table one seems to handle that :)
[11:13:08] <sooraj> hey
[11:13:52] <sooraj> anyone knows how to recover mongodb data if repair fails ?
[13:06:10] <angasulino> http://pastebin.ca/2472476 <-- is this the right way to do a @Document if I want to access the id?
[13:07:01] <cheeser> is which part right? the getter?
[13:09:24] <angasulino> cheeser, getter and setter for the id
[13:09:32] <angasulino> I'm getting the data right, except for the id
[13:09:57] <cheeser> that's how you do it with morphia (with which i'm familiar). i don't see how spring-data would be any different
[13:10:30] <angasulino> cheeser, thanks, I'll try a couple more things, I just wanted to check if there was something obviously wrong
[13:11:13] <cheeser> not that I can see. you might have a timing issue in terms of when you're trying to get it vs when spring data sets that.
[13:59:04] <angasulino> solved, I just avoided specifying the collection, since I wasn't specifying it quite right everywhere, and the default is fine.
[14:12:48] <ebragg> good morning folks
[14:13:24] <eldub> Is there a script that I can run to test my replicaset? something that will create/write to a db and I can test node failures
[14:13:41] <ebragg> I'm looking for a way to query an Array of objects for any of the sub-fields. Is there a way to have MongoDB flatten the Array into a string via something to the effect of: array.toString()
[14:15:01] <ebragg> I know I can specify {"array.field":{$regex:'.*searchterm.*'}} and get that to work for a specific field, but I'm looking to do the same thing on anything contained within the array
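One option, absent a server-side operator for matching any field of the subdocuments, is a client-side scan after fetching candidates. A sketch in Python; the document shape and helper name are assumptions:

```python
import re

def matches_any_subfield(doc, array_field, pattern):
    """True if any value of any subdocument in doc[array_field] matches pattern."""
    rx = re.compile(pattern)
    return any(
        rx.search(str(value)) is not None
        for sub in doc.get(array_field, [])
        for value in sub.values()
    )

doc = {"array": [{"field": "foo"}, {"other": "has searchterm inside"}]}
matches_any_subfield(doc, "array", "searchterm")  # True
```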
[14:48:50] <tute666> hi. does anyone know what happened to analytica, the BI solution based on mongo?
[14:55:06] <dbasaurus> Greetings… I was wondering if anyone knew how sharding in MongoDB affects indexes. For example, when a chunk is moved to a new server, does it have to rebuild the entire index?
[15:06:59] <eldub> I'm trying
[15:07:28] <eldub> I'm trying to run the mongo-perf tool but am not seeing any results.
[15:07:58] <eldub> the command I'm running: python runner.py --nolaunch --port 27017
[15:15:09] <kali> dbasaurus: each chunk has its index
[15:15:28] <kali> dbasaurus: i think the chunk is copied without the indexes, and the indexes are built on the receiving node
[15:27:53] <dbasaurus> thanks kali
[15:31:04] <sooraj> hey
[15:31:09] <sooraj> i am trying to do a mongodump
[15:31:19] <sooraj> but it shows segmentation fault (core dumped)
[15:31:23] <sooraj> any advice ?
[18:14:05] <eldub> question
[18:14:11] <eldub> I have a 3 node replica set going.
[18:14:54] <eldub> Should I be setting up a VIP somewhere? How will the client know if node A goes down to start writing to node B?
[18:20:35] <bjori> eldub: the drivers do that
[18:21:01] <bjori> eldub: VIP (or any sort of load balancing) is very much discouraged as it can confuse the drivers
[18:21:19] <bjori> eldub: when you connect to the replicaset, you specify multiple servers in the connection string as a "seed list"
[18:21:41] <bjori> eldub: the drivers will then go on a hunt for any other member of that replicaset and maintain a connection to those servers too
[18:21:46] <bjori> (using replicaset discovery)
[18:22:04] <bjori> eldub: then they ping the servers regularly and maintain a full overview over the replicaset status
[18:22:20] <bjori> when the primary becomes unavailable the driver will detect that and kill its connection to it
[18:22:37] <bjori> then the replicaset members themselves will call for an election and vote on a new primary
[18:22:51] <bjori> which the driver will then soon discover, and route all writes to that server
[18:23:26] <bjori> eldub: do note; during the election process (15-60 seconds window, typically) no writes will be allowed, and if you do try to write you'll get an exception thrown
[18:23:44] <bjori> there is no "write-queue during election", your application needs to handle that
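bjori's "your application needs to handle that" can be sketched as a generic retry wrapper. This is a sketch only: the write callable, retry counts, and the broad Exception catch are placeholders rather than pymongo API (pymongo raises AutoReconnect in this situation):

```python
import time

def write_with_retry(write, retries=10, delay=2.0, sleep=time.sleep):
    """Retry a write that fails while the replica set is electing a new primary."""
    for attempt in range(retries):
        try:
            return write()
        except Exception:  # in pymongo this would be AutoReconnect
            if attempt == retries - 1:
                raise
            sleep(delay)  # back off and let the election finish

# simulate an election window: two failures, then the new primary accepts the write
attempts = []
def flaky_insert():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("not master: no primary available")
    return "ok"

result = write_with_retry(flaky_insert, sleep=lambda s: None)  # "ok" after 3 attempts
```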
[18:25:34] <eldub> bjori is this something that's setup on the replica set side or from the client
[18:26:17] <eldub> I know the replica set votes on a new primary, just wondering if there's any configuration that needs to be done for the drivers to ask 'who is primary'
[18:26:44] <cheeser> the drivers should discover master automatically given a list of servers to connect o.
[18:26:47] <cheeser> to
[18:26:55] <cheeser> the java, for sure, will autoselect the master.
[18:27:10] <cheeser> or you can just run mongos in front of it all and talk to just it.
[18:27:32] <bjori> eldub: there is no configuration...
[18:27:48] <bjori> eldub: all you have to do is to pass the driver a list of servers
[18:27:59] <bjori> eldub: depending on the driver, you may have to provide the replicaset name too
[18:28:06] <bjori> eldub: everything else just works
[18:28:09] <eldub> ok
[18:28:44] <cheeser> magic!
[18:28:54] <eldub> :) ty
[18:28:55] <bjori> eldub: when you want to get down and dirty, there are some intervals and timeouts that you can fine tune.. but in general there is no need
[18:28:58] <bjori> :)
[18:29:21] <eldub> I am new to this so I didn't really read up on the drivers part
[18:29:26] <eldub> It makes sens now
[18:29:29] <eldub> sense*
[18:29:34] <bjori> which language are you planning on using?
[18:29:48] <eldub> server side is set up. let the driver know about the list of servers, it does the query on it -- server responds -- client adjusts.
[18:30:05] <eldub> bjori python I believe -- but don't quote me. I'm just setting up the replicaset
[18:30:28] <bjori> :)
[18:30:41] <eldub> Which was a lot easier than I had expected.
[18:32:14] <bjori> excellent :)
[18:35:22] <astropirate> I have a doc structure like this: { subDoc: {rating: 55 } } I have a collection of documents with this structure. how can I sort by the "subDoc.rating" field?
[18:36:00] <cheeser> .sort({ "subDoc.rating" : 1 })
[18:36:15] <astropirate> ohh :p thanks cheeser
[18:36:34] <cheeser> np
[19:27:57] <dbasaurus> I am looking at the change log and I noticed that the balancer is spending 163,500 ms trying to contact the two shards. What could be causing this? Also, I noticed that the balancer is spending 165,000 ms replaying events that happened to these documents during the copy. Do these times seem reasonable?
[19:45:20] <edude03> Hey guys, I accidentally inserted a bunch of documents with the ID as a string instead of an ObjectId, is there a way to fix this?
[19:45:42] <cheeser> are those IDs referenced anywhere?
[19:46:24] <edude03> Yeah I have something like Posts hasmany (non embedded) Comments, and the comment IDs are screwed up
[19:46:44] <cheeser> so other documents are referencing those ID values, then.
[19:48:01] <edude03> right
[19:50:08] <cheeser> well, fixing those documents is easy enough.
[19:50:26] <cheeser> you'll have to delete those with the bum IDs and rewrite them using ObjectIDs
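cheeser's delete-and-rewrite fix can be sketched as an in-memory transformation. The document shapes are assumptions based on the conversation, and the id generator is a stand-in for bson.ObjectId:

```python
import os

def new_object_id():
    """Stand-in for bson.ObjectId(): 12 random bytes as 24 hex characters."""
    return os.urandom(12).hex()

def remap_comment_ids(comments, posts):
    """Give each comment a fresh id and update the ids the posts reference."""
    id_map = {c["_id"]: new_object_id() for c in comments}
    for c in comments:
        c["_id"] = id_map[c["_id"]]
    for p in posts:
        p["comment_ids"] = [id_map.get(cid, cid) for cid in p["comment_ids"]]

comments = [{"_id": "accidental-string-id", "text": "hi"}]
posts = [{"title": "post", "comment_ids": ["accidental-string-id"]}]
remap_comment_ids(comments, posts)
# posts[0]["comment_ids"][0] == comments[0]["_id"], now 24 hex chars
```

Against a live database, each remapped comment would be re-inserted under its new _id and the old document deleted, since _id itself cannot be updated in place.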
[20:08:28] <ccmonster> how do I reverse the query result set order, so that I am getting the last items put into the db back ... first?
[20:20:51] <joannac> If you're using normal ObjectIDs, they encode a timestamp, so you can .sort({_id:-1})
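joannac's suggestion works because the first 4 bytes of an ObjectId are a big-endian Unix timestamp, so _id order tracks insertion time. A sketch of reading it back in Python (the sample ObjectId hex is made up):

```python
from datetime import datetime, timezone

def objectid_timestamp(oid_hex):
    """The first 4 bytes of an ObjectId are seconds since the Unix epoch."""
    return datetime.fromtimestamp(int(oid_hex[:8], 16), tz=timezone.utc)

# a made-up ObjectId whose timestamp lands in late October 2013
objectid_timestamp("5270a0000000000000000000")
```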
[20:22:22] <OftenBob> I have a bit of a mongo shell vs. line protocol question: I'm trying to perform a map/reduce across "invoice" records and automatically include details from other collections in the result.
[20:23:08] <cheeser> m/r works on one collection only
[20:23:24] <OftenBob> I.e. for one value I want the title, reference ID, and actual ID of a "Job" (db.Jobs.findOne({_id: value.j}, {_id: 1, t: 1, r: 1})) to be returned with the invoice. Works when map/reducing from the interactive shell, not so much over pymongo.
[20:24:19] <OftenBob> So, no way to do that? Wanted to save some roundtrips. :(
[20:24:34] <cheeser> not that i'm aware of, no.
[20:25:16] <joannac> How do you do that in the mongo shell?
[20:26:21] <OftenBob> Testing the map function against records by hand in the shell.
[20:26:30] <OftenBob> var mapFunc = function() { ... }
[20:26:36] <OftenBob> mapFunc.apply(somerecord)
[20:30:09] <OftenBob> Ah well, time for three additional roundtrips per map/reduced result. :/
[20:32:01] <flatr0ze> a QQ: I'm storing files in Mongo... the binary blob's being sent to the server via HTML5 file uploading mechanism. I'm then sending it back to the clients who want to see / download that file, they encode it to base64 using window.btoa (or a fallback js function for old browsers)... What would be better resource-wise: convert to base64 _before_ sending/storing the file or each time the client gets the file? I really don't
[20:32:44] <flatr0ze> Oh, and I'm using data-URI's for both images and download links.
[20:39:25] <OftenBob> If the data is most often used in a base64 format, storing it that way may be beneficial if processing power is less abundant than storage space.
[20:40:18] <OftenBob> If you're serving your BLOB data directly over HTTP (i.e. as a result of a user clicking a link or an AJAX call) then you can store it as gzip-compressed base64-encoded and serve the gzip-compressed data directly.
[20:40:35] <OftenBob> (Which saves the additional overhead of your front-end web server dynamically recompressing the same thing over and over.)
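OftenBob's store-it-pre-compressed idea, sketched with the Python standard library (the function names are mine):

```python
import base64
import gzip

def encode_for_storage(blob: bytes) -> bytes:
    """base64-encode once at upload time, then gzip the result for storage."""
    return gzip.compress(base64.b64encode(blob))

def decode_from_storage(stored: bytes) -> bytes:
    return base64.b64decode(gzip.decompress(stored))

payload = b"\x89PNG fake image bytes" * 100
stored = encode_for_storage(payload)
# `stored` can be sent as-is with Content-Encoding: gzip;
# clients that don't accept gzip need decode_from_storage() server-side first
```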
[20:40:51] <cheeser> +1
[20:41:09] <cheeser> you just have to make sure the webserver doesn't try to regzip it.
[20:41:15] <OftenBob> Line compression will take more processing power than base64 anyway. ;P
[20:42:35] <OftenBob> You'll also need to handle the possibility of 'gzip' not being in the request's Accept-Encoding and manually uncompressing it before delivery for browsers that suck that badly.
[21:13:53] <MarkAValdez> Anyone know why you cannot create a scope over a field like SKU using mongoid that can search over purely numeric data values? It works if at least one char is a letter, even if you define the field to be explicitly a String?
[21:14:06] <MarkAValdez> In Rails ^ ?
[21:37:30] <Ramone> hey all… I've got a big suite of automated tests for my app and they hit mongo pretty hard… my connection drops part way through, and I'm trying to figure out why… can anyone give me an idea on how I might further debug?
[21:38:10] <Ramone> weirdly they run fine on macos but not ubuntu… but maybe that's just a coincidence, because I only tested with 2 of each
[21:38:31] <cheeser> run mongostat and see what's going on up to the point of disconnection
[21:39:22] <Ramone> thanks… good idea
[21:40:07] <dbasaurus> I just ran db.runCommand( { serverStatus: 1, workingSet: 1 } ), but I am not sure how to read these results…
[21:40:30] <dbasaurus> anyone know what measurement is used?
[21:40:35] <dbasaurus> "workingSet" : {
[21:40:35] <dbasaurus> "note" : "thisIsAnEstimate",
[21:40:35] <dbasaurus> "pagesInMemory" : 147285,
[21:40:35] <dbasaurus> "computationTimeMicros" : 39672,
[21:40:35] <dbasaurus> "overSeconds" : 681
[21:40:35] <dbasaurus> },
[21:40:42] <cheeser> use a pastebin, please
[21:43:59] <dbasaurus> <script src="http://pastebin.com/embed_js.php?i=w1D0tZzx"></script>
[21:44:21] <joannac> dbasaurus: http://docs.mongodb.org/manual/reference/command/serverStatus/#server-status-workingset
[21:44:26] <dbasaurus> http://pastebin.com/raw.php?i=w1D0tZzx
[21:45:13] <dbasaurus> Is this > "pagesInMemory" : 147285, the actual number of pages or bytes?
[21:45:22] <joannac> dbasaurus: read the link I gave you
[21:45:30] <joannac> It's in pages.
[21:47:02] <dbasaurus> thanks… didn't see that
[23:42:44] <Ramone> hey can anyone tell me why my # of connections would top out at ~800 and then drop to 1? ulimit is set to 25000… anything else to check?