[04:34:49] <xpen> we have three nodes as a replica set without an arbiter
[04:35:35] <xpen> one node's status now is startup2
[04:36:34] <xpen> so, i'm wondering if that's because we don't have an arbiter and that startup2 node can not determine which one is primary and which one is secondary?
[09:23:58] <_Heisenberg_> Hi! I wanted to remove mongod from the autostart services on ubuntu 12.04 via sudo update-rc.d -f mongodb remove, but the service keeps on running after reboot.
[09:57:26] <rspijker> _Heisenberg_: Is the service called mongodb or mongod?
[10:05:27] <_Heisenberg_> rspijker: the script in /etc/init.d is called mongodb
[10:06:11] <rspijker> okay, I'm not that familiar with ubuntu. Why not just remove the script?
[10:06:29] <rspijker> is it an actual script or just a symlink?
[10:06:33] <_Heisenberg_> rspijker: I'm not sure if I will regret it ^^
[10:06:55] <rspijker> then move it elsewhere for safekeeping...
[10:07:12] <rspijker> it's not like your system is going to break because you moved the mongodb startup script :P
[10:07:18] <_Heisenberg_> rspijker: it's a symlink to /lib/init/upstart-job
[10:07:50] <_Heisenberg_> ok, I'll move it and see what happens ^^
[10:10:50] <_Heisenberg_> can't believe that bitch is still running
[10:13:06] <_Heisenberg_> ok there is another one at /etc/init/ i'll move it too...
[11:29:18] <pl2> Hello, I restarted my mongod process to map to a different drive, and it's taking ages to start. Does anybody know what is going on behind the scenes when you use: mongod --dbpath /example/data/path ?
[13:39:02] <bobinator60> is there a standard/simple way to generate data for map clusters from a 2d-indexed field? https://developers.google.com/maps/articles/toomanymarkers/markerclustererfull.png
[13:41:56] <ragusource> I'm having some trouble with setting up auth on mongodb, I added a user, enabled auth in the config and now i can't log in
[13:58:11] <cheeser> 3 more issues to fix on this release of morphia: https://github.com/mongodb/morphia/issues?milestone=2&state=open (#464 is in code review now)
[14:00:52] <smremde> i'm trying to work out why a 2dsphere query is taking so long... anyone around?
[14:04:27] <smremde> but, like i said, the index fits in memory, and i'm expecting 10-20 results - so it's only 10-20 seeks on disk, and this thing has been running over 1 hour
[14:04:35] <remonvv> smremde: Fits in memory != is in memory. It'll still have to page the data in. Not that it has to have the entire index in memory to satisfy a query. What do you call "long"?
[14:04:37] <robothands> how to undo rs.initiate please? I ran this on the primary, and then accidentally on the secondary, so now I have 2 primaries
[14:05:15] <cheeser> robothands: not sure you can. i think you'll have to rebuild that secondary.
[14:05:42] <remonvv> smremde: Hours? In that case it's unlikely it's hitting an index at all
[14:06:50] <remonvv> smremde: Did you try the same query in a smaller test set?
[14:07:11] <cheeser> smremde: run explain on your query
[14:07:27] <remonvv> smremde: Hinting to an index that might not be compatible with the query plan needed for your query results in no index use at all.
[14:07:39] <remonvv> cheeser: Doesn't work. Explain performs the query so that'll take hours ;)
[14:07:57] <remonvv> It should have a mode where it just publishes the query plan, really.
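A minimal mongo shell sketch of the behaviour being discussed ("places" and "loc" are hypothetical names). On 2.4-era servers explain() executes the query, so adding limit() bounds the work while still showing whether an index is hit:

    // explain() runs the query plan for real; limit() keeps it cheap
    db.places.find({
        loc: { $near: { $geometry: { type: "Point", coordinates: [-73.97, 40.77] } } }
    }).limit(10).explain()
    // In 2.4-era output, "cursor" names the cursor type; "BasicCursor"
    // means no index was used.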
[14:10:03] <drag> remonvv, so if I wanted to keep a log of all such operations, then I need to do it in my own code, correct? Management just wants a log of all commits to our databases just in case something happens.
[14:10:29] <remonvv> drag: I...don't know where to start with that one. Management as in non-technical?
[14:10:50] <remonvv> drag: Every time someone asks a question like that it's usually followed by a bad idea ;)
[14:11:00] <cheeser> and what do they expect to do with that information?
[14:12:04] <remonvv> drag: To answer your question first; yes you'd have to intercept every write and store it somehow. Of course that just moves the problem to that storage solution but..yeah.
[14:12:51] <remonvv> drag: There are *many* problems with something like that. It's much better/easier to use somewhat more standard durability solutions.
[14:13:42] <smremde> could you take a look at http://pastebin.com/dBzfyGWe ?
[14:13:52] <bobinator60> does anyone have a suggestion of how to use a 2D index to generate map clusters, like this: https://developers.google.com/maps/articles/toomanymarkers/markerclustererfull.png
[14:14:07] <drag> remonvv, right now, we are using MySQL with Spring and it has an @Audited annotation that writes operations to an _AUD table. Non-technical management wants to keep this, so I was asked to look into whether or not it was possible and/or how much effort we would need to implement it in our change to MongoDB.
[14:16:34] <remonvv> drag: No problem. Go for replica sets and/or backups. There are an excessive number of problems with writing to B what you've written to A.
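For illustration, a sketch of what application-level write auditing looks like with the Node.js driver; auditedInsert and the audit_log collection are hypothetical names, not a MongoDB feature, and the two writes are not atomic, which is one of the problems remonvv is alluding to:

    // Wrap every write and copy it to a separate audit collection.
    function auditedInsert(db, collName, doc, callback) {
        db.collection(collName).insert(doc, function (err, result) {
            if (err) return callback(err);
            // Second, non-atomic write: if this fails, the audit trail
            // silently diverges from the data.
            db.collection('audit_log').insert(
                { coll: collName, op: 'insert', doc: doc, ts: new Date() },
                callback
            );
        });
    }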
[14:16:51] <remonvv> smremde: It's cool. Happens to the best of us ;)
[14:17:07] <smremde> i have a feeling my index will be much bigger now...
[14:17:25] <remonvv> smremde: Might be, is it sparse?
[14:17:44] <robothands> so, I ran rs.initiate on the secondary node, I've removed mongo entirely and reinstalled, but when I connect to it after installing again, I get "mongod started without --replSet yet 1 document are present in local.system.replset" how do I remove this configuration?
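One common fix for that error (a sketch, and only safe if the node holds no data you need, since it also drops the oplog): start mongod without --replSet, drop the local database, then restart with the desired --replSet name and re-run rs.initiate():

    // With mongod running WITHOUT --replSet:
    use local
    db.dropDatabase()   // discards the stale replica set config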
[14:29:50] <remonvv> smremde: I don't think there is any such functionality to tune geo indexes. What are you trying to do and what problem do you think it solves?
[14:32:34] <smremde> well, s2cells at the maximum depth are <1cm in size. i don't need that level of accuracy. decreasing the maximum depth, and lowering the maximum cell count, would probably make my index more efficient :)
[14:34:22] <remonvv> smremde: Okay but I don't think 2dsphere indexes support any such tweaking. I've heard references to Google S2 but I'm not even sure if the current 2.4.x codebase uses that.
[14:35:38] <remonvv> smremde: Just checked the code, it does use S2.
[14:36:31] <remonvv> smremde: Try with a smaller set to see if it hits the index at all (and fix it if it does not). If the index works but it is too big or your queries too slow then start looking at optimization.
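A shell sketch of that advice (collection and field names hypothetical): build the same 2dsphere index on a small sample and confirm explain() reports an index-backed cursor before importing the full set:

    db.sample.ensureIndex({ loc: "2dsphere" })
    db.sample.find({
        loc: { $geoWithin: { $centerSphere: [[-73.97, 40.77], 0.01] } }
    }).explain()
    // "cursor" should not be "BasicCursor" if the index is in play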
[14:43:56] <smremde> remonvv: well I have to rebuild the database now, as I actually messed up the geojson formatting :) i'll start the import tonight
[14:48:19] <remonvv> smremde: Okay ;) Good luck. I'd still suggest a partial set for all testing.
[14:59:50] <smremde> remonvv: where is the fun in that? xD
[15:01:22] <remonvv> smremde: Your definition of fun needs further review.
[15:13:26] <robothands> i'm in a bit of a hole with replica sets...could someone have a look at the errors here and let me know what you think
[15:13:59] <robothands> primary is fine, secondary won't seem to join and I can't run any commands (like adding user) on the secondary
[15:14:20] <robothands> checked the error and it says it's because I need slaveOk....but if it isn't in the cluster, it shouldn't matter?!?
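For reference, the 2.4-era shell command that error is pointing at (only relevant if the node really is acting as a secondary):

    rs.slaveOk()                  // allow reads from a secondary in this shell
    db.getMongo().setSlaveOk()    // equivalent per-connection form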
[15:21:01] <Harageth_> Hey so I was just re-reading stuff on locking for mongo and I am slightly confused because it seems excessive that a write operation would put a lock on an entire mongod instance. Did I read that correctly? Or does it just lock a single document?
[15:37:19] <gee_totes> what's the best way to generate a unique id on a document that's suitable to be used in a url?
[15:37:44] <gee_totes> like i want to have my url /record/123 serve up record with id of 123
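One common pattern for URL-friendly ids (a sketch; "counters" and "records" are hypothetical names): keep an auto-incrementing sequence in its own collection and claim the next value with findAndModify:

    function getNextSequence(name) {
        var ret = db.counters.findAndModify({
            query: { _id: name },
            update: { $inc: { seq: 1 } },
            new: true,
            upsert: true
        });
        return ret.seq;
    }
    db.records.insert({ _id: getNextSequence("recordid"), title: "example" })
    // _id values 1, 2, 3, ... map directly onto /record/123-style URLs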
[15:55:02] <robothands> how do I completely remove mongodb? I've removed via yum, deleted the database directory and config files, yet when I reinstall, this node is still in the replica set it was in before I removed it :(
[15:57:35] <rud> hi, my mongod 2.4.3 (running on FreeBSD 9.1) refuses connections with the following error: can't create new thread, closing connection. googling results seem to imply a problem with max open files or max user processes, but I already have these set pretty high on this production server, plus the system logs don't report any kernel/system limit being reached.. Also, I only have a few internal processes that connect to mongodb, nothing exceeding 5-7 clients at any given time
[15:57:46] <rud> so, in your opinion, what else could be the cause of such issue ...?
[16:04:08] <robothands> how do I completely remove mongodb? I've removed via yum, deleted the database directory and config files, yet when I reinstall, this node is still in the replica set it was in before I removed it...such a simple thing seems ridiculously difficult
[16:15:02] <ragusource> hey guys, when I add roles to a user, I can not log in with that user, please help!
[16:49:34] <rhalff> hi, is dot notation for keys undesired?
[16:49:47] <rhalff> I read this post which basically states it is not: http://stackoverflow.com/questions/12397118/mongodb-dot-in-key-name
[16:49:59] <rhalff> mongodb allows it but many drivers don't
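A short illustration of why dots in key names are trouble: in query documents a dot already means "descend into a subdocument", so a literal key "a.b" would be ambiguous, and 2.4-era mongod/shells reject such keys on insert:

    db.items.find({ "a.b": 1 })    // matches { a: { b: 1 } }, not a key named "a.b"
    db.items.insert({ "a.b": 1 })  // error along the lines of: can't have . in field names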
[16:59:21] <djBuss> Hi, I'm trying to start a database with mongo but I want to require authentication. I started reading the docs and it says that I have to use the 'localhost exception'. Can I create the user through the command line instead?
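A 2.4-era sketch of doing exactly that (user name and password hypothetical): start mongod with --auth, connect with the plain mongo shell from the same machine (that is the localhost exception), and create the first admin user:

    use admin
    db.addUser({ user: "siteAdmin", pwd: "secret", roles: [ "userAdminAnyDatabase" ] })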
[17:05:41] <bcows> what is the easiest way to compare multiple values in a sub collection of a document, for instance I have doc->children and I have a search array of type children .... I want to return all documents that have children matching the children in my search array
[17:06:28] <bcows> (children has fields on it like: name, date, etc.)
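One way to express that query (a sketch; collection and values hypothetical): $all combined with $elemMatch requires every entry of the search array to be matched by some element of children:

    db.docs.find({
        children: { $all: [
            { $elemMatch: { name: "alice" } },
            { $elemMatch: { name: "bob" } }
        ] }
    })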
[17:43:54] <LoneSoldier728> anyone understand why this does not work
[17:58:53] <ashley_w_> i have two 12 core 60GB ram servers with tons of raid5 storage replacing a 4 core 32GB ram server running mongodb. the old server just has the one mongod instance running (2.2.2), and recently there have been some performance issues. wondering if doing sharding (no replica sets) is the best approach with the given hardware.
[18:14:43] <ashley_w_> it's a private database that i'd say has plenty of writes. it's not just store and query. some data might be stored and rarely read more than once.
[18:34:46] <kali> Nodex: have you tried Veep, btw ?
[18:36:35] <ashley_w_> another question: following http://docs.mongodb.org/manual/tutorial/deploy-shard-cluster/, it says nothing about having any backend mongod servers running, so sh.addShard(...) fails. i assume i am supposed to already have these running with --shardsvr.
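That assumption matches the usual order of operations; a sketch with hypothetical hosts and paths:

    // Each shard's mongod must already be running, e.g. started with:
    //   mongod --shardsvr --dbpath /data/shard0 --port 27018
    // then, from a mongos:
    sh.addShard("shard-host-1:27018")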
[18:54:17] <Ahlee> i have a small collection, but heavy on updates. It's 100,000 records, but i'm attempting to push through ~20,000 updates per second to those 100k records, mongo initially has no issues, but will periodically freeze (guessing i'm hitting a buffer limit?). Right now running one instance (no shard, no replica set, just the one instance) - what are the first steps to track down the issue?
[18:54:48] <Ahlee> My db path is in /dev/shm (ruling out disk commit), memory set is less than a gig, system has 8 gigs of ram
[19:03:44] <Ahlee> so effectively adding an index on a/the field used for the updates would be a wise next move, is what i'm getting from this (man, that sounds painfully obvious)
[19:07:51] <ashley_w_> i've seen adding an index change an update from taking several hours to just a few minutes
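The shell version of that advice (collection and field names hypothetical): without an index on the field the update selects on, every one of those ~20k updates/sec is a full collection scan:

    db.records.ensureIndex({ accountId: 1 })   // 2.4-era name for createIndex
    db.records.update({ accountId: 42 }, { $set: { status: "seen" } })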
[20:15:53] <Harageth_> So I was reading some of the documentation today and it was saying that when updating a document it locks the entire local instance of mongod.... Did I really read that correctly? That seems kind of overkill to lock the entire local database to update one document.
[21:23:35] <LoneSoldier728> anyone know how to get $inc to work correctly
[22:05:09] <Ontological> I've got collection.name and collection.something.name and when I query the collection by name, I believe mongodb is returning both sets. Can I limit it to NOT return subdocuments or am I just seeing things?
[22:07:08] <bcows> is there an official debian/ubuntu package for the c++ driver ?
[22:18:12] <LoneSoldier728> how do I do $inc in mongoose?
[22:45:22] <bjori> LoneSoldier728: use findByIdAndUpdate instead
[22:45:36] <bjori> LoneSoldier728: you are searching by an id, but that method expects a search criteria :)
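A minimal Mongoose sketch of bjori's suggestion (model and field names hypothetical):

    var mongoose = require('mongoose');
    var Post = mongoose.model('Post', new mongoose.Schema({ views: Number }));

    // findByIdAndUpdate takes the id itself; findOneAndUpdate would need
    // a criteria object such as { _id: id } instead.
    Post.findByIdAndUpdate(id, { $inc: { views: 1 } }, function (err, post) {
        if (err) return console.error(err);
        console.log('updated', post);
    });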
[22:47:45] <bjori> bcows: you need to build it yourself from the mongodb source :/
[22:54:20] <caitp> you know what would be kind of cool?
[22:57:50] <caitp> it would be cool if I could just say (in any particular RDBMS or otherwise), "hey, this table is going to need to support pagination, please make some stored procedures automatically to support pagination with a very simple API"
[22:58:42] <caitp> or any other common pattern, really
[23:52:18] <dougb> I'm trying to test something where I have a document that would have sub objects in it, and once I return that object initially, is there a way to search within that sub object?
[23:52:38] <dougb> sorry, return the document initially...after I have found it with a query
[23:53:18] <retran> start over and rephrase question so i'm not confused
[23:56:31] <dougb> sorry, I have this 'User' document, and that document can have the status for multiple placements saved in it, as detailed here: http://pastebin.com/BDeyehv7
[23:56:57] <dougb> Once I return a specific user Document, is it possible for me to query the 'UserPlacements' field for a specific object?
[23:57:15] <retran> when you say "return"... what do you mean
[23:57:29] <retran> i'm guessing you're talking about the client application?
[23:57:39] <retran> from there it's out of the hands of mongodb
[23:57:50] <dougb> when I query the collection based on the _id and it finds the specific document and returns it
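If the goal is for the server to return only the matching subdocument rather than re-querying the returned object, an $elemMatch projection is one option (a sketch; "placementId" is a hypothetical field name, since the pastebin isn't reproduced here):

    db.users.find(
        { _id: ObjectId("...") },
        { UserPlacements: { $elemMatch: { placementId: 123 } } }
    )
    // returns only the first UserPlacements element matching the condition;
    // once the document is back in the application, any further "querying"
    // of it is plain in-memory filtering, as retran says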