[00:49:30] <mraa0> Hey, I hope you all don't get this all the time, but what is the preferred mongo gem for non-rails environments?
[00:50:22] <mraa0> I don't need a full-fledged datamapper, just something simple to get stuff in and out of mongo. Debating mostly between moped and mongo-ruby-driver.
[00:50:38] <mraa0> not sure which is better at the moment.
[01:21:22] <feathersanddown> any visual client for mongodb?
[03:29:36] <meonkeys> I'm looking at sys calls by mongod on Ubuntu 14.04.1 LTS. mongod (solo, no replset nor shards) stays at about 1% CPU when idle. No clients are connected. strace shows constant calls like this: "select(11, [8 10], NULL, NULL, {0, 10000}) = 0 (Timeout)"
[03:30:06] <meonkeys> anyone know what mongod might be doing?
[03:31:16] <meonkeys> I've strace'd mongod 2.4.9 and 2.6.4, both call select(2) about 30 times per second.
[04:09:07] <meonkeys> make that 50 times per second.
[04:35:27] <meonkeys> (posted at https://groups.google.com/d/msg/mongodb-user/ZCSe-KOJyM0/FaEvzhPR9o8J )
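[Editor's note: the last argument to that select(2) call is a struct timeval in seconds and microseconds, so the observed call rate can be sanity-checked with some quick arithmetic. A sketch, not mongod's actual code:]

```javascript
// select(11, [8 10], NULL, NULL, {0, 10000}) -- the timeout is
// 0 seconds + 10000 microseconds per call.
const timeoutMicros = 10000;
const timeoutMs = timeoutMicros / 1000;      // 10 ms per poll
const maxPollsPerSecond = 1000 / timeoutMs;  // at most ~100 wakeups/sec per loop

console.log(timeoutMs, maxPollsPerSecond);
```

The observed 30-50 calls per second is consistent with one or two internal polling loops waking on that 10 ms timeout, which would explain the small but nonzero idle CPU usage.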
[08:32:44] <krion> the thing is, on the replica set I compared both directories and one of the nodes had more "collection.[x]" files than the other
[09:06:28] <krion> furthermore, i've an arbiter running on the secondary node... of each replicaset
[09:12:31] <krion> should I also stop the mongos and mongo arbiter processes for that procedure -> http://docs.mongodb.org/manual/tutorial/resync-replica-set-member/#automatically-sync-a-member
[10:46:35] <fps> let's say i have a collection models where each document contains an array refPics which has objects containing a field original_image_name
[11:48:32] <izolate> is it recommended to use the default directory /data/db on osx? or should I use one in my home dir?
[12:20:41] <cheeser> for dev work? i just use the homebrew install, tbh.
[12:25:27] <jenny__> I would like to apply the aggregate function to an array, but aggregation only applies to a collection; I need to apply it to an array that is the result of a query. How can I do this?
[12:49:52] <jenny__> mmm I do not know ... my problem is that I am using php and I do not know how to do it ... can you give me a link with examples like that in PHP?
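[Editor's note: the usual answer here is to keep everything in one aggregate() call and $unwind the array field inside the pipeline, rather than aggregating a query result in PHP afterward. A pure-JavaScript sketch of what $unwind does to documents; the field names are made up for illustration:]

```javascript
// Each doc has an array field; $unwind emits one doc per array element.
function unwind(docs, field) {
  const out = [];
  for (const doc of docs) {
    for (const item of doc[field] || []) {
      out.push({ ...doc, [field]: item });
    }
  }
  return out;
}

const docs = [
  { _id: 1, scores: [10, 20] },
  { _id: 2, scores: [30] },
];
console.log(unwind(docs, 'scores')); // one output doc per array element
```

After a $unwind stage, ordinary $group/$match stages apply to the individual array elements, which is what the question is after.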
[13:06:17] <krion> is it normal behaviour for the STARTUP2 state to last very long when I'm resyncing a node in a replica set?
[13:06:39] <krion> i did that http://docs.mongodb.org/manual/tutorial/resync-replica-set-member/#automatically-sync-a-member
[13:10:15] <feathersanddown> in morphia I get this: java.lang.NoClassDefFoundError: com/mongodb/DBObject. I'm using maven to build my project. My code works fine in my dev environment (netbeans), but when building from only the svn files and packaging with maven using http://maven.apache.org/plugins/maven-assembly-plugin/usage.html it won't connect to the DB. Why?
[13:41:52] <izolate> is there any way to test the connection string on the command line?
[13:46:05] <Derick> izolate: afaik, the shell doesn't support connection strings like the drivers do
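[Editor's note: the shell of that era takes a host:port/db positional argument plus -u/-p flags rather than a URI, so one workaround is to pull the pieces out of the connection string yourself and hand them to `mongo`. A hypothetical helper; the regex only covers the simple mongodb://user:pass@host:port/db form:]

```javascript
// Extract host, port, db, and credentials from a simple mongodb:// URI.
function parseMongoUri(uri) {
  const m = uri.match(
    /^mongodb:\/\/(?:([^:@]+):([^@]+)@)?([^:\/]+)(?::(\d+))?\/(.+)$/
  );
  if (!m) return null;
  return {
    user: m[1], pass: m[2],
    host: m[3], port: m[4] ? Number(m[4]) : 27017,  // 27017 is the default port
    db: m[5],
  };
}

console.log(parseMongoUri('mongodb://alice:s3cret@db1.example.com:27017/test'));
```

With those pieces in hand the shell invocation would be along the lines of `mongo db1.example.com:27017/test -u alice -p s3cret`.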
[14:33:38] <jb__> I'm trying to write an aggregate in which each item has a value based on its position in the result. Do you think it's possible to get the position in the projection?
[15:17:45] <krion> what's the best way to elect a secondary to primary in a replicaset ?
[15:17:56] <krion> can i just shutdown the primary ?
[15:18:23] <Derick> you can step it down, to trigger an election
[15:27:57] <Cheekio> I'm having trouble with namespace/collection counts. I increased nssize (and I can confirm *.ns files are larger), but when I poll the total number of namespaces I only get small increases before running into "hashtable namespace index max chain reached: 1335" errors in the log
[15:29:10] <Cheekio> I set nssize from 16 megs to 2 gigs, and instead of getting a 10,000x increase in the number of namespaces I got about 1.4x
[15:29:19] <Cheekio> Anyone know how to debug this?
[15:31:04] <skot> did you create the database after you changed the option? Once the files are created the option doesn't retroactively change them.
[15:32:07] <Cheekio> I dropped / reloaded the dbs from backups
[15:35:18] <Cheekio> Again, the .ns files appear to be 2 gigs, it's just that I'm only getting 35k namespaces before running into errors
[15:38:32] <skot> How are you checking the count of namespaces? Also, do a lot of your namespaces have similar names?
[15:42:12] <Cheekio> The namespaces are all full uuids, so they look similar to a human
[15:42:20] <Cheekio> I just check via db.system.namespaces.count()
[15:42:33] <Cheekio> I hear that indexes are supposed to take up a big chunk of the nsfile, but I wouldn't know how to check that as well.
[15:44:04] <skot> system.namespaces includes the index entries, so count covers that. Just do a find on system.namespaces and you will see the indexes, which append $idxName at the end.
[15:44:25] <skot> a namespace is either a collection or an index
[15:45:15] <Cheekio> In that case I'm getting an accurate count, so I think there's a lurking element here I'm not familiar with
[15:46:29] <Cheekio> It's a test box, if I flatten, reset, and restore, is there anything else I should do to be sure I'm not carrying over bad ns information?
[15:56:31] <skot> no, deleting the database (and corresponding ns file) is all that is needed.
[15:56:59] <skot> btw, the error message you got is not about it being full, just that you have reached the max chaining depth.
[15:57:34] <skot> that number, 1335, is 5% of the size; so increasing the ns file will result in a larger value, or depth, for chaining
[15:58:03] <skot> so it could be that your collection names produce hashes that chain too deeply
[16:00:18] <skot> If you have a script which can reproduce the collection names then I can take a look; please post to pastebin/etc.
[16:06:54] <Cheekio> I have the output of db.system.namespaces.find()
[16:09:04] <d-snp> hey guys, I get the error that my aggregate invocation creates a document of over 16mb, this is my query: http://pastie.org/private/5lhfpupss3liuasdbij8g
[16:09:36] <d-snp> basically what it does is create buckets of 10ms, counting how many requests go in each bucket
[16:10:19] <d-snp> I think the result should be less than 16mb, so if there's a point that it could reach 16mb it would be at the first project
[16:10:50] <d-snp> the limit is typically around 100000 so if in the first project it would try to project the entire thing into a single document that would fail
[16:13:00] <d-snp> I imagine a worst case would be that there are 100,000 10ms buckets, each with 1 request in it; that would mean each entry is 160 bytes, which still sounds a bit large to me
[16:19:50] <d-snp> reducing limit to 10000 fixed the problem
[16:19:57] <d-snp> I still think it shouldn't create such a large document though
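[Editor's note: the worst-case arithmetic actually works out: 100,000 result entries at ~160 bytes each is 16,000,000 bytes, right at the 16 MB single-document limit, and servers of that era returned the whole aggregate result as one document unless a cursor was requested. A pure-JS sketch of the 10 ms bucketing the pipeline's $subtract/$mod projection computes; field names assumed:]

```javascript
// Bucket timestamps (ms) into 10 ms buckets and count requests per bucket.
function bucketCounts(timestamps, bucketMs = 10) {
  const counts = {};
  for (const t of timestamps) {
    const bucket = t - (t % bucketMs);  // same idea as $subtract/$mod
    counts[bucket] = (counts[bucket] || 0) + 1;
  }
  return counts;
}

console.log(bucketCounts([3, 7, 12, 19, 25])); // buckets 0, 10, 20

// Worst case from the discussion: 100,000 entries * 160 bytes each.
const worstCaseBytes = 100000 * 160;  // 16,000,000 -- the 16 MB limit
console.log(worstCaseBytes);
```

Reducing the limit (as d-snp did) shrinks the single result document below the cap; a cursor-based aggregate avoids the cap entirely.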
[16:19:58] <Cheekio> @skot, I'm having trouble understanding what you mean by max chaining depth.
[16:27:26] <flu> Is it possible to use Projection and the $meta operator via the perl MongoDB driver? http://docs.mongodb.org/manual/reference/operator/projection/meta/
[16:28:46] <flu> The driver docs don't mention it and my attempts to invoke it via the MongoDB::Collection find/query methods have all failed.
[16:33:23] <krion> what's the best way to failover on a secondary node in a replicaset ?
[16:59:25] <krion> i'll see tomorrow how to trigger an election (i already read some about step down the primary)
[17:03:48] <Derick> krion: no, with an arbiter is fine
[17:04:03] <Derick> krion: stepping down makes the election go faster than just shutting down a node
[17:05:56] <jonathan_> I have master/slave replication set up. I'm looking for the best way to do an automated periodic check on the slave side to ensure that replication is actively happening. The info in db.printSlaveReplicationInfo() has the time of the last replicated op, but is there no way to determine if that is the most recent op from the master?
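[Editor's note: the usual check compares the last optime applied on the slave against the master's last optime, rather than looking at the slave in isolation; both values are available from the master's oplog and printSlaveReplicationInfo(). A sketch of the lag computation, assuming both optimes have already been fetched as Dates:]

```javascript
// Replication lag in seconds, given the last op time on each side.
function replicationLagSeconds(masterOptime, slaveOptime) {
  return (masterOptime.getTime() - slaveOptime.getTime()) / 1000;
}

const masterOp = new Date('2014-09-16T12:00:10Z');
const slaveOp = new Date('2014-09-16T12:00:00Z');
console.log(replicationLagSeconds(masterOp, slaveOp)); // 10
```

A monitoring script can alert when the lag exceeds a threshold, or when it grows monotonically between checks (a sign replication has stalled).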
[17:32:11] <t04D> created a new group, is there a way to disable the deployment of a new mongod process on a server? I just want to be able to detect my current process (2 agents are already running on this VM)
[18:09:10] <mattblang> Is there any way to set up like a trigger maybe for a one-to-one so that an old association is removed? for example, say an Owner can have a Pet. If that owner gets a new pet, I don't want the old pet to still have the ownerId
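[Editor's note: MongoDB has no triggers, so the common pattern for this is to do both sides of the swap in one code path in the application: unset the owner on the old pet, then set it on the new one (in the shell this would be two update() calls). A pure-JS sketch of the invariant; the ownerId field name is taken from the question:]

```javascript
// Keep the one-to-one invariant: an owner points at at most one pet.
function reassignPet(pets, ownerId, newPetId) {
  for (const pet of pets) {
    if (pet.ownerId === ownerId) delete pet.ownerId;  // "unset" the old pet
  }
  const newPet = pets.find(p => p._id === newPetId);
  if (newPet) newPet.ownerId = ownerId;               // "set" the new pet
  return pets;
}

const pets = [
  { _id: 'rex', ownerId: 'alice' },
  { _id: 'whiskers' },
];
reassignPet(pets, 'alice', 'whiskers');
console.log(pets); // rex loses ownerId, whiskers gains it
```

Without multi-document transactions, there is a window between the two updates where neither pet carries the ownerId, so readers should tolerate that.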
[18:12:20] <benjwadams> how can I return a boolean based on whether or not a regex matches?
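[Editor's note: if the boolean is only needed client-side, this is a one-liner on the fetched value; server-side, a $regex query matches documents rather than returning booleans, so the test usually happens in application code. A sketch:]

```javascript
// True iff the value matches the pattern.
function matchesPattern(value, pattern) {
  return pattern.test(value);
}

console.log(matchesPattern('mongodb-2.6.4', /^\w+-\d+\.\d+/));   // true
console.log(matchesPattern('no version here', /^\w+-\d+\.\d+/)); // false
```

The same boolean can of course be derived from a query: run find() with the $regex condition and check whether anything came back.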
[19:38:30] <doxavore> it would be swell if there was a way to pass some contextual ID (request ID, etc) that mongodb could include in logs for slow queries. that doesn't exist currently, no?
[19:57:45] <LyndsySimon> Is there a way to convert an ISODate to an ObjectId?
[19:58:14] <LyndsySimon> I know i can do ObjectId.getTimestamp() - need to do the opposite.
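[Editor's note: the first 4 bytes of an ObjectId are the Unix timestamp in seconds, so the opposite of getTimestamp() is to hex-encode the seconds and zero-pad the remaining 16 hex digits; the result is the smallest ObjectId for that instant, handy for range queries on _id. A sketch; shells and drivers accept the 24-char hex string in their ObjectId constructors:]

```javascript
// Build the smallest ObjectId hex string for a given date.
function objectIdFromDate(date) {
  const seconds = Math.floor(date.getTime() / 1000);
  // 8 hex digits of timestamp + 16 zero digits for machine/pid/counter.
  return seconds.toString(16).padStart(8, '0') + '0000000000000000';
}

console.log(objectIdFromDate(new Date(1410825600 * 1000))); // 2014-09-16T00:00:00Z
```

A query like `{_id: {$gte: ObjectId(objectIdFromDate(start))}}` then selects documents created on or after `start`.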
[20:07:36] <dgarstang> Is it possible to set up mongo so that localhost access doesn't require auth?
[20:10:34] <cheeser> once auth is configured, i think every connection requires it.
[20:15:02] <sublime> Hello, I'm having a really hard time installing mongodb on a fresh debian7x64 install. I tried following the official doc (http://docs.mongodb.org/manual/tutorial/install-mongodb-on-debian/) with no success. Even step 1 fails; I had to do "wget http://docs.mongodb.org/10gen-gpg-key.asc -O - | apt-key add -" instead. On step 4 I get a dependency error
[20:18:31] <sublime> I am getting these errors: http://paste.debian.net/124892. Can anyone help?
[20:19:20] <ehershey> sublime: what happened when you tried to run step 1 from the documentation?
[20:19:35] <ehershey> also what command did you run that's giving you that error and did you run any other commands differently from the docs?
[20:20:08] <sublime> I did everything according to the doc except adding the gpg key
[20:20:26] <sublime> the command was apt-get update && apt-get install -y mongodb-org
[20:22:16] <sublime> for step 1 i get this error http://paste.debian.net/124893
[20:41:43] <dgarstang> So, we sorta wanna have an admin user AND be able to use it locally without auth
[20:43:03] <dgarstang> From the docs .... "The localhost exception allows you to enable authorization before creating the first user in the system. "... ergo creating an admin user removes the option to connect locally without auth
[21:15:15] <Bish> dgarstang, why do you need the admin acc then :3?
[21:16:55] <dgarstang> Bish: for other admin stuff???
[21:22:28] <Bish> dgarstang, can you answer my question maybe?
[21:23:30] <joannac> Bish: why do you want a collection with only indexes?
[21:23:45] <Bish> well.. i want to export the structure, without the data
[21:24:04] <Bish> like, when i want to deploy the same system on another machine, ( in this case: testmachine )
[21:24:13] <Bish> i could copy everything, then drop data
[21:24:36] <joannac> wouldn't it be faster just to re-ensure the indexes?
[21:24:39] <Bish> but there has to be a better way
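[Editor's note: joannac's suggestion, re-ensuring the indexes on the target, is usually the cheapest route: read each collection's getIndexes() output on the source and replay ensureIndex on the target, skipping the data entirely. A pure-JS sketch of turning getIndexes()-style documents into (key, options) pairs; the document shape is assumed from the output of servers of that era:]

```javascript
// Convert getIndexes()-style docs into arguments for ensureIndex/createIndex.
function toEnsureIndexArgs(indexDocs) {
  return indexDocs
    .filter(ix => ix.name !== '_id_')  // the _id index is created automatically
    .map(ix => {
      const { v, ns, key, name, ...options } = ix;  // drop bookkeeping fields
      return { key, options: { name, ...options } };
    });
}

const indexes = [
  { v: 1, key: { _id: 1 }, name: '_id_', ns: 'test.models' },
  { v: 1, key: { email: 1 }, name: 'email_1', ns: 'test.models', unique: true },
];
console.log(toEnsureIndexArgs(indexes));
```

Each resulting pair maps onto one `db.collection.ensureIndex(key, options)` call on the target, reproducing the structure without copying any data.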
[22:22:06] <Skaag> but I get this: assertion 13 not authorized for query on local.oplog.rs ns:local.oplog.rs query:{ $query: {}, $orderby: { $natural: 1 } }
[22:22:21] <Skaag> it's a replica set with 3 nodes
[22:28:42] <Skaag> I just gave it readWrite@local and readWrite@admin
[22:29:08] <Skaag> I just noticed this in the main mongod.log: Failed to authenticate admin@admin with mechanism MONGODB-CR: AuthenticationFailed UserNotFound auth: couldn't find user admin@admin, admin.system.users
[22:29:15] <Skaag> looks like it can't even find it
[22:32:42] <Skaag> indeed when I run: db.system.users.find() I get nothing
[22:33:55] <joannac> Skaag: okay... so add the user?