[00:51:50] <charity> the command is "c.ensureIndex({"channels": 1, "_created_at": 1}, { background: false})"
[00:52:09] <charity> is there a trick to this? i thought creating indexes on a secondary was the recommended practice.
[01:07:05] <joshua> charity: When we had to do emergency index maint one time 10gen had us change the mongod settings to remove it from the replica set and then add the index
[01:21:27] <charity> thanks joshua, i really thought there was a way to add it on a secondary
[01:34:53] <joshua> charity: Basically we changed the port so the other servers wouldn't connect, disabled the replica set option so it starts as a single mongod, and then modified the data
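For reference, the standalone-maintenance procedure joshua describes matches the documented "build indexes on replica set members" approach for this era of MongoDB. A hedged sketch, with the out-of-shell steps as comments (database/collection names and paths below are placeholders, not from the log):

```javascript
// Index spec from charity's command; keys and options unchanged.
const keys = { channels: 1, _created_at: 1 };
const options = { background: false };

// Standalone-maintenance procedure (comments only; commands run outside this file):
// 1. Cleanly shut the secondary down.
// 2. Restart it WITHOUT --replSet and on a different port, e.g.:
//      mongod --dbpath /data/db --port 37017
// 3. Connect to that port and build the index in the foreground:
//      db.getSiblingDB("mydb").mycoll.ensureIndex(keys, options)
//    ("mydb" / "mycoll" stand in for the real namespace.)
// 4. Shut down again and restart with the original --replSet and port;
//    the member rejoins the set and catches up from the oplog.
```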
[02:57:32] <EricL> If I create a hashed index (as in {foo: "hashed"}), do I get the same benefit as if I create an index of {foo: 1}?
[03:32:33] <Senor> what filesystem does mongodb use?
[03:41:24] <groundup> The PHP MongoCursor should implement Traversable
[04:06:52] <mmlac-bv> Can anyone reproduce this? mongo shell does NOT use indexes when used like this: db.foo.find({"$query": {…}, "$orderby": {…}}) but does use the indexes when used like this: db.foo.find({…}).sort({…})?!
[04:22:19] <mmlac-bv> can someone tell me why this is indexOnly: false? http://pastebin.ca/2423815
[04:27:17] <mmlac-bv> if I limit to 10 nscanned and nscannedObjects are 10
[04:37:35] <Senor> what filesystem does mongodb use?
[05:54:07] <RoboTeddy> is there a difference between $set: {"key1": {"key2": value}} and $set: {"key1.key2": value}?
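RoboTeddy's question goes unanswered in the log, but the difference matters: `$set: {key1: {key2: v}}` replaces the whole `key1` subdocument, dropping any sibling fields, while the dot-notation form `$set: {"key1.key2": v}` touches only `key2`. A pure-JavaScript sketch of the two behaviors (an illustration of the semantics, not the server's implementation):

```javascript
// Start from the same document in both cases.
const doc = { key1: { key2: "old", other: "keep me" } };

// $set: { key1: { key2: "new" } } replaces the entire key1 value.
const wholeDoc = { ...doc, key1: { key2: "new" } };

// $set: { "key1.key2": "new" } (dot notation) updates only key2.
const dotted = { ...doc, key1: { ...doc.key1, key2: "new" } };

console.log(wholeDoc.key1.other); // undefined: sibling field "other" is gone
console.log(dotted.key1.other);   // "keep me": sibling field survives
```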
[06:40:00] <jkbbwr> My boss asked me to de-normalize a load of sales data and store it in just two tables, for querying later for sales analysis and reporting. Could NoSQL specifically mongodb provide a better solution?
[06:40:12] <jkbbwr> I'm not 100% sure, and I don't want to suggest the wrong thing
[06:40:51] <cofeineSunshine> when the boss is a sales person you'll definitely suggest the wrong thing
[08:11:21] <jonjon> how to change a value in a set? for example, change 'b' to 'c' in this document: {"to" : [ "a", "b" ], "winner" : "0" }
[08:11:29] <jonjon> feel like I'm missing something obvious
[08:14:29] <jgiorgi> using several collections for queue processing. i'm parallel processing, so often find_one will return the same item and two tasks get started on a single queue entry. is it possible for mongo to cycle through and not return the same document on subsequent find_one calls? (sharing a cursor is not an option)
[08:15:31] <jgiorgi> alternatively is it possible to query "_id" != [list of ids]
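For jgiorgi's second question: yes, `$nin` expresses exactly that, e.g. `{_id: {$nin: listOfIds}}`. (For the original race, `findAndModify` is the usual way to atomically claim a queue entry so two workers can't start on the same one.) A pure-JavaScript sketch of what the `$nin` filter means, with hypothetical ids:

```javascript
// Hypothetical queue entries; in MongoDB these would be documents with _id.
const docs = [{ _id: 1 }, { _id: 2 }, { _id: 3 }, { _id: 4 }];
const claimed = [2, 4]; // ids already handed to other workers

// db.queue.find({ _id: { $nin: claimed } }) keeps docs whose _id is NOT in the list.
const query = { _id: { $nin: claimed } };
const matches = docs.filter(d => !query._id.$nin.includes(d._id));

console.log(matches.map(d => d._id)); // [1, 3]
```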
[08:15:50] <Nodex> jonjon : look at $set and the "$" (positional operator)
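Spelled out for jonjon's document, the update Nodex points at would be `db.coll.update({to: "b"}, {$set: {"to.$": "c"}})`, where `$` resolves to the index of the first array element the query matched (the collection name here is hypothetical). A plain-JavaScript sketch of the effect:

```javascript
const doc = { to: ["a", "b"], winner: "0" };

// Query { to: "b" } matches; "$" resolves to the matched element's index.
const matchedIndex = doc.to.indexOf("b"); // 1
if (matchedIndex !== -1) {
  doc.to[matchedIndex] = "c";             // $set: { "to.$": "c" }
}

console.log(doc.to); // ["a", "c"]
```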
[09:11:17] <schlitzer|work> is anyone here experienced in managing replica sets, shards & config servers with puppet?
[09:12:20] <schlitzer|work> doing so in a single-host environment (everything on one host for development) is quite easy.
[09:12:52] <schlitzer|work> but i have no clue how one could do this with a real cluster using puppet
[09:44:37] <nyov> I'm looking for an update() query/'spec' that will never match, so an upsert is guaranteed to be an insert. Is there something magic (lightweight) I could use?
[10:01:42] <appi_uppi> I have followed the steps to deploy a replica set in our env (Primary, Secondary and an Arbiter) provided in the link http://docs.mongodb.org/manual/tutorial/deploy-replica-set/ ..... but when i run mongod -f /usr/local/mongodb.conf i get "forked process: 10914 child process started successfully, parent exiting"
[10:11:39] <idank> I'm talking about the projection operator, not query
[10:13:10] <kali> idank: you need the aggregation framework
[10:13:32] <nyov> Nodex: sorry, basically I wrap the update query in pymongo, and have already checked the document does not exist (find) and at this stage need to make sure the real upsert doesn't match something that appeared in the meantime, and because of what algernon said I can't run an insert there.
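One commonly used never-matching spec for nyov's situation is `{_id: {$exists: false}}`: every stored document has an `_id`, so the query can never match and the upsert is guaranteed to insert. Because `$exists` is an operator expression rather than a literal value, it is not copied into the new document. A pure-JavaScript sketch of why the predicate never matches (assuming, as MongoDB guarantees, that all persisted docs carry `_id`):

```javascript
// Every persisted MongoDB document has an _id, so this predicate is always false.
// It models the query { _id: { $exists: false } }.
const neverMatches = doc => !("_id" in doc);

const docs = [{ _id: 1, a: 1 }, { _id: 2, b: 2 }];
console.log(docs.some(neverMatches)); // false, so the upsert must insert
```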
[10:13:58] <idank> kali: is it worth it performance wise if the array is <1000 subdocuments?
[10:14:43] <appi_uppi> Nodex: it was run before.. please check this http://pastie.org/8127554
[12:46:13] <oxsav> I want to know if a specific name exists on "authorized_presence" array
[12:54:47] <Infin1ty> I am using 2.2.5 and i took down one of the replica set members in order to resync it. i deleted all the data files and started the mongod process again; a few minutes after, the machine is not responsive to anything, i can't ssh, tried to connect using ipmi (it won't pass the login). i do see the replication working using mongostat from another machine, it's resyncing, but it looks like something
[12:54:47] <Infin1ty> heavy is blocking the whole machine, anyone else experienced it?
[13:10:11] <nyov> Infin1ty: no, but sure sounds like an i/o bottleneck, disk too slow or dying?
[14:09:58] <EricL> If I create a hashed index (as in {foo: "hashed"}), do I get the same benefit as if I create an index of {foo: 1}?
[14:14:41] <Nodex> if you're going to lookup on hashes then I guess it will
[14:16:04] <Nodex> I would imagine they're used more for hashing an embedded document to save you having to lookup on every field in the embedded doc
[14:19:46] <EricL> Nodex: I meant does it create a hash value and then a range index similar to a :1 or is it a hashed index?
[14:19:56] <EricL> Nodex: Or even a range index of the hashes?
[14:20:09] <EricL> I just want to know what it is I'm working with and if I can reuse it.
[14:54:36] <Nodex> it's a hashed index according to the docs
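To EricL's underlying question: a hashed index stores the hash of each value, so it serves equality lookups on `foo` much like `{foo: 1}` does, but it cannot serve range queries or sorts, because hashing destroys ordering. A toy pure-JavaScript illustration of that trade-off (the hash function here is hypothetical, not MongoDB's actual one):

```javascript
// Toy "hashed index": maps hash(value) -> list of document ids.
const hash = v =>
  String(v).split("").reduce((h, c) => (h * 31 + c.charCodeAt(0)) | 0, 0);

const index = new Map();
[[1, "apple"], [2, "banana"], [3, "cherry"]].forEach(([id, foo]) => {
  const h = hash(foo);
  if (!index.has(h)) index.set(h, []);
  index.get(h).push(id);
});

// Equality works: hash the probe value and look it up.
console.log(index.get(hash("banana")).includes(2)); // true

// Ranges don't: hash values have no meaningful order, so a query like
// { foo: { $gt: "b" } } cannot walk this index and must scan instead.
```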
[16:07:18] <Infin1ty> Running sh.stopBalancer(), it somehow waits on a very old mongos process that has not been in use for a year
[16:11:07] <veetow> would i ever use mongos and a config server for a connection to a db which is not sharded?
[16:11:29] <veetow> 1. is that even possible? 2. any caveats?
[16:12:28] <trajik210> I'm just now in production with a mongo cluster, so still a novice. But isn't mongos /only/ for sharded setups? So if you aren't sharding you wouldn't have a mongos daemon.
[16:13:34] <veetow> well that's really what i'm asking
[16:14:12] <trajik210> I could certainly be wrong, but based on your question I don't think you'd use mongos.
[16:14:13] <veetow> and the reason is that, if say a replica set member needed to be swapped out for a different machine
[16:14:28] <veetow> it would be nice to not have to change my connection string
[16:14:50] <trajik210> Are you using IPs in your connection string instead of DNS?
[16:15:01] <veetow> eg, if i had a mongos running on the local application server pointing at config servers which know about the replica set member changes
[16:16:33] <trajik210> We have a virtualized template for a MongoDB Linux instance. We'd deploy a new machine using the template (no data on the machine).
[16:17:22] <trajik210> We'd give the new instance the same IP as the old server3, add the new machine to the replica set config (telling it to synch from a hidden mongo instance) and then cycle the repl config.
[16:18:05] <trajik210> Once the new instance is done synching it would start to receive queries (depending on the connection string used and the read concern used).
[16:20:08] <trajik210> If I needed to remove a mongod instance and replace it with a new one, I can do it transparently, on a live production environment, without restarting the app (which is Node.js). But, if I wanted to add a new instance, that would require restarts of the Node.js app in order to add a new IP to the connection string.
[16:21:01] <trajik210> This last example would also be transparent as we run Node.js in a load balanced environ. So we'd do a rolling restart on the Node.js cluster.
[17:38:18] <astropriate> How do I get the status of an insert with the C++ driver?
[17:38:52] <astropriate> I am inserting a std::vector<mongo::BSONObj> and it just silently fails
[18:33:23] <clarkk> guys, can anyone suggest a library for mongodb that exposes an API to the client to allow mongodb queries to be sent and collections returned, please?
[18:34:07] <clarkk> the client would be a browser + javascript, probably using backbone
[18:35:07] <dash_> pymongo, what am I doing wrong? http://pastebin.com/GJprZ0BU instead of an object with data, {u'ok': 1.0, u'value': None} is returned, thanks!!
[18:52:50] <dash_> this is an example of the syntax I was getting wrong https://github.com/mongodb/mongo-python-driver/blob/master/test/test_collection.py#L1267 now everything works properly, thanks
[19:04:45] <Fallout2man> can someone please recommend me a mongoDB admin tool that works on OSX without me having to hand edit unwieldy config files to point it to my Mongo instance and that doesn't require I install and configure several other pieces of software in a similar way?
[19:05:23] <Fallout2man> Mongo Hub seemed good...but their site is down and I can't get to any binary releases. Just the source repository.
[19:13:22] <Fallout2man> ...so I guess that's a no?
[20:03:34] <nyov> I've been trying to convert a mongodb into a standalone replica set member, to get the oplog for external fulltext indexing. It seems to work ok, until I restart the mongodb - then it wants to sync with some peers.
[20:04:37] <nyov> None of the sites talking about this setup mention how to disable that, can I force mongo to be primary without some peer?
[20:05:23] <ismell> I want to mapreduce on an ISODate field but I want my granularity to be a date. How can i strip the time off?
[20:06:25] <trajik210> nyov - are you asking if you can have a single mongo instance be configured in a repl set and be primary (with no other instances in the repl set?)
[20:08:16] <nyov> I want to setup something like this http://derickrethans.nl/mongodb-and-solr.html
[20:08:19] <trajik210> I don't think you can do that.
[20:08:50] <trajik210> Any time I've had a repl set drop to 1 member, it automatically becomes a secondary until at least one other instance comes online. I think you'd have to reconfig the repl set to do what you are saying.
[20:09:15] <nyov> it seemed to work for some dudes here, maybe in an earlier mongodb version? https://loosexaml.wordpress.com/2012/09/03/how-to-get-a-mongodb-oplog-without-a-full-replica-set/
[20:09:47] <trajik210> Could be. I'm using 2.4.3 and 2.4.4. You can't have a primary instance in a repl set with one member.
[20:10:09] <trajik210> You'd likely have to drop the instance you are interested in from the repl set using mongo shell.
[20:11:12] <trajik210> If you only need the oplog for synching purposes could you not just ensure your instance is caught up then drop it from the repl set? It'd effectively become its own instance at that point.
[20:12:28] <nyov> trajik210: nah, I don't need it for syncing. I need the tailable oplog to get updates/inserts/deletes into elasticsearch
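For the record, a single-member replica set that elects itself primary is possible, and it is what the loosexaml post sets up: start mongod with `--replSet` and initiate a one-member config; with only one voting member it holds a majority by itself and stays primary across restarts. A sketch of the shell config (host and port below are placeholders):

```javascript
// Hypothetical one-member replica set config.
const config = {
  _id: "rs0",
  members: [{ _id: 0, host: "localhost:27017" }]
};
// In the mongo shell, against a mongod started with --replSet rs0:
//   rs.initiate(config)
// The lone member wins the election, and local.oplog.rs becomes tailable.
```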
[20:21:29] <ismell> I'm doing a mapreduce like so emit(Date.UTC(this.created_at.getUTCFullYear(), this.created_at.getUTCMonth(), this.created_at.getUTCDay()), 1);
[20:21:36] <ismell> and I want to convert back to an ISODate
[20:23:33] <lucasliendo> Hi everyone ! Has anyone built a mongo cluster ? I have successfully built a mongo cluster, and now have some questions that I can't seem to find answers to in the docs
[20:23:48] <trajik210> Lots probably have lucasliendo.
[20:24:44] <lucasliendo> to be concrete, these are the things I'm still doubtful about...
[20:25:41] <ismell> nyov: that doesn't say how to cast a timestamp to an ISODate
[20:25:53] <lucasliendo> my cluster is completely empty, so now I would like to put in there a huge collection (this huge collection is a dump from another mongo, which is running stand-alone)
[20:26:36] <nyov> ismell: isodate is just a helper function. you can use the same methods from a normal JS Date() object
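Putting ismell's two pieces together: `Date.UTC(...)` returns a millisecond timestamp, and `new Date(ms)` turns it back into a date object (`ISODate` is just the shell's wrapper around `Date`). One caveat about the emit pasted earlier in the channel: `getUTCDay()` returns the weekday (0-6); for the day of the month, `getUTCDate()` is almost certainly what was intended. A plain-JavaScript sketch with a made-up timestamp:

```javascript
const ts = new Date("2013-07-10T15:42:17Z"); // stands in for this.created_at

// Truncate to midnight UTC. Note getUTCDate(), not getUTCDay() (weekday!).
const dayMs = Date.UTC(ts.getUTCFullYear(), ts.getUTCMonth(), ts.getUTCDate());

// Converting the numeric map-reduce key back: new Date(ms) is the ISODate equivalent.
const day = new Date(dayMs);
console.log(day.toISOString()); // "2013-07-10T00:00:00.000Z"
```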
[20:27:28] <lucasliendo> do I have to worry at all about anything but the shard key ? I mean, can I just fill my cluster with mongoimport ? What about the indexes from the original collection (are they lost) ?
[20:27:58] <trajik210> lucasliendo: are you talking about a repl set?
[20:28:15] <lucasliendo> well the cluster is built upon 3 replica sets
[20:29:00] <trajik210> lucasliendo: Can you explain more?
[20:29:43] <lucasliendo> sure, I have 6 machines, the cluster is built with 3 replica sets (rs1, rs2 & rs3), and each replica set is using 2 machines
[20:30:31] <lucasliendo> also have 3 mongo config servers among these machines, and only one of them is running mongos
[20:33:13] <lucasliendo> the collection that I'll be importing has more or less 33M documents
[20:34:21] <trajik210> I don't have a similar setup as you as I'm just repl sets w/o sharding. We use mongoimport to bulk import a lot of documents regularly.
[20:34:31] <lucasliendo> so, basically I would like to know the best way to do this... that's why I asked in the first place about the shard key... from the mongo docs it seems the only relevant thing... am I missing something else ?
[20:35:16] <lucasliendo> ok I see, but how did you manage when you made the first import ?
[20:35:22] <trajik210> lucasliendo: You should be okay if all you are doing is importing an existing data set into an empty system.
[20:36:11] <trajik210> lucasliendo: You mean if you run multiple mongoimport processes and some of the docs are in both import runs?
[20:37:37] <lucasliendo> no no, just one mongoimport. I mean, is this enough to import the whole collection: 1 -> Create an empty db. 2 -> Select the shard key. 3 -> run mongoimport
[20:38:14] <lucasliendo> is it just that, or is there something else ?
[20:41:55] <lucasliendo> ok, well... that was my doubt after all... :D. Just one more thing: while you were importing, how did you check that everything was ok ? I mean, that the import was running ok against the whole cluster...
[20:42:06] <trajik210> lucasliendo: You can use mongotop or mongostat to monitor the instance (from another Terminal) as you are importing.
[20:42:45] <lucasliendo> ok, well ! Thanks a lot for your time trajik210 !
[20:43:45] <trajik210> lucasliendo: You're welcome. Have a good one.
[21:07:53] <BlakeRG> Hello, I've got a collection that I want to create a unique index on; my document looks like this: { "_id" : ObjectId("51ddc984375e7d25b6000002"), "id" : "98374374948324", "first_name" : "Joe", "last_name" : "Blow" }
[21:07:59] <BlakeRG> db.pooballs.ensureIndex( { id: 1 }, { dropDups: true } ); does not seem to do the trick
[21:22:54] <crudson1> BlakeRG: The docs there imply that dropDups implies unique. Looks like it needs updating. Seems to be right here: http://docs.mongodb.org/manual/core/indexes/#drop-duplicates
[21:23:34] <BlakeRG> crudson1: yes i just found that a second ago, the tutorial makes you think it's implied
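The working command, then, states `unique` explicitly; `dropDups` only tells the index build what to do when the unique constraint hits duplicates: it keeps the first document found per key and deletes the rest, so use it with care. A sketch, with the dedup behavior simulated in plain JavaScript on made-up documents:

```javascript
// In the mongo shell:
//   db.pooballs.ensureIndex({ id: 1 }, { unique: true, dropDups: true })

// What dropDups does to duplicate keys: keep the first doc per id, drop the rest.
const docs = [
  { _id: "a", id: "98374374948324", first_name: "Joe" },
  { _id: "b", id: "98374374948324", first_name: "Joseph" }, // duplicate key
  { _id: "c", id: "11111111111111", first_name: "Jane" }
];

const seen = new Set();
const kept = docs.filter(d => !seen.has(d.id) && seen.add(d.id));

console.log(kept.map(d => d._id)); // ["a", "c"]
```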
[21:24:05] <BlakeRG> is there a way to submit patches to the docs?
[21:24:10] <crudson1> BlakeRG: I'd submit a documentation jira ticket