[12:31:02] <ErikBjare_> If I want to use a MongoDB database in a client-side app without a full-blown instance, what's the best way? (I'm using Python)
[12:34:06] <ramesh> Hi, I need help with mongodb.. I have an attribute with a timestamp (a future date). I want to update the document when that timestamp is reached, any ideas?
[12:42:17] <ErikBjare_> ramesh: You need a timer to watch for those times; mongodb doesn't support timers itself, so you have to do it on your own.
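One way to roll that yourself is a small poller. A minimal pymongo sketch, assuming a hypothetical tasks collection with a future run_at timestamp and a processed flag (all of those names are made up for illustration):

```python
import time
from datetime import datetime, timezone

from pymongo import MongoClient

tasks = MongoClient()["app"]["tasks"]  # hypothetical db/collection names

while True:
    now = datetime.now(timezone.utc)
    # Claim every document whose future timestamp has now been reached.
    tasks.update_many(
        {"run_at": {"$lte": now}, "processed": False},
        {"$set": {"processed": True}},
    )
    time.sleep(5)  # poll interval; tune to how precise the timing needs to be
```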
[14:14:43] <b0bi> Hi, I am trying to use the $gt operator while querying on a string, but the comparison is done only on the first character of the strings, is that how it's supposed to work?
[15:32:34] <Neo9> I want to implement encrypt & decrypt operations with some transparent encryption layer for mongoDB, any ideas?
[15:33:14] <Neo9> cheeser: on-the-fly encryption for mongoDB.
[15:35:57] <ollivera> After a reboot I can't authenticate on the secondary ... I stopped the secondary, removed all files from the dbpath, and started it again ... all the dbs were copied over by the sync ... but not the user information
[15:36:19] <ollivera> how can I restore the user information on the secondary so that I can authenticate again?
[15:43:47] <cheeser> there is no built-in encryption at rest in mongodb yet
[15:45:50] <Neo9> cheeser: I want to create an encryption & decryption layer on disk, and then place mongoDB on top of it.
[15:46:21] <cheeser> you can use an encrypted file system...
[15:46:21] <Neo9> cheeser: agreed. there are some paid solutions for mongo encryption.
[15:46:54] <Neo9> cheeser: But there are also open-source disk-level / block-level encryption tools on linux.
[15:47:59] <Neo9> cheeser: if we can create a layer on top of the disk and put mongo on top of it [as a transparent encryption & decryption layer]
[15:48:31] <Neo9> cheeser: that would be the better solution.
[15:49:40] <Neo9> cheeser: does an encrypted file system support on-the-fly encryption & decryption?
[15:50:05] <cheeser> yes. that's generally how they work.
[15:50:22] <Neo9> cheeser: i found that we have to do the mount & unmount operations for the encrypted file system manually.
[15:51:42] <Neo9> cheeser: so we'd have to make some changes too, but I don't know which tool is the most feasible option or how to achieve it.
[15:56:56] <ollivera> any tips on copying users from the primary to the secondary?
[16:47:37] <cheeser> (i'm probably leaving in a few minutes to forage for food. if I don't respond, that's probably why.)
[16:50:24] <b0bi> This is the find I issued: {'$and': [{'tcp.port.__data.show': {'$gt': '100000'}}]}, I know that the '$and' is redundant, but it's autogenerated.
[16:51:32] <b0bi> I get results where the 'show' field in the matched documents has values like '200', for example.
[16:53:51] <cheeser> strings are lexicographically compared. '2' > '1' so the rest of the string is irrelevant
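A quick pymongo illustration of that lexicographic behavior, using b0bi's field path (the packets collection name is assumed); storing the value as a number makes $gt compare numerically instead:

```python
from pymongo import MongoClient

packets = MongoClient()["test"]["packets"]  # collection name assumed
packets.insert_one({"tcp": {"port": {"__data": {"show": "200"}}}})

# Matches: the *string* '200' sorts after the *string* '100000',
# because '2' > '1' decides the comparison at the first character.
print(packets.count_documents({"tcp.port.__data.show": {"$gt": "100000"}}))  # 1

# Stored as a number, $gt compares numerically and no longer matches.
packets.insert_one({"tcp": {"port": {"__data": {"show": 200}}}})
print(packets.count_documents({"tcp.port.__data.show": {"$gt": 100000}}))  # 0
```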
[17:01:32] <crised> How would you implement *triggers*?
[17:04:42] <crised> "When you run MongoDB in a Replica Set, all of the MongoDB actions are logged to an operations log (known as the oplog). The oplog is basically just a running list of the modifications made to the data. Replica Sets function by listening to changes on this oplog and then applying the changes locally."
[17:06:13] <crised> Do I need replication in order to have trigger events that a node app can listen to?
[17:07:11] <crised> Do I need a replica set to use tailable cursors?
[17:10:34] <bazineta> No, you just need a capped collection. http://docs.mongodb.org/manual/tutorial/create-tailable-cursor/
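A minimal tailable-cursor sketch in pymongo along the lines of that tutorial; tailable cursors need a capped collection but not a replica set (the events collection and its size here are assumptions):

```python
import time

import pymongo

db = pymongo.MongoClient()["test"]
if "events" not in db.list_collection_names():
    db.create_collection("events", capped=True, size=1024 * 1024)

cursor = db["events"].find(cursor_type=pymongo.CursorType.TAILABLE_AWAIT)
while cursor.alive:
    try:
        doc = cursor.next()
        print("new document:", doc)  # react to each insert, trigger-style
    except StopIteration:
        time.sleep(1)  # nothing new yet; the cursor stays open
```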
[17:11:13] <crised> bazineta: Do you guys usually run mongod on a dedicated server? or do you use a service like mongolab?
[17:12:38] <bazineta> crised We've only ever used our own, either locally hosted or in AWS. No experience with services like mongolab, but perhaps others have more insight.
[17:13:03] <crised> bazineta: the one that you have in aws, is a dedicated AMI?
[17:14:40] <bazineta> crised Yes, we only run mongod on the Mongo servers, so those AMIs do nothing but that.
[17:15:19] <crised> bazineta: something like this? https://aws.amazon.com/marketplace/pp/B00CO7AVMY/ref=sp_mpg_product_title/191-3178984-5244527?ie=UTF8&sr=0-2
[17:15:48] <crised> bazineta: Don't you use a micro instance for this purpose?
[17:16:54] <bazineta> crised We use that image but delete the fixed IOPS volumes and instead use the new burstable SSD ones. Our data volumes are such that a micro instance wouldn't work, but for a small dev instance it'd probably be fine.
[17:17:32] <crised> bazineta: Why didn't you go for hosted MongoDB?
[17:18:33] <bazineta> crised In our case we have a lot of other instances needed, i.e., front ends, etc., so an AMI made the most sense since it gives flexibility and predictable performance. Clustering in AWS is also very simple.
[17:21:29] <crised> bazineta: I might perhaps just run a micro instance and just do yum install mongo, and that's it... no tweaking, no nothing. how does that sound?
[17:22:25] <bazineta> crised For dev that'll be fine. For production, you'd want to look at the recommended tuning and volume layout, but until then sure, that should work.
[17:23:31] <crised> bazineta: mmm MongoLab shared does look nice https://mongolab.com/plans/
[17:25:38] <bazineta> crised Yes, for what you get there, that's in my opinion a fair price, especially on the cluster.
[17:26:06] <crised> bazineta: so you get a cluster, which in mongodb terms means a replica set?
[17:26:56] <bazineta> crised Yes. Typically a 3-node setup, can be more, but that's the most common. Allows for easy upgrades with little to no downtime and very high availability.
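For reference, a driver connects to a replica set like that by listing seed hosts plus the set name, and then tracks elections on its own. A pymongo sketch; the hostnames and the set name rs0 are placeholders:

```python
from pymongo import MongoClient

client = MongoClient(
    "mongodb://node1.example.com,node2.example.com,node3.example.com/?replicaSet=rs0"
)
print(client.primary)  # the driver discovers which member is currently primary
```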
[17:53:41] <gphummer> Hi, I’m working on my very first open source project. I’m currently using mongo with rails with the mongoid ODM. We are allowing individuals to upload a CSV file with an indeterminate number of columns. We’re looking to store the CSVs in our mongo database as a single large document with embedded sub-documents representing the rows of the CSV file. Is it possible to do this without running over the 16 MB document limit? Or should we take a ‘bucketing’ approach?
[18:28:06] <adrian_lc> hi, is there an alternative version of the Ubuntu packages that doesn't start the service automatically on post-install?
[19:44:18] <jiffe> "errmsg" : "exception: DBClientBase::findN: transport error"
[19:45:06] <jiffe> none of our monitoring gear has detected any network issues
[19:48:23] <jiffe> I am trying to shard a very large collection; it's taking 4 days, and it usually quits 2-3 days in with this error now
[19:48:53] <jiffe> mongodb really needs to just retry in this case rather than fail
[19:53:10] <jiffe> hmm, it looks like it did enable sharding on the collection this time though
[20:07:41] <crised> Are Capped Collections stored in RAM?
[20:36:54] <crised> cheeser: asking the compose.io guys about the maximum size of a capped collection for the 1 GB plan, they seem not to be able to tell me
[20:37:27] <crised> cheeser: capped collections seem not so convenient, because deleting documents individually is not allowed.
[20:38:04] <cheeser> capped collections have a very targeted use case
[20:38:49] <crised> cheeser: I need the 'trigger' functionality, and the guaranteed order is nice too
[20:39:03] <cheeser> what "trigger" functionality?
[20:40:20] <crised> cheeser: "As new documents are inserted into the capped collection, you can use the tailable cursor to continue retrieving documents."
[20:40:52] <crised> cheeser: "What you are thinking of sounds a lot like triggers. MongoDB does not have any support for triggers, however some people have "rolled their own" using some tricks. The key here is the oplog."
[20:41:39] <cheeser> that has nothing to do with triggers.
[20:43:05] <crised> cheeser: I think it has something to do with relational database triggers
[20:43:15] <crised> cheeser: That notifies you when new data is added, http://jpaljasma.blogspot.com/2013/11/howto-mongodb-tailable-cursors-in-nodejs.html
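The "rolled their own" trick quoted above amounts to tailing the oplog itself. A minimal pymongo sketch; unlike the capped-collection approach it requires a replica set, and the app.events namespace is made up:

```python
import time

import pymongo

oplog = pymongo.MongoClient()["local"]["oplog.rs"]
cursor = oplog.find(
    {"ns": "app.events"},  # watch a single namespace; name is assumed
    cursor_type=pymongo.CursorType.TAILABLE_AWAIT,
)
while cursor.alive:
    try:
        op = cursor.next()
        print(op["op"], op["o"])  # 'i' = insert, 'u' = update, 'd' = delete
    except StopIteration:
        time.sleep(1)
```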
[21:40:06] <hicker> cheeser: In mongoose (I suppose this is the wrong channel for that), say I have { customer: { contact: { _id: ... } } } and I want to populate contact and query it. Is that even possible?
[21:40:25] <cheeser> you can query on subdocuments, yes.
[21:40:52] <hicker> Is it a subdocument though? I thought it was a reference.
[21:41:06] <hicker> Because it's just an _id before population
[21:41:26] <hicker> Is it a subdocument once it's populated?
[21:41:31] <cheeser> well, in the shell that's a subdocument because the contact document is inside the customer doc. not sure how mongoose represents that.
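For what it's worth, querying an embedded subdocument uses dot notation from the shell or pymongo; if contact is stored only as an _id reference, you have to look it up in a separate collection, which is roughly what mongoose's populate() does. A sketch with assumed names:

```python
from pymongo import MongoClient

customers = MongoClient()["app"]["customers"]  # db/collection names assumed
customers.insert_one({"customer": {"contact": {"_id": 1, "email": "a@b.c"}}})

# Dot notation reaches into the embedded subdocument directly.
print(customers.find_one({"customer.contact.email": "a@b.c"}))

# If 'contact' held only an _id, you'd fetch it from a separate collection:
contact = MongoClient()["app"]["contacts"].find_one({"_id": 1})
```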
[23:40:51] <cheeser> i think mongodump takes a lock. but try it and see.
[23:42:21] <Nilium> I think it should only take one if it's given a dbpath or you're using fsyncLock, but yeah, going to need to test this, ultimately.