[00:58:30] <adoming> OK so to reiterate, I am doing a query on this schema http://pastebin.com/TnLm4syg and I am referencing other schemas which in turn are referencing another schema by objectId. My question is how I can db.collection.find() data from a referenced, referenced schema (sorry I'm a noob if this sounds dumb).
[01:14:43] <morenoh149> anyone here doing the mongodb for node devs mooc?
[01:15:33] <joannac> adoming: you are asking a mongoose question.
[01:15:52] <adoming> joannac: ok i'll take it over there
[01:22:24] <morenoh149> what does findOne return if no document matches? http://docs.mongodb.org/manual/reference/method/db.collection.findOne/
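Per the linked reference, findOne() returns the matching document itself (not a cursor), or null when nothing matches. A minimal shell sketch, with a hypothetical collection and field:

    // findOne returns a single document, or null if no document matches
    var doc = db.users.findOne({ email: "nobody@example.com" });
    if (doc === null) {
        print("no matching document");
    }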
[01:37:45] <morenoh149> `Cannot use a writeConcern without a callback` what do?
[01:40:03] <morenoh149> dammit it differs http://mongodb.github.io/node-mongodb-native/2.0/api-docs/
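That error comes up when an acknowledged write has nowhere to report its result. A sketch against the 2.0 node driver (connection string and names illustrative): pass a callback whenever a write concern is in play.

    var MongoClient = require('mongodb').MongoClient;

    MongoClient.connect('mongodb://localhost:27017/test', function (err, db) {
      if (err) throw err;
      // with { w: 1 } the driver needs a callback to hand the
      // acknowledged result (or error) back to you
      db.collection('docs').insertOne({ a: 1 }, { w: 1 }, function (err, result) {
        if (err) throw err;
        console.log(result.insertedCount);
        db.close();
      });
    });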
[02:59:01] <morenoh149> with the node driver, how can I acquire a write lock?
[03:02:04] <morenoh149> 3 operations give a write lock http://docs.mongodb.org/manual/faq/concurrency/#which-operations-lock-the-database
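The driver itself has no "acquire a write lock" call; mongod takes locks per operation, as that FAQ describes. If the goal is to block all writes for a while (say, around a backup), the usual tool is fsyncLock, sketched here in the shell:

    // flush pending writes to disk and block further writes instance-wide
    db.fsyncLock();
    // ... take a filesystem snapshot, copy files, etc. ...
    db.fsyncUnlock(); // release the lock and resume writes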
[05:28:58] <Climax777> hi all. philosophical question: I get what mongodb is good for and not good for. but what can one say about mysql? what is it good for and not good for?
[05:39:21] <arussel> I have document {a: 1, b: 2, c: 3}; if I add an index on a,b will it be used for a query on a, b, and c?
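For the record, a query on a, b, and c can use the {a: 1, b: 1} index: the index narrows on the a/b prefix and c is then filtered per document. A shell sketch with an illustrative collection:

    db.docs.createIndex({ a: 1, b: 1 });
    // explain() should report the {a: 1, b: 1} index being used
    // (BtreeCursor on 2.6, IXSCAN on 3.0+), with c filtered afterwards
    db.docs.find({ a: 1, b: 2, c: 3 }).explain();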
[07:18:22] <x44x45x41x4E> Hi, I've got a question. I'm backing up a MongoDB instance that is on production using mongodump; will my dump be more reliable if I use the --oplog option? Thanks. http://docs.mongodb.org/manual/tutorial/backup-with-mongodump/#point-in-time-operation-using-oplogs
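Per that tutorial, --oplog captures oplog entries written while the dump runs, so the result is a consistent point-in-time snapshot (it requires a replica set member). Illustrative invocation:

    # dump with the oplog window included
    mongodump --host localhost:27017 --oplog --out /backups/dump
    # on restore, replay that window to reach the point-in-time state
    mongorestore --oplogReplay /backups/dump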
[07:22:19] <Viesti> hum, thinking of representing a tree structure in mongo as just one document, having operations to add and delete nodes
[07:22:32] <Viesti> finding out that $pull doesn't support removing by array index
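A common workaround, since $pull matches by value rather than position: $unset the element at the index (which leaves a null behind), then $pull the null. Field and collection names here are hypothetical:

    db.trees.update({ _id: id }, { $unset: { "children.2": 1 } });
    db.trees.update({ _id: id }, { $pull: { children: null } });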
[09:03:19] <Yogurt> Yeah. (: I am here too, for help. I asked so many people and no one answers. Btw I am hanging around for the chat; I've been chatting on mIRC for over 20 years.
[09:17:55] <Yogurt> So, can I ask you a question too?
[09:53:13] <Tomasso> hello, I read documents from a collection, process and modify them, remove the _id field, and insert them into another collection, and I get DUPLICATE KEY. I do the same but insert into a collection in a different database, and I get the same duplicate key... I'm lost on this. I read there was a bug on something like that; I use mongo 2.6.1
[09:54:17] <Tomasso> also I don't understand, since I removed the _id field.. I print the doc to stdout and _id is not there.
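For comparison, copying documents between collections in the shell works cleanly if _id is deleted from the in-memory copy before the insert, so the target generates a fresh one; something going wrong in that step is the usual cause of E11000 on _id. An illustrative sketch:

    db.source.find().forEach(function (doc) {
        delete doc._id;        // must happen before the insert
        db.target.insert(doc); // target assigns a new _id
    });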
[10:55:59] <aaearon> what are others using in regards to python orm-like layers for mongodb
[10:57:53] <Yogurt> aaearon sorry bro, but no one is here to help. you have to believe me, i tried so much (:
[10:58:30] <aaearon> nah i was helped here yesterday, you just have to be patient :)
[11:34:54] <temhaa> Also I have another question. On the mongo server there are some problems. mongo slave output: http://picpaste.com/mongoslave-7fl71XCc.png
[11:35:29] <temhaa> what is the reason? It has a 747-hour lock, I guess
[12:11:52] <temhaa> joannac: Actually our problem is that the master log is very big but the slave logs are very small. I need to check that master-slave replication is working.
[12:53:08] <Tomasso> after modifying an object and inserting it into another collection, I get Duplicate key. I tried to delete the _id entry, and got the same error... I tried ticket["_id"] = BSON::ObjectId.new and again the same error.... I don't know what to do
[12:55:06] <StephenLynx> try printing the object you are inserting before inserting it.
[12:58:51] <Tomasso> I'm generating the _id by code, this is what it prints now {"_id"=>BSON::ObjectId('54be4d05c783c41d5c000001') ..........
[13:00:51] <StephenLynx> are you printing the error message? it says which field is duplicated.
[13:01:02] <Tomasso> in both cases, with _id, and without _id
[13:02:54] <StephenLynx> I think you are messing up something and not actually removing the id. there's no way it would say it has a duplicate field when the object doesn't even contain such a field
[15:21:30] <kexmex> why would they target specific persons
[15:22:01] <kexmex> all that encryption stuff is there, but it's just strange that it is not used
[15:22:15] <StephenLynx> ok, now what would be the costs for them to target every single person and not get caught so everyone would become aware of it?
[15:22:33] <StephenLynx> I understand it would be easy for them to place
[17:04:40] <cheeser> as in, i run a single node in prod.
[17:04:56] <cheeser> my needs aren't that big, though, on this one.
[17:23:57] <neo_44> cheeser: one of the first things I look at in any architecture is fault tolerance.....with a single node you have no failover, so you have a single point of failure
[17:24:19] <neo_44> you also can't do maintenance on mongo without downtime in single-node mode.
[17:24:57] <neo_44> and from the client side a single node is the same as a replica set...so there are no code changes....just the connection string needs to be updated to have the seed list
[17:57:22] <jayjo> I'm using mongoengine to interact with mongo, is it possible to return an object instead of a queryset when I query the database?
[18:22:07] <imachuchu> so I need to implement changelog functionality for an existing collection, tracking only modifications to individual documents. I'm thinking of doing it by appending a subarray on each document that stores a subdocument for each change (so who, when, what, old value, and new value). Since I'm a bit new to MongoDB I'm wondering: does this sound good or is there a better way?
[18:23:33] <neo_44> jayjo: yes, by default it returns a cursor
[18:23:47] <neo_44> but if you query for a single document you can get the object itself
[18:25:21] <neo_44> jayjo: Client.objects.get(__raw__={"_id": oid}) this will return a single document
[18:27:27] <neo_44> imachuchu: why do you need to track all deltas?
[18:27:59] <neo_44> imachuchu: do you need to know what changed, or just that something changed? ... is this to roll back a change?
[18:34:13] <GothAlice> imachuchu: If one needs to track document changes over time, one could watch the oplog the same way MongoDB replication does, making note of updates across the desired collection. Then, to track additional information (beyond the literal query used to make the change) you can use $comment within the query to pass additional data through. I do this to pass the User ID who requested that update.
[18:35:26] <GothAlice> https://github.com/cayasso/mongo-oplog is an example library to provide this type of low-level oplog functionality.
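A sketch of the $comment idea described above, with hypothetical names: the operator rides along in the update's query document, so tooling watching the operation can recover who made the change.

    db.records.update(
        { _id: recordId, $comment: "user:42" }, // who is acting, per GothAlice's trick
        { $set: { status: "approved" } }
    );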
[18:36:38] <neo_44> Depending on the use case I agree with GothAlice....this would be great for auditing if needed
[18:37:08] <imachuchu> neo_44: it's so that a manager can audit to see what all has been changed in a record and by what employee (essentially)
[18:37:31] <GothAlice> Bam, imachuchu: mongo-oplog will be your friend. :)
[18:37:36] <imachuchu> GothAlice: that's almost exactly why I figured I should ask here, let me take a look and see if that's closer to what I want/need
[18:37:51] <imachuchu> GothAlice: I'll go take a look at that too, thanks!
[18:39:22] <neo_44> imachuchu: I would add the $comment as GothAlice suggested on all updates to the record
[18:39:38] <neo_44> that will give you WHO....that is what I have done in the past
[18:40:03] <neo_44> imachuchu: if you use a service layer it could get tricky because the WHO would be the service layer not the person...FYI
[18:42:45] <neo_44> imachuchu: you could also use a separate capped collection to write the old document and who changed it....this would clean itself up over time but would give you insight for a certain time period
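A sketch of that capped-collection approach, names illustrative: a capped collection is a fixed-size, insertion-ordered buffer that discards its oldest entries automatically.

    // 100 MB rolling window of audit entries
    db.createCollection("audit", { capped: true, size: 100 * 1024 * 1024 });
    // before applying an update, stash the current version plus who/when
    var old = db.records.findOne({ _id: recordId });
    db.audit.insert({ when: new Date(), who: userId, doc: old });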
[18:47:46] <imachuchu> neo_44: that was coming up as my next question: is there any reason this shouldn't be stored inside a separate collection inside MongoDB? While there shouldn't be too many changes (<100, most likely <50) per record, I'm still a firm believer in preparing for the worst
[18:48:52] <saml> {"x": {}} how can I query for docs where x has at least one key? such as {"x":{"y":1}}
[18:49:07] <imachuchu> the "who" will be a bit more difficult, since it's who in the application changed the record, not who in MongoDB, but I'll look into implementing mongo-oplog and see what I can find
[18:49:34] <neo_44> imachuchu: are the changes atomic and replace the entire document? or a lot of little updates?
[18:50:24] <neo_44> imachuchu: I would use a separate collection and bucket all the changes so 1 query would return all changes for that doc over time
[18:50:31] <imachuchu> neo_44: since I'm writing the frontend that's doing the changes right now, whatever's best. I believe they are little changes but I can make them all one (I'm using Mongoengine and haven't really checked)
[18:51:31] <neo_44> if you use mongoengine and just save the object (object.save()) it will replace the entire document....so that is multiple changes at once....then you could just override the .save() to copy the document to the new collection first
[18:51:39] <neo_44> then save it....you will have both the old and new every time
[18:52:09] <neo_44> I use mongoengine but have wrapped it in a data access layer to abstract away the mongoengine piece for the UI/API guys that use my DAL.
[18:54:49] <imachuchu> neo_44: hmm... I see. I'll need to look into what I can get because storing a full copy on every update to the capped collection seems wasteful
[18:54:49] <saml> then i can go from there via aggregation framework
[18:55:33] <saml> how do i aggregate each key of x?
[18:56:07] <neo_44> imachuchu: what seems wasteful? duplicating the data?
[18:56:07] <saml> db.asdf.aggregate({$match:{x:{$ne:{}}}}, .... want to get count for each key x.$key
[18:57:40] <imachuchu> neo_44: duplicating the whole document on every commit when I'm only looking at storing the deltas
[18:58:27] <neo_44> imachuchu: with a capped collection there is no fragmentation, and how big are the docs really? It would be easier to just store a copy than to try to only store the deltas.
[18:58:41] <neo_44> just depends on the complexity you want in the code vs ease of use
[18:58:59] <saml> i think i need to split {x:{a:1,b:1}} into {'x.a':1} and {'x.b':1} and then group
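On servers of that era this needs mapReduce or restructuring x into an array of {k, v} pairs; on MongoDB 3.4+ the $objectToArray operator does exactly that split, sketched here:

    db.asdf.aggregate([
        { $match: { x: { $exists: true, $ne: {} } } },
        { $project: { kv: { $objectToArray: "$x" } } }, // {a:1,b:1} -> [{k:"a",v:1},{k:"b",v:1}]
        { $unwind: "$kv" },
        { $group: { _id: "$kv.k", count: { $sum: 1 } } }
    ]);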
[18:59:25] <neo_44> saml: you should use Elasticsearch for aggregation
[19:00:12] <neo_44> if you just need to agg something it is fast..
[19:00:46] <imachuchu> neo_44: true, and the documents aren't that big, so it's not too big of a deal. I'll go implement something and come back here with any questions. Thank you for all of your help
[22:01:27] <FunnyLookinHat> I have a collection of "particles" - and each particle belongs to a beam ( one beam can have many particles ). https://github.com/funnylookinhat/tetryon/blob/master/spec/particles.txt
[22:01:44] <FunnyLookinHat> I chose to embed the beams inside the particles because it will make generating reports and whatnot far faster
[22:02:03] <FunnyLookinHat> But I'm seeing an issue - I want the beam attached to each particle to always be the same -
[22:02:43] <FunnyLookinHat> The only way I can think to do that is, when I create a new particle, read another with the same beam.id and then use that beam information in creating the new particle
[22:02:51] <FunnyLookinHat> Is that... the "mongo" way to do it?
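One common alternative to reading a sibling particle, sketched with hypothetical names: keep a small beams collection as the source of truth, embed from it on insert, and multi-update the denormalized copies on the rare occasions a beam changes.

    var beam = db.beams.findOne({ _id: beamId }); // canonical copy
    db.particles.insert({ energy: 9000, beam: beam });
    // if a beam ever changes, repair every particle embedding it
    db.particles.update(
        { "beam._id": beamId },
        { $set: { "beam.name": "new name" } },
        { multi: true }
    );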
[22:13:04] <kexmex> so like, due to the non-existence of a signed mongodb rpm for CentOS, i am trying to build my own. How do I build an rpm after building from source?
[22:13:11] <kexmex> i tried to use the spec in rpm folder
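A typical rpmbuild flow, assuming the spec in the repo's rpm/ directory matches the source tarball version (all paths and filenames illustrative):

    mkdir -p ~/rpmbuild/{SOURCES,SPECS}
    cp mongodb-src.tar.gz ~/rpmbuild/SOURCES/
    cp rpm/mongo.spec ~/rpmbuild/SPECS/
    rpmbuild -ba ~/rpmbuild/SPECS/mongo.spec
    # packages land under ~/rpmbuild/RPMS/<arch>/; sign them (needs a
    # GPG key configured via the %_gpg_name macro) with:
    rpm --addsign ~/rpmbuild/RPMS/x86_64/mongodb-*.rpm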
[22:17:31] <francescu> saml: and why not check the $type? if x is Object and $ne to {}
[22:18:15] <francescu> saml: oops sorry my history was too high