[02:21:57] <girvo> hi all, quick question. using the node.js api for mongo, is it possible to get the _id of the object through the "err" callback without needing to `JSON.parse(err)` and pull it out that way?
[02:27:46] <girvo> basically, check this snippet to see what I have to do. Is there a nicer way?
[02:28:34] <girvo> Lines 5-8 being the key ugly hack
[04:59:23] <Jadenn> mm. so i tried to migrate mongo to a new server, however the users didn't seem to make it over. is there a hidden users file somewheres?
[05:06:43] <Jadenn> yeah. i copied the data folder over, and i can't connect using the same username and password as on my old server
[06:59:07] <narutimateum> can i used mongodb like in mysql?
[06:59:16] <narutimateum> can i use* mongo like in mysql?
[07:00:00] <joannac> mongodb is a database just like mysql
[07:00:48] <joannac> however you'll need to learn how to use it, it doesn't have the same syntax for insert/delete/update/select etc
[07:00:55] <narutimateum> so i can still make normalized collections
[07:01:07] <narutimateum> i have an ORM to handle the syntax
[07:01:14] <joannac> if you want; i don't recommend it though
[07:02:18] <Gargoyle> narutimateum: If you want normalisation and ORM, why are you thinking about using Mongo?
[07:02:50] <narutimateum> because the data for products is varied and mongo is perfect for schemaless data
[07:03:42] <narutimateum> okay this is my product collection http://i.imgur.com/JbtW591.png and this is my store collection http://i.imgur.com/Yf9sJyJ.png is my structure alright?... but how do i get store info if i can't join
[07:05:06] <Gargoyle> narutimateum: do two queries in your code.
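Gargoyle's two-queries-instead-of-a-join pattern, sketched with plain in-memory arrays standing in for the products and stores collections. All data and field names (store_id, name, title) are invented for illustration; with a real driver each `find` below would be a `findOne` call.

```javascript
// In-memory stand-ins for two collections. A product holds only a
// reference (store_id) to its store, as in narutimateum's structure.
const stores = [
  { _id: "s1", name: "Acme" },
  { _id: "s2", name: "Globex" },
];
const products = [{ _id: "p1", title: "Widget", store_id: "s1" }];

// Query 1: fetch the product (the findOne equivalent).
const product = products.find((p) => p._id === "p1");

// Query 2: fetch the referenced store using the id held on the product.
const store = stores.find((s) => s._id === product.store_id);
```

Two round trips instead of one join, but each one is a cheap `_id` lookup.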
[07:07:39] <narutimateum> hmm... so the structure okay?
[07:19:05] <dsirijus> so, i've created collections and documents and so on...
[07:19:18] <Repox> Hi guys. Need some input. I'm trying to setup a mongodb as slave. Currently, I'm getting this in the log: http://pastie.org/private/ftkk8cxtuptrymfrdbb7oa - how can I fix this?
[07:19:31] <dsirijus> is there a common practice where the document i'm sending to client from mongo gets all ObjectIDs replaced with actual documents from other collections?
[07:19:42] <joannac> why are you using master-slave and not replica sets?
[07:20:00] <Repox> joannac, because the servers are not on the same server.
[07:20:33] <Repox> joannac, and primarily because I can't make the replica sets work at all... been trying for the past 24 hours...
[07:20:44] <dsirijus> say i have something like {_id:whatever, something:ObjectID<1>, else:ObjectID<2>}. i'd like those ObjectIDs get replaced with their respective documents from some other collections
[07:20:51] <dsirijus> what's the common practice here?
[07:21:14] <Repox> joannac, you can see what I've tried with replica sets here: http://dba.stackexchange.com/questions/66410/creating-replicas-accross-servers
[07:22:02] <dsirijus> joannac: i presumed so, but is that something handled in mongo or i need to parse manually every query and fetch documents via queries myself and assemble the json for client that way?
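Core MongoDB has no server-side replacement of references with documents; the usual pattern (and roughly what mongoose's populate automates) is to collect the referenced ids, fetch them in one `$in` batch per collection, and substitute client-side. A sketch over in-memory arrays, with invented data:

```javascript
// Stand-in for the "other collection" the ObjectIDs point into.
const others = [
  { _id: 1, label: "first" },
  { _id: 2, label: "second" },
];
// dsirijus's shape: {_id: whatever, something: <ref>, else: <ref>}.
const doc = { _id: "whatever", something: 1, else: 2 };

// One batch fetch, the equivalent of find({_id: {$in: ids}}).
const ids = [doc.something, doc.else];
const byId = new Map(
  others.filter((o) => ids.includes(o._id)).map((o) => [o._id, o])
);

// Substitute each reference with its document before sending to the client.
const populated = {
  ...doc,
  something: byId.get(doc.something),
  else: byId.get(doc.else),
};
```

The batching matters: one `$in` query per referenced collection, not one query per reference.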
[07:28:37] <joannac> rs.initiate() is to initiate one node in your new replica set
[07:29:47] <Repox> joannac, okay. Great. I added the new mongo instance on server A, and I'm getting this message when doing rs.status() on server a: http://pastie.org/private/ovqvrohuj0pbpubvxb2dg
[07:31:12] <Repox> joannac, and the mongo log file on server A says "replSet total number of votes is even - add arbiter or give one member an extra vote"
[07:31:23] <joannac> yeah, don't have an even number
[07:31:34] <joannac> because your elections will fail
[07:31:42] <joannac> you can deal with that in a bit
[07:47:52] <Repox> joannac, it seems that everything is done now. Trying to do a count on server B gave me this: "2014-06-03T07:44:27.046+0000 count failed: { "note" : "from execCommand", "ok" : 0, "errmsg" : "not master" } at src/mongo/shell/query.js:191"
[08:03:05] <Repox> Gargoyle, reading http://docs.mongodb.org/manual/reference/method/rs.slaveOk/ (please correct me if I'm wrong) I should be able to use rs.slaveOk() on the primary replica to allow read operations on other replicas during my connection?
[09:12:55] <Nomikos> Hello all. I have a subdocument/object with 5 string keys and numeric values, is there a way to fetch all documents where the sum of these values > 0?
[09:13:24] <Nomikos> In other words, find all documents where any of the subdocument values are 1+?
[09:14:12] <Nomikos> Gargoyle: hm, but the array will get more keys in the future
[09:15:04] <Gargoyle> Nomikos: Caculate the sum when you update the doc and store it in another field.
[09:27:24] <Gargoyle> Nomikos: Another option would be to store them as name/value pairs. Eg. doc.sub = [{name: foo, val: bar},{name: baz, val: morebar}]
[09:27:57] <Gargoyle> You should then be able to do doc.sub.val: {$gt: 1} type of query.
[09:28:08] <Nomikos> And then I could do doc.sub.val: $gt: 1
[09:28:23] <Gargoyle> If you need to be specific, then $elemMatch can help
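The matching semantics Gargoyle is describing for the name/value-pair layout, written out in plain JavaScript over an in-memory array (sample data invented). The first filter mirrors a `{"sub.val": {$gt: 0}}` query, the second mirrors `$elemMatch` pairing a name with a value condition:

```javascript
const docs = [
  { _id: 1, sub: [{ name: "a", val: 0 }, { name: "b", val: 2 }] },
  { _id: 2, sub: [{ name: "a", val: 0 }, { name: "b", val: 0 }] },
];

// "sub.val": {$gt: 0} -- matches if ANY element's val exceeds 0,
// which answers Nomikos's "any of the values are 1+" question.
const anyPositive = docs.filter((d) => d.sub.some((e) => e.val > 0));

// $elemMatch: {name: "b", val: {$gt: 0}} -- a SINGLE element must
// satisfy both conditions at once.
const elemMatch = docs.filter((d) =>
  d.sub.some((e) => e.name === "b" && e.val > 0)
);
```

A nice property of the pair layout is that new keys need no index changes: one index on `sub.val` covers them all.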
[09:29:47] <Garo_> iggyZiggy: you should just ask your question, instead of asking if anybody can help
[09:32:17] <iggyZiggy> i thought it was, i'm using mongoosejs and calling populate method with goal to list all ref'ed objects, problem is... i'm getting empty array
[09:43:20] <anchnk> Hi, does anyone know if it's possible to build or install mongodb on a Synology NAS (cedarview arch)?
[10:03:16] <iggyZiggy> kali, do you use mongoosejs?
[10:07:41] <iggyZiggy> dsirijus is my boss, we're wondering if this is doable, having an array with multiple references to models, example: platforms:[{type: ObjectId, ref:"facebook"},{type: ObjectId, ref:"notFacebook"},... ] so it can be used with populate method, like so: ...populate(["platforms",... ?
[10:11:50] <netQt> Hi all. How do I use group by when using find() on a collection?
[10:12:50] <netQt> here is my query: collection.find({userId: {$in: ids}})
[10:18:52] <rspijker> netQt: use aggregation? $match and $group
[10:19:30] <netQt> can I do select all in aggregation?
[10:20:00] <netQt> I'm trying and it returns the field I'm grouping by
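A pipeline of the shape rspijker suggests would look roughly like `[{$match: {userId: {$in: ids}}}, {$group: {_id: "$userId", docs: {$push: "$$ROOT"}}}]`, where `$$ROOT` (available from 2.6) keeps the whole documents rather than only the grouping key, which is netQt's "select all" complaint. The same two stages, written out in plain JavaScript over invented sample data:

```javascript
const coll = [
  { userId: 1, v: "a" },
  { userId: 1, v: "b" },
  { userId: 2, v: "c" },
  { userId: 3, v: "d" },
];
const ids = [1, 2];

// $match stage: keep only the userIds of interest.
const matched = coll.filter((d) => ids.includes(d.userId));

// $group stage: bucket by userId, pushing the WHOLE document
// (the $$ROOT trick), not just the grouping key.
const groups = new Map();
for (const d of matched) {
  if (!groups.has(d.userId)) groups.set(d.userId, []);
  groups.get(d.userId).push(d);
}
```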
[10:24:30] <tobyai> Hi there! I recently updated to mongodb 2.6 and I am seeing an unusually slow connection setup (500ms, localhost). I am using mongoose 3.8.8 with mongodb 2.6.1. Any ideas what could cause this?
[12:05:53] <dsirijus> kali: you have some specific hate for mongoose?
[12:06:15] <dsirijus> we're not really experienced with dbs in general, it looks like something that gets us up on the right track
[12:07:55] <Mhretab> Hi, I have a problem with dates on mongodb, when I insert using new Date() the month is one more than I specified and the dates are actually one less. So for example when I insert 2014,4,2 it shows as 2014,5,1
[12:08:19] <Mhretab> does anyone have a suggestion on how to fix this?
[12:10:05] <ehershey> hm, how are you creating /viewing your dates?
[12:10:17] <ehershey> is it a problem related to javascript date objects using zero-indexed months?
[12:10:33] <ehershey> i.e. for a current date, dateobj.getMonth() == 5
[12:10:56] <algernon> pay attention to the part where it says that months start at 0
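Both halves of Mhretab's symptom in one runnable snippet. JavaScript Date months are zero-indexed (0 = January, 11 = December), which produces the "month is one more" effect, and `new Date(y, m, d)` is interpreted in local time while MongoDB stores and prints UTC, which is the usual cause of the "day is one less" effect:

```javascript
// Month index 4 is May, so this is May 2nd, not April 2nd:
const may2 = new Date(2014, 4, 2);
// Month index 3 is April; this is the date Mhretab actually wants:
const april2 = new Date(2014, 3, 2);

// Building the date in UTC avoids the local-time day shift when the
// value is stored and displayed by MongoDB:
const april2utc = new Date(Date.UTC(2014, 3, 2));
```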
[12:10:59] <kali> dsirijus: my problem is specifically with javascript, actually. as for mongoose, i think it is not a good idea to begin using mongodb (or any database) by using a ODM/ORM layer
[12:12:02] <dsirijus> kali, i agree. but we're on a bit of a deadline and, not knowing mongo's best practices, we hoped some of them are baked into mongoose
[12:12:53] <kali> dsirijus: to be fair, the paradigm glitch between mongodb and mongoose is such that with all my good will, i can not understand your question :)
[13:09:47] <narutimateum> can i put a unique index on a field in an embedded document?
[13:22:35] <michaelchum> Hi, I wrote a script to insert 60 million documents into one collection, it runs fluidly until 2 million docs, and then pauses every 5min, thereafter stops, and the log says Mon Jun 2 18:27:45 [clientcursormon] mem (MB) res:310 virt:13046 mapped:6094
[13:31:02] <michaelchum> Oh ok but I did not specify _id thus I believe, mongoDB automatically adds _id and indexes it right?
[13:31:38] <NodeJS> cheeser: I mean I need to make there two command via single query http://stackoverflow.com/questions/24009553/how-to-query-and-update-document-by-using-single-query
[13:43:58] <NodeJS> cheeser: did you see my post on SO http://stackoverflow.com/questions/24009553/how-to-query-and-update-document-by-using-single-query ?
[15:46:31] <jdrm00> quick question ...about objectid validation that occurs in some drivers (single String of 12 bytes or a string of 24 hex characters)...is that restriction something mongo related ...a good practice...or something I must comply in order to avoid problems?
[15:53:53] <rspijker> jdrm00: an _id can be *anything*… an ObjectId is 12 binary bytes, or a 24-character hex string
[15:57:25] <rspijker> if you want to construct a specific ObjectId based on a string, it needs to be a 24 character hex string
[15:57:34] <rspijker> otherwise the constructor will simply throw an exception
[16:03:30] <jdrm00> and in general terms, when should I be using an ObjectId and not just any string as _id?
[16:06:11] <cheeser> personally, i think unless you have a compelling reason to use a specific type for an ID, you should just use ObjectID
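The string-format check rspijker describes, as a standalone sketch: an ObjectId constructed from a string needs exactly 24 hex characters, otherwise the driver's constructor throws. Real drivers ship their own validator (the Node driver exposes ObjectId.isValid, for instance), so treat this regex as illustration only:

```javascript
// True only for strings of exactly 24 hexadecimal characters,
// the form an ObjectId constructor will accept.
function looksLikeObjectIdString(s) {
  return typeof s === "string" && /^[0-9a-fA-F]{24}$/.test(s);
}
```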
[16:08:37] <josias> Hi, am I right here for morphia questions?
[16:09:50] <josias> there is a stackoverflow question that describes my problem but it is unanswered: http://stackoverflow.com/questions/18791045/mongodb-elemmatch-subdocuments
[16:23:00] <josias> no morphia expert in here?
[16:37:05] <josias> ok, I'll try to break the problem down to MongoDB: why can't I use subdocuments in $elemMatch? e.g. db.Journey.find({ "stops" : { "$elemMatch" : { "point" : { "key" : "Master:99651"},"time" : 1234}}}) doesn't work if the subdocument has more than one element
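The likely fix for josias's query is dot notation inside $elemMatch, i.e. `db.Journey.find({stops: {$elemMatch: {"point.key": "Master:99651", time: 1234}}})`: `{point: {key: ...}}` is exact-document equality, so it misses whenever `point` carries any other field. The difference, written out in plain JavaScript over an invented stop:

```javascript
// A stop whose point subdocument has a sibling field next to `key`.
const stop = { point: { key: "Master:99651", extra: 1 }, time: 1234 };

// {point: {key: "Master:99651"}} compares the WHOLE subdocument,
// so the extra field makes it miss (key-order caveats aside, the
// JSON comparison stands in for exact-document equality here):
const exactMatch =
  JSON.stringify(stop.point) === JSON.stringify({ key: "Master:99651" });

// {"point.key": "Master:99651"} tests only that one path, so
// sibling fields on point don't matter:
const dottedMatch = stop.point.key === "Master:99651" && stop.time === 1234;
```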
[16:39:31] <og01> Hello, I want to aggregate and group by multiple fields, can I use some sort of composite key in the _id?
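For og01's question: yes, $group accepts a document as its _id, e.g. `{$group: {_id: {make: "$make", year: "$year"}, n: {$sum: 1}}}`. The same composite-key bucketing in plain JavaScript, with invented data (the JSON string is just a cheap composite key; fields must be serialized in a consistent order):

```javascript
const rows = [
  { make: "a", year: 2013 },
  { make: "a", year: 2013 },
  { make: "a", year: 2014 },
];

// Count per (make, year) pair, mirroring a composite-_id $group.
const counts = new Map();
for (const r of rows) {
  const key = JSON.stringify({ make: r.make, year: r.year });
  counts.set(key, (counts.get(key) || 0) + 1);
}
```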
[18:00:24] <gancl> collection.findOne(.., function(err, doc) Can I collection.findOne again in the result "doc"?
[18:00:41] <Shapeshifter> Hi. If I have 3 documents like this: {r: 1, i: 1, d: "foo"}, {r: 2, i: 1, d: "bar"}, {r: 1, i: 2, d: "baz"}, what kind of query can I use to retrieve those documents, which, among the other documents with the same i property, have the highest r property? In this example the 2nd and 3rd document would match, because even though there's another document with i: 1, its r is lower than the other document with i: 1.
[18:09:00] <LesTR> nope, that returns all documents which have i = 1, sorted by r
[18:09:18] <LesTR> but be careful about indexes and db size : )
[18:09:45] <LesTR> sort may be a very expensive operation
[18:11:11] <Shapeshifter> mhh. but that's not really what I meant. Imagine... a car maker which has different models ("i"). For every model it has different releases/versions ("r"). One document contains information about one version of one car. I would like to retrieve those documents containing data about the newest release of every model (but not multiple documents on the same model).
[18:11:38] <Shapeshifter> so: "any i, but highest r for any i"
[18:12:38] <Shapeshifter> e.g. given combinations i1 i2 i3 i4 a1 a2 b1 b2 b3, the query should return i4, a2 and b3
[18:19:26] <Shapeshifter> og01: not really... I'm currently thinking about how to model the problem and if I can effectively query the data afterwards. The problem is this: For a very large number of "nodes", I need to store some data. The whole graph of nodes goes through many cycles but in every cycle, the data of only a few nodes changes. So if I were to store the data for every node for every cycle, I would have a huge amount of duplicate data. Instead, ...
[18:19:32] <Shapeshifter> ... I think I could store the data only when it changes. But this means that in order to answer the query "give me the data for all nodes at cycle X", I need to be able to efficiently determine which data is the newest for a given node.
[18:20:24] <Shapeshifter> The alternative would simply be to store the data separately but still write a tiny document containing only the node id, revision id and data id - so this would still be duplicated many times (with different revision id) but the query would be trivial
[18:20:57] <Shapeshifter> but for thousands of revisions of millions of nodes this can still amount to gigabytes of data
[18:21:34] <Shapeshifter> although finally I thought I could do the first - *but* keep a fringe of newest data. so the newest state can be retrieved immediately and then I go back from there if required.
[18:21:40] <Shapeshifter> that would be a nice compromise
[18:22:07] <og01> Shapeshifter: this isnt what you mean is it? http://pastebin.com/tW5AP0JK
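Shapeshifter's "highest r per i" is the classic sort-then-group shape, roughly `[{$sort: {r: -1}}, {$group: {_id: "$i", doc: {$first: "$$ROOT"}}}]` in the aggregation framework. The same reduction written out in plain JavaScript over the three documents from the question:

```javascript
const docs = [
  { r: 1, i: 1, d: "foo" },
  { r: 2, i: 1, d: "bar" },
  { r: 1, i: 2, d: "baz" },
];

// Keep, for each distinct i, the document with the highest r.
const best = new Map();
for (const doc of docs) {
  const cur = best.get(doc.i);
  if (!cur || doc.r > cur.r) best.set(doc.i, doc);
}
// best now holds {r: 2, i: 1, d: "bar"} and {r: 1, i: 2, d: "baz"},
// matching the "2nd and 3rd document" expectation.
```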
[18:56:11] <SeptDePique> hello all. one stupid question. please answer: is it possible to execute complex queries (like in SQL) on a MongoDB or are there any restrictions?
[18:57:18] <og01> SeptDePique: complex is subjective, but yes you could create quite a large and mind bending query in MongoDB
[18:58:08] <og01> SeptDePique: I'm still new to mongodb, and i would say the first "limitation" i came across was that there is no join equivalent
[18:59:11] <og01> but that's not to say that MongoDB isn't powerful, it has a great aggregation/mapReduce pipeline that is very useful
[18:59:42] <og01> and document based records with nested data can work very well and simplify some situations where joins would be needed in sql
[19:00:01] <SeptDePique> og01: ok... does that also imply that you can't use something like "SELECT t.x, s.y FROM t, s WHERE t = s"??
[19:08:02] <og01> performance wise it has been great though, not that my datasets are particularly large, its working great
[19:08:46] <og01> do you have any other specific questions?
[19:09:20] <SeptDePique> Actually I'm deciding between MySQL, Postgres and MongoDB. I don't know. The only thing that makes me hesitate is that I want to make queries over multiple tables with specific restrictions. don't know if Mongo will make trouble
[19:10:29] <og01> the thing you might want to consider is that ultimately your data will be nested, and that you would need fewer tables
[19:16:13] <og01> it feels bad coming from a SQL background to do that, but it actually works ok, and in the majority of cases the querying and aggregation ability of nested data in mongodb is so powerful and quick that it's easily a good trade-off for not having joins
[19:18:03] <michaelchum> Does anyone know, in python with PyMongo, if I do db.insert(mydocument), will it directly insert into MongoDB and wait until it is inserted before continuing my python code or it's all pushed to Mongo with a buffer?
[19:21:06] <og01> mongonoob: if you're certain, then i guess you can just store the data in any field without special consideration
[19:21:24] <mongonoob> mongonoob: i see. i don't need to encode it or anything?
[19:21:39] <mongonoob> og01: lol. i see. i don't need to encode it or anything?
[19:22:03] <og01> mongonoob: (disclaimer: im new to mongodb), i dont see why you would need to encode the data
[19:22:50] <mongonoob> og01: im new too :). im using pymongo. i have a file type. so im assuming i can just use the insert function to insert the file as is? ill try it out and see what happens
[19:22:50] <og01> mongonoob: as it is encoded as bson by mongo itself, this isnt some interpreted query language
[19:23:11] <mongonoob> og01: ohhh, mongo encodes it with bson automatically?
[19:23:50] <og01> well thats how its stored in the database, but my point is that there isnt an interpreted query language that you need to worry about like you have with SQL drivers
[19:24:42] <og01> mongonoob: i write perl and nodejs, and i dont really know python, but you'll need to read the data into a variable, you can't pass it a filehandle for instance
[19:34:12] <gancl> Hi! native-Mongodb node.js how to findAndModify insert new datas to sub document? http://stackoverflow.com/questions/24017372/nodejs-mongodb-findandmodify-exception-e11000-duplicate-key-error-index
[20:56:10] <cheeser> 2014-06-03T16:54:30.399-0400 SEVERE: Failed global initialization: BadValue The "verbose" option string cannot contain any characters other than "v"
[21:05:06] <__ls> Could someone explain to me how it happens that for a read-only database (I disabled all writes to any database temporarily to test just-read performance), in which there is a decent number of connections (~1000), lots of queries are slow or time out (taking about 1 sec) while apparently waiting for a read lock? I thought read locks could be shared among connections? Is there a limit on how...
[21:05:08] <__ls> ...many connections (or queries?) can share a read lock?
[21:11:04] <__ls> the end of a log line for one of these queries looks usually like this: "ntoreturn:5 ntoskip:0 nscanned:1592 nscannedObjects:0 keyUpdates:0 numYields:0 locks(micros) r:930424 nreturned:0 reslen:20 930ms"
[21:14:43] <__ls> as far as i can tell from mms, all queries are covered by indexes
[21:16:07] <Derick> it still had to scan 1592 documents...
[21:17:16] <timing> I want my full text search index to allow searching including stopwords. Is there a way to regenerate the index with the stopwords defined here: https://github.com/mongodb/mongo/blob/master/src/mongo/db/fts/stop_words_english.txt
[21:19:57] <__ls> Derick: wouldn't nscanned=1592 mean that 1592 index entries were scanned? These are $in queries as well as sorted, so I wasn't sure if it was reasonable to try and bring these numbers closer to 0
[21:20:14] <__ls> also, if each individual query were slow, I wouldn't be seeing this behavior only under high loads?
[21:20:19] <timing> Derick: I really don't get the words chosen in the lists
[21:22:42] <Derick> timing: yeah, I know. So vote for that jira ticket
[21:22:43] <__ls> Aside from the specifics, I'm just hung up on this conceptual issue that I'm having ... if read locks are shared, and there are no writes happening, why are queries still waiting for read locks?
[21:23:11] <__ls> I think I'm just not getting some additional constraint here, and I haven't found an answer although I've been googling myself to death ...
[21:23:37] <__ls> Derick: ok, I'll just hang around, maybe someone else knows
[22:22:17] <joneshf-laptop> having some issues with casbah and android interop
[22:22:47] <joneshf-laptop> i can set up a connection to mongolab from a terminal and query the db there no problem
[22:23:08] <joneshf-laptop> but when i try to do similar in the app it dies wanting a slf4j logger
[22:23:18] <joneshf-laptop> any ideas what to try here?
[22:25:12] <joneshf-laptop> actually scratch that, i get a warning on the terminal (scala)
[22:25:17] <joneshf-laptop> saying it'll default to noop
[22:25:57] <joneshf-laptop> but in android it's: java.lang.NoClassDefFoundError: org.slf4j.impl.StaticLoggerBinder
[22:27:27] <joneshf-laptop> or maybe just a pointer as to how to set up logging
[22:27:40] <joneshf-laptop> i don't mind that, just want to make forward progress
[22:46:01] <proteneer_osx> why is your node driver such a horrendous piece of shit