PMXBOT Log file Viewer


#mongodb logs for Tuesday the 3rd of June, 2014

[02:21:57] <girvo> hi all, quick question. using the node.js api for mongo, is it possible to get the _id of the object through the "err" callback without needing to `JSON.parse(err)' and pull it out that way?
[02:27:46] <girvo> basically, check this snippet to see what I have to do. Is there a nicer way?
[02:27:54] <girvo> http://paste.stacka.to/guqumisaju.js
[02:28:34] <girvo> Line 5-8 being the key ugly hack
[04:59:23] <Jadenn> mm. so i tried to migrate mongo to a new server, however the users didn't seem to make it over. is there a hidden users file somewheres?
[05:06:43] <Jadenn> yeah. i copied the data folder over, and i can't connect using the same username and password as on my old server
[06:59:07] <narutimateum> can i used mongodb like in mysql?
[06:59:16] <narutimateum> can i use* mongo like in mysql?
[06:59:43] <joannac> depends.
[07:00:00] <joannac> mongodb is a database just like mysql
[07:00:48] <joannac> however you'll need to learn how to use it, it doesn't have the same syntax for insert/delete/update/select etc
[07:00:55] <narutimateum> so i can still make normalized collection
[07:01:07] <narutimateum> i have ORM to handle the syntaxes
[07:01:14] <joannac> if you want; i don't recommend it though
[07:02:18] <Gargoyle> narutimateum: If you want normalisation and ORM, why are you thinking about using Mongo?
[07:02:50] <narutimateum> because data for products is varied and mongo is perfect for the schemaless
[07:03:42] <narutimateum> okay this is my product collection http://i.imgur.com/JbtW591.png and this is my store collection http://i.imgur.com/Yf9sJyJ.png is my structure alright? but how do i get store info if i cant join
[07:05:06] <Gargoyle> narutimateum: do two queries in your code.
[07:07:39] <narutimateum> hmm... so the structure okay?
[07:19:05] <dsirijus> so, i've created collections and documents and so on...
[07:19:18] <Repox> Hi guys. Need some input. I'm trying to setup a mongodb as slave. Currently, I'm getting this in the log: http://pastie.org/private/ftkk8cxtuptrymfrdbb7oa - how can I fix this?
[07:19:31] <dsirijus> is there a common practice where the document i'm sending to client from mongo gets all ObjectIDs replaced with actual documents from other collections?
[07:19:42] <joannac> why are you using master-slave and not replica sets?
[07:20:00] <Repox> joannac, because the servers are not on the same server.
[07:20:21] <joannac> Repox: ?
[07:20:33] <Repox> joannac, and primarely because I can't make the replica sets work at all... been trying for the past 24 hours...
[07:20:44] <dsirijus> say i have something like {_id:whatever, something:ObjectID<1>, else:ObjectID<2>}. i'd like those ObjectIDs get replaced with their respective documents from some other collections
[07:20:51] <dsirijus> what's the common practice here?
[07:20:56] <joannac> dsirijus: multiple queries
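joannac's "multiple queries" answer can be sketched in plain JavaScript, with no driver involved — the collection contents and ids below are invented for illustration; in a real app each lookup would be a separate find() against the referenced collection:

```javascript
// Sketch of the "multiple queries" pattern: fetch the parent document,
// then resolve its ObjectId references with separate lookups.
// The Map stands in for the results of a second find() call.
const authors = new Map([
  ["id1", { _id: "id1", name: "Ann" }],
  ["id2", { _id: "id2", name: "Bob" }],
]);

function resolveRefs(doc, lookup) {
  // Replace each referenced id with its full document (a manual "populate").
  const out = { ...doc };
  for (const key of Object.keys(out)) {
    if (key !== "_id" && lookup.has(out[key])) out[key] = lookup.get(out[key]);
  }
  return out;
}

const post = { _id: "p1", something: "id1", "else": "id2" };
console.log(resolveRefs(post, authors));
```

This is what libraries like Mongoose's populate() automate under the hood.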
[07:21:14] <Repox> joannac, you can see what I've tried with replica sets here: http://dba.stackexchange.com/questions/66410/creating-replicas-accross-servers
[07:21:46] <joannac> Repox: umm
[07:22:01] <joannac> incorrect
[07:22:02] <dsirijus> joannac: i presumed so, but is that something handled in mongo or i need to parse manually every query and fetch documents via queries myself and assemble the json for client that way?
[07:22:05] <joannac> which server has data?
[07:22:12] <Repox> joannac, server a.
[07:22:17] <joannac> okay
[07:22:25] <joannac> I'll call the other one server B
[07:22:31] <joannac> shut down mongod on server B
[07:22:37] <joannac> remove the entire data directory
[07:22:59] <joannac> start up server B with the same --replSet rs0 option
[07:23:12] <joannac> then on server A, rs.add("serverB:port")
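Collecting joannac's resync steps into one place — a sketch only; the dbpath, log path, and port are placeholders, not taken from the log:

```shell
# On server B: stop mongod, clear its data directory, and restart
# with the same replica-set name server A uses.
sudo service mongod stop
rm -rf /data/db/*                 # assumption: B's dbpath is /data/db
mongod --replSet rs0 --dbpath /data/db --fork --logpath /var/log/mongod.log

# Then, in a mongo shell connected to server A (the member with data):
#   rs.add("serverB:27017")
```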
[07:23:38] <dsirijus> joannac: is http://docs.mongodb.org/manual/aggregation/ something related to this?
[07:23:50] <joannac> dsirijus: you have to do it yourself. I think there are layers on top of mongo that will do it for you too
[07:23:58] <joannac> and no, aggregation is something different
[07:24:55] <dsirijus> joannac: so, this? http://mongoosejs.com/docs/3.4.x/docs/populate.html
[07:25:02] <dsirijus> (in example)
[07:25:30] <joannac> yeah, that's what I was thinking of
[07:28:05] <Repox> joannac, maybe a silly question, but shouldn't I rs.initiate() on server B?
[07:28:23] <joannac> Repox: no
[07:28:33] <dsirijus> joannac: thank you.
[07:28:37] <joannac> rs.initiate() is to initiate one node in your new replica set
[07:29:47] <Repox> joannac, okay. Great. I added the new mongo instance on server A, and I'm getting this message when doing rs.status() on server a: http://pastie.org/private/ovqvrohuj0pbpubvxb2dg
[07:31:12] <joannac> yes, it's initialising
[07:31:12] <Repox> joannac, and the mongo log file on server A says "replSet total number of votes is even - add arbiter or give one member an extra vote"
[07:31:23] <joannac> yeah, don't have an even number
[07:31:34] <joannac> because your elections will fail
[07:31:42] <joannac> you can deal with that in a bit
[07:32:28] <joannac> any progress?
[07:32:36] <joannac> what do the logs on node B say?
[07:33:42] <Repox> joannac, hold on.
[07:36:22] <Repox> joannac, the log file on node B says this: http://pastie.org/private/ooj1iymuzm4lvitvn4hfw
[07:37:29] <joannac> yes
[07:37:35] <joannac> that's allocating files
[07:37:39] <joannac> to put all the data in
[07:38:14] <joannac> nothing to worry about yet
[07:47:52] <Repox> joannac, it seems that everything is done now. Trying to do a count on server B gave me this: "2014-06-03T07:44:27.046+0000 count failed: { "note" : "from execCommand", "ok" : 0, "errmsg" : "not master" } at src/mongo/shell/query.js:191"
[07:50:57] <joannac> yeah
[07:51:00] <joannac> rs.slaveOk()
[07:51:08] <joannac> we don't allow reads on secondaries by default
[07:51:13] <Repox> on server b?
[07:52:56] <joannac> yes
[07:55:03] <Repox> joannac, that worked. Awesome :) Will server A load balance read calls automatically to the slave?
[07:55:15] <joannac> no
[07:55:44] <joannac> http://askasya.com/post/canreplicashelpscaling <-- read that
[07:57:58] <Repox> joannac, so rs.slaveOk() is only for helping me read on the slave?
[07:59:17] <Gargoyle> Repox: replica sets are for HA, shards are for performance
[08:00:58] <Repox> Gargoyle, meaning with replicas (according to the article linked) that replicas will take over if one replica crashes?
[08:01:12] <Gargoyle> Yes.
[08:03:05] <Repox> Gargoyle, reading http://docs.mongodb.org/manual/reference/method/rs.slaveOk/ (please correct me if I'm wrong) I should be able to use rs.slaveOk() on the primary replica to allow read operations on other replicas during my connection?
[08:03:36] <Gargoyle> Yes you *can*.
[08:03:58] <Gargoyle> But it's not the primary's job to dish out the load.
[08:04:38] <Gargoyle> That's down to your driver and/or application to make a decision to read old, stale data from a secondary.
[08:05:51] <joannac> also, you mostly shouldn't, as per that article I linked
[08:06:03] <Repox> Gargoyle, I see. That makes sense.
[08:06:32] <Repox> joannac, Yeah, I understand. You really helped out with the replication. Thank you very much.
[08:17:54] <Garo_> Does mongos yet have the option to direct slaveOk queries away from a slave which is lagging behind on replication?
[08:56:09] <iggyZiggy> #mongoosejs guys are still asleep, can anyone help me out with empty array issue using populate method?
[08:57:49] <iggyZiggy> guess not
[09:12:55] <Nomikos> Hello all. I have a subdocument/object with 5 string keys and numeric values, is there a way to fetch all documents where the sum of these values > 0?
[09:13:24] <Nomikos> In other words, find all documents where any of the subdocument values are 1+?
[09:13:43] <Gargoyle> Nomikos: Use an or?
[09:14:12] <Nomikos> Gargoyle: hm, but the array will get more keys in the future
[09:15:04] <Gargoyle> Nomikos: Calculate the sum when you update the doc and store it in another field.
[09:27:24] <Gargoyle> Nomikos: Another option would be to store them as name/value pairs. Eg. doc.sub = [{name: foo, val: bar},{name: baz, val: morebar}]
[09:27:57] <Gargoyle> You should then be able to do doc.sub.val: {$gt: 1} type of query.
[09:28:08] <Nomikos> And then I could do doc.sub.val: $gt: 1
[09:28:14] <Nomikos> .. right, that one :-)
[09:28:23] <Gargoyle> If you need to be specific, then $elemMatch can help
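Gargoyle's name/value layout and the `sub.val: {$gt: ...}` query can be mimicked in plain JavaScript to see which documents match — sample documents invented; on the server the equivalent would be `db.coll.find({"sub.val": {$gt: 0}})` or an $elemMatch variant:

```javascript
// Documents using the name/value-pair layout Gargoyle suggests.
const docs = [
  { _id: 1, sub: [{ name: "foo", val: 0 }, { name: "baz", val: 2 }] },
  { _id: 2, sub: [{ name: "foo", val: 0 }, { name: "baz", val: 0 }] },
];

// Match documents where at least one array element has val > 0 --
// the "any of the subdocument values are 1+" condition.
const matches = docs.filter(d => d.sub.some(e => e.val > 0));
console.log(matches.map(d => d._id)); // [1]
```

New keys added later need no query change, which was Nomikos's concern with the $or approach.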
[09:29:47] <Garo_> iggyZiggy: you should just ask your question, instead of asking if anybody can help
[09:32:17] <iggyZiggy> i thought it was, i'm using mongoosejs and calling populate method with goal to list all ref'ed objects, problem is... i'm getting empty array
[09:43:20] <anchnk> Hi, does anyone knows if it's possible to build or install mongodb on a Synology NAS (cedarview arch) ?
[09:43:39] <Derick> Just don't use NFS with it
[09:44:55] <anchnk> Derick: was that for me?
[09:45:16] <Derick> yes
[09:46:42] <anchnk> ok for now i'm not even sure which version i should grab
[09:47:59] <anchnk> Here are some specs: Intel Atom D2700 Dualcore (2C/4T) 2.13GHz x86 Processor 64-bit@DDR3, 1GB of RAM
[09:48:14] <anchnk> i should just get the linux tar file, SSH to NAS, uncompress and run ?
[09:48:15] <kali> sounds like 32bit
[09:48:32] <kali> or not :)
[09:48:37] <kali> http://ark.intel.com/products/59683/Intel-Atom-Processor-D2700-(1M-Cache-2_13-GHz)
[09:48:44] <kali> 64bit :)
[09:48:51] <kali> so linux/64bit
[09:49:26] <anchnk> no need to build it then ?
[09:50:13] <kali> i don't think so... worth a try, anyway
[09:50:33] <anchnk> yup i'm gonna try it
[09:51:14] <anchnk> thx guys
[09:56:34] <dsirijus> can mongodb's array contain multiple different documents from different collections and different schemas?
[10:00:38] <kali> dsirijus: what ?
[10:03:16] <iggyZiggy> kali, do you use mongoosejs?
[10:07:41] <iggyZiggy> dsirijus is my boss, we're wondering if this is doable, having an array with multiple references to models, example: platforms:[{type: ObjectId, ref:"facebook"},{type: ObjectId, ref:"notFacebook"},... ] so it can be used with populate method, like so: ...populate(["platforms",... ?
[10:11:50] <netQt> Hi all. How do I use group by when using find() on collection?
[10:12:50] <netQt> here is my query collection.find {userId: {$in: ids}}
[10:13:00] <netQt> I also need to add group by
[10:18:52] <rspijker> netQt: use aggregation? $match and $group
[10:19:30] <netQt> can I do select all in aggregation?
[10:20:00] <netQt> I'm trying and it returns the field I'm grouping by
[10:24:30] <tobyai> Hi there! I recently updated to mongodb 2.6 and I am seeing an unusual slow connection setup (500ms, localhost). I am using mongoose 3.8.8 with mongodb 2.6.1
[10:24:30] <tobyai> . Any ideas what could cause this?
[10:56:50] <kali> iggyZiggy: god, no.
[10:58:54] <iggyZiggy> no to mongoose or the second question?
[11:03:43] <kali> iggyZiggy: no to mongoose
[12:05:53] <dsirijus> kali: you have some specific hate for mongoose?
[12:06:15] <dsirijus> we're not really experienced with dbs in general, it looks like something that gets us up on the right track
[12:07:55] <Mhretab> Hi, I have problem with dates on mongodb, when I insert using new Date() the month is one more than I specified and the dates are actually one less. So for example when I insert 2014,4,2 it shows as 2014,5,1
[12:08:19] <Mhretab> does any one has suggestion on how to fix this?
[12:10:05] <ehershey> hm, how are you creating /viewing your dates?
[12:10:17] <ehershey> is it a problem related to javascript date objects using zero-indexed months?
[12:10:33] <algernon> Mhretab: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date
[12:10:33] <ehershey> i.e. for a current date, dateobj.getMonth() == 5
[12:10:56] <algernon> pay attention to the part where it says that months start at 0
[12:10:59] <kali> dsirijus: my problem is specifically with javascript, actually. as for mongoose, i think it is not a good idea to begin using mongodb (or any database) by using an ODM/ORM layer
[12:11:40] <algernon> (day starts at 0, too)
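A quick node.js check of what algernon and ehershey are pointing at: JavaScript Date months are zero-indexed, while day-of-month is 1-based (it's getDay(), the weekday, that starts at 0). The "one day less" symptom is typically a UTC-vs-local display difference rather than the constructor:

```javascript
// JavaScript Date months are zero-indexed: 0 = January, 4 = May.
const d = new Date(2014, 4, 2); // May 2, 2014 -- NOT April 2

console.log(d.getMonth()); // 4  (May)
console.log(d.getDate());  // 2  (day-of-month is 1-based)
// d.getDay() returns the weekday, 0-6 -- that is the value that starts at 0.
// Viewing a stored date in another tool may also shift the day if it
// renders UTC while the date was constructed in local time.
```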
[12:12:02] <dsirijus> kali, i agree. but we're on a bit of a deadline and, not knowing mongo's best practices, we hoped some of them are baked into mongoose
[12:12:45] <Mhretab> ok thank you!
[12:12:53] <kali> dsirijus: to be fair, the paradigm glitch between mongodb and mongoose is such that with all my good will, i can not understand your question :)
[13:01:35] <Kermit_> hi, anybody home?
[13:09:47] <narutimateum> can i put unique in embedded document??
[13:22:35] <michaelchum> Hi, I wrote a script to insert 60 million documents into one collection, it runs fluidly until 2 million docs, and then pauses every 5min, thereafter stops, and the log says Mon Jun 2 18:27:45 [clientcursormon] mem (MB) res:310 virt:13046 mapped:6094
[13:22:41] <michaelchum> Anyone has an idea?
[13:24:07] <cheeser> nothing in the logs?
[13:25:45] <bschrock> exit
[13:27:23] <michaelchum> Yeah, at the end it says Mon Jun 2 18:27:45 [clientcursormon] mem (MB) res:310 virt:13046 mapped:6094
[13:27:26] <michaelchum> Multiples times!
[13:27:48] <cheeser> hrm. strange.
[13:27:51] <cheeser> i dunno
[13:28:00] <cheeser> it just stops inserting altogether?
[13:28:12] <michaelchum> Yeah
[13:28:18] <cheeser> you might try removing indexes before inserting and then recreating them.
[13:28:40] <michaelchum> Before that it pauses every 5min and suddenly inserts for 30seconds, but later on, it stops overall
[13:29:09] <michaelchum> Are indexes added automatically? I'm creating a brand new collection
[13:30:06] <cheeser> only for _id
[13:30:31] <NodeJS> Is it possible to make subquery before updating document?
[13:30:46] <cheeser> a nested query? no.
[13:31:02] <michaelchum> Oh ok but I did not specify _id thus I believe, mongoDB automatically adds _id and indexes it right?
[13:31:38] <NodeJS> cheeser: I mean I need to make there two command via single query http://stackoverflow.com/questions/24009553/how-to-query-and-update-document-by-using-single-query
[13:38:32] <NodeJS> cheeser: Is it possible?
[13:40:31] <narutimateum> how do you update embedded document? l
[13:41:55] <rspijker> narutimateum: same way you update a regular field…
[13:42:55] <cheeser> NodeJS: findAndModify?
[13:43:19] <NodeJS> cheeser: but how?
[13:43:36] <cheeser> how what? findAndModify() is a function
[13:43:44] <narutimateum> this says duplicate key
[13:43:45] <narutimateum> db.product.save({"_id":"ObjectId(538b3f18c4e360de52fe4e3f)","stores":{"_id":"1","available":true}})
[13:43:58] <NodeJS> cheeser: did you my post on SO http://stackoverflow.com/questions/24009553/how-to-query-and-update-document-by-using-single-query ?
[13:44:52] <cheeser> what about it?
[13:45:32] <NodeJS> cheeser: I've described that I need in detail
[13:46:13] <NodeJS> cheeser: before undating the document I need to check that there are no date overlaps
[13:46:18] <cheeser> no, i don't think you can do that in one query
[13:46:51] <NodeJS> cheeser: but maybe there is some hacks to do it?
[13:49:33] <cheeser> i'm sure there are
[13:50:09] <rspijker> I don't see any way to do that in a single query without changing your data model
[13:50:13] <NodeJS> cheeser: can you advise semthing?
[13:50:20] <cheeser> i got nothing
[13:50:33] <NodeJS> rspijker: How should I change it?
[13:52:17] <rspijker> add an overlapping flag to the documents and put the logic for that in the remove and insert functionality of your app
[13:52:35] <rspijker> but that only makes sense if you need that information very often and you don't do loads of inserts/removals
[13:52:54] <rspijker> or updates
[15:00:12] <gancl> Hi! Why my mongodb findAndModify can't insert or update? http://stackoverflow.com/questions/24017372/nodejs-mongodb-findandmodify-increase-score-doesnt-work
[15:05:28] <rspijker> that’s not correct syntax in mongo gancl ...
[15:05:33] <rspijker> might be in node.js, no clue
[15:06:04] <kali> gancl: try to transpose it in the shell to see if it's a database issue or a node issue
[15:06:19] <gancl> kali: OK
[15:46:31] <jdrm00> quick question ...about objectid validation that occurs in some drivers (single String of 12 bytes or a string of 24 hex characters)...is that restriction something mongo related ...a good practice...or something I must comply with in order to avoid problems?
[15:53:53] <rspijker> jdrm00: an _id can be *anything*… an ObjectId is 12 binary bytes, or a 24-character hex string
[15:57:25] <rspijker> if you want to construct a specific ObjectId based on a string, it needs to be a 24 character hex string
[15:57:34] <rspijker> otherwise the constructor will simply throw an exception
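rspijker's 24-character constraint is easy to check before handing a string to an ObjectId constructor — a small sketch (real drivers perform their own validation; this only mirrors the rule described above):

```javascript
// A 24-character hex string is the text form of a 12-byte ObjectId
// (2 hex characters per byte). Pre-checking avoids the constructor
// exception rspijker mentions.
function looksLikeObjectIdHex(s) {
  return /^[0-9a-fA-F]{24}$/.test(s);
}

console.log(looksLikeObjectIdHex("538e1588b3867927cbc1dba6")); // true
console.log(looksLikeObjectIdHex("not-an-objectid"));          // false
```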
[16:03:30] <jdrm00> and in general terms, when should I be using an ObjectId and not just any string as _id?
[16:06:11] <cheeser> personally, i think unless you have a compelling reason to use a specific type for an ID, you should just use ObjectId
[16:08:37] <josias> Hi, am I right here for morphia questions?
[16:09:50] <josias> there is a stackoverflow question that discribes my problem but it is unanswered: http://stackoverflow.com/questions/18791045/mongodb-elemmatch-subdocuments
[16:23:00] <josias> no morphia-expert nobody here?
[16:37:05] <josias> ok I try to break the problem down to MongoDB: Why can't I use subdocuments in $elemMatch? e.g. db.Journey.find({ "stops" : { "$elemMatch" : { "point" : { "key" : "Master:99651"},"time" : 1234}}}) that doesn't work if the subdocument has more than one Element
[16:39:31] <og01> Hello, I want to aggregate and group by multiple fields, can I use some sort of composite key in the _id?
[16:42:05] <og01> ie, $match: {name: "something", time: {$gt: sometime}}, $group: {_id: "$dunnowhatgoeshere", totalScore: {$sum: "$score"}}
[16:42:47] <og01> presume time is per hour, so in this contrieved example I want to get the score for all hoursafter sometime
[16:43:40] <og01> oh nevermind, i've found the solutions
[16:44:16] <og01> _id: {name: '$name', time: "$time"}
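og01's composite `_id` can be simulated in plain JavaScript to see what the $group stage computes — the sample documents are invented; server-side this is `$group: {_id: {name: "$name", time: "$time"}, totalScore: {$sum: "$score"}}` after the $match:

```javascript
// Simulate grouping on a composite key {name, time} and summing score.
const docs = [
  { name: "a", time: 1, score: 5 },
  { name: "a", time: 1, score: 3 },
  { name: "a", time: 2, score: 1 },
];

const groups = new Map();
for (const d of docs) {
  // JSON string of the composite key stands in for the compound _id.
  const key = JSON.stringify({ name: d.name, time: d.time });
  groups.set(key, (groups.get(key) || 0) + d.score);
}
console.log([...groups]);
```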
[17:44:02] <michaelchum> Does anyone know why after 3 million inserts, it pauses 2 minutes before inserting then pauses again, etc?
[17:56:22] <nicolas_leonidas> how do I make sure only authenticated users can access my instance?
[17:56:44] <Nodex> michaelchum : fsync?
[18:00:24] <gancl> collection.findOne(.., function(err, doc) Can I collection.findOne again in the result "doc"?
[18:00:41] <Shapeshifter> Hi. If I have 3 documents like this: {r: 1, i: 1, d: "foo"}, {r: 2, i: 1, d: "bar"}, {r: 1, i: 2, d: "baz"}, what kind of query can I use to retrieve those documents, which, among the other documents with the same i property, have the highest r property? In this example the 2nd and 3rd document would match, because even though there's another document with i: 1, it's r is lower than the other document with i: 1.
[18:06:53] <LesTR> Shapeshifter: db.foo.find({"i": "1"}).sort({"r": -1}).limit(1)
[18:07:22] <Shapeshifter> LesTR: I also want the 3rd document which has i: 2
[18:07:40] <LesTR> so remove the limit
[18:07:51] <Shapeshifter> Basically for any given i, I want *that* document, which has the highest r
[18:08:31] <Shapeshifter> LesTR: wouldn't that query give me one document at a time, sorted by decreasing r?
[18:08:38] <Shapeshifter> as in, eventually all documents?
[18:08:52] <Shapeshifter> without the limit
[18:09:00] <LesTR> nope, return all documents which has i = 1, sorted by r
[18:09:18] <LesTR> but be carefull about indexes and db size : )
[18:09:45] <LesTR> sort may be very expensive operation
[18:11:11] <Shapeshifter> mhh. but that's not really what I meant. Imagine... a car maker which has different models ("i"). For every model it has different releases/versions ("r"). One document contains information about one version of one car. I would like to retrieve those documents containing data about the newest release of every model (but not multiple documents on the same model).
[18:11:38] <Shapeshifter> so: "any i, but highest r for any i"
[18:12:38] <Shapeshifter> e.g. given combinations i1 i2 i3 i4 a1 a2 b1 b2 b3, the query should return i4, a2 and b3
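The "highest r per i" logic Shapeshifter describes can be sketched in plain JavaScript over the three sample documents from the question (server-side, a $sort followed by $group/$first is one aggregation equivalent; the pastebin solution from later in the log isn't reproduced here):

```javascript
// For each distinct i, keep only the document with the highest r --
// "the newest release of every model".
const docs = [
  { r: 1, i: 1, d: "foo" },
  { r: 2, i: 1, d: "bar" },
  { r: 1, i: 2, d: "baz" },
];

const newest = new Map();
for (const doc of docs) {
  const cur = newest.get(doc.i);
  if (!cur || doc.r > cur.r) newest.set(doc.i, doc);
}
console.log([...newest.values()]);
// keeps {r:2,i:1,d:"bar"} and {r:1,i:2,d:"baz"}
```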
[18:13:15] <Shapeshifter> probably group?
[18:13:34] <og01> Shapeshifter: can you pastebin a more meaningfull document?
[18:17:42] <LesTR> ehm, got it and trust me, you don't want do this in mongodb :D
[18:18:07] <LesTR> maybe you can try double sort
[18:18:32] <LesTR> db.foo.find().sort({"model":1, "release-day":-1})
[18:19:26] <Shapeshifter> og01: not really... I'm currently thinking about how to model the problem and if I can effectively query the data afterwards. The problem is this: For a very large number of "nodes", I need to store some data. The whole graph of nodes goes through many cycles but in every cycle, the data of only a few nodes changes. So if I were to store the data for every node for every cycle, I would have a huge amount of duplicate data. Instead, ...
[18:19:32] <Shapeshifter> ... I think I could store the data only when it changes. But this means that in order to answer the query "give me the data for all nodes at cylce X", I need to be able to effectively determine which data is the newest for a given node.
[18:20:12] <og01> Shapeshifter: wait a sec
[18:20:24] <Shapeshifter> The alternative would simply be to store the data separately but still write a tiny document containing only the node id, revision id and data id - so this would still be duplicated many times (with different revision id) but the query would be trivial
[18:20:57] <Shapeshifter> but for thousands of revisions of millions of nodes this can still amount to gigabytes of data
[18:21:34] <Shapeshifter> although finally I thought I could do the first - *but* keep a fringe of newest data. so the newest state can be retrieved immediately and then I go back from there if required.
[18:21:40] <Shapeshifter> that would be a nice compromise
[18:22:07] <og01> Shapeshifter: this isnt what you mean is it? http://pastebin.com/tW5AP0JK
[18:22:29] <Shapeshifter> oh yes! exactly
[18:22:31] <Shapeshifter> thanks
[18:22:54] <Shapeshifter> would I need to keep an index on _id and year for this to be quick?
[18:23:43] <og01> Shapeshifter: err index on model and year
[18:23:58] <Shapeshifter> uh, yes of course.
[18:23:59] <og01> Shapeshifter: im new to mongodb, and i havnt actually tested that
[18:24:01] <Shapeshifter> thank you
[18:24:22] <og01> but thats my understanding of what should work
[18:24:45] <og01> I've written a handfull of queries similar to that one in the code im working on atm
[18:26:58] <og01> Shapeshifter: im going to test that now
[18:27:09] <og01> Shapeshifter: just because i want to be certain in my knowledge
[18:27:13] <Shapeshifter> cool
[18:28:49] <og01> yes worked as expected
[18:28:53] <Shapeshifter> nice
[18:56:11] <SeptDePique> hello all. one stupid question. please answer: is it possible to execute complex queries (like in SQL) on a MongoDB or are there any restrictions?
[18:57:18] <og01> SeptDePique: complex is subjective, but yes you could create quite a large and mind bending query in MongoDB
[18:58:08] <og01> SeptDePique: I'm still new to mongodb, and i would say the first "limitation" i came across was that there is no Join equivalent
[18:59:11] <og01> but thats not to say that MongoDB isnt powerful, it has a great aggregation/mapReduce pipeline that is very useful
[18:59:42] <og01> and document based records with nested data can work very well and simplify some situations where joins would be needed in sql
[19:00:01] <SeptDePique> og01: ok... does that also imply that you can't use something like "SELECT t.x, s.y FROM t, s WHERE t = s"??
[19:00:26] <og01> thats a join, and no you can't
[19:00:59] <og01> but wait! that doesnt mean that MongoDB isnt for you
[19:01:34] <og01> the data of s could be nested into t to great effect
[19:01:59] <og01> or alternatively you could accomplish the same thing with two statements if absolutely necessary
[19:02:42] <SeptDePique> oh that is hard to decide because i still don't know the exact data structure
[19:03:23] <og01> well mongodb is nice for rapid prototyping as you dont predefine your schema
[19:03:38] <og01> you can change the structure of your documents/collections on the fly
[19:04:03] <SeptDePique> ok that's a good argument. thx
[19:04:08] <og01> as i mentioned though, im new to MongoDB, we're using it in my workplace just to experiment and learn
[19:04:29] <og01> (our development team is always trying to keep uptodate and try new things)
[19:05:10] <og01> my experience so far is that it is a pleasure to work with, and both MongoDB and trad SQL have their own strengths and weaknesses
[19:05:35] <og01> and must be worked with in slightly different aproaches
[19:06:09] <og01> I would say as someone who is used to working with json and other highlevel languages, it feels very natural to use
[19:06:21] <og01> but that sometimes i want to do a join.... and cant...
[19:06:41] <SeptDePique> lol
[19:08:02] <og01> performance wise it has been great though, not that my datasets are particularly large, its working great
[19:08:46] <og01> do you have any other specific questions?
[19:09:20] <SeptDePique> Actually I think about using MySQL, Postgre or MongoDB. I don't know. The only thing that makes me hesitate is that I want to make queries over multiple table with specific restrictions. don't know if Mongo will make trouble
[19:10:29] <og01> the thing you might want to consider is that ultimately your data will be nested, and that you would need fewer tables
[19:10:36] <og01> and as such less joins
[19:11:41] <SeptDePique> ok, I hope that will be possible and suitable in all cases. thx again for your answer.
[19:12:18] <og01> the only real solution in the end when you need to join, is to make multiple queries
[19:12:24] <og01> one for each collection (table)
[19:12:54] <og01> there are (in some languages), extensions that make this process more automatic
[19:13:00] <og01> (see DBRefs)
[19:14:27] <SeptDePique> ok
[19:16:13] <og01> feels bad from a SQL background to do that, but it actually works ok, and in the majority of cases the querying and aggregation ability of nested data in mongodb is so powerful and quick that it's easily a good trade-off for not having joins
[19:18:03] <michaelchum> Does anyone know, in python with PyMongo, if I do db.insert(mydocument), will it directly insert into MongoDB and wait until it is inserted before continuing my python code or it's all pushed to Mongo with a buffer?
[19:18:16] <mongonoob> hi!
[19:19:21] <mongonoob> how would one go about inserting a file in a document?
[19:19:24] <mongonoob> bson?
[19:19:27] <SeptDePique> yes og01, I am quite used to SQL and it feels strange to do without JOINS
[19:19:59] <og01> mongonoob: there is a document size limit which makes storing a file into the db a bit tricky
[19:20:21] <og01> mongonoob: you should look at GridFS
[19:20:25] <mongonoob> og01: i think its 4mb, but my docs will never exceed that
[19:20:33] <mongonoob> my docs will be in the kbs
[19:21:06] <og01> mongonoob: if you're certain, then i guess that you can just store the data in any field without special consideration
[19:21:24] <mongonoob> mongonoob: i see. i don't need to encode it or anything?
[19:21:39] <mongonoob> og01: lol. i see. i don't need to encode it or anything?
[19:22:03] <og01> mongonoob: (disclaimer: im new to mongodb), i dont see why you would need to encode the data
[19:22:50] <mongonoob> og01: im new too :). im using pymongo. i have a file type. so im assuming i can just use the insert function to insert the file as is? ill try it out and see what happens
[19:22:50] <og01> mongonoob: as it is encoded as bson by mongo itself, this isnt some interpreted query language
[19:23:11] <mongonoob> og01: ohhh, mongo encodes it with bson automatically?
[19:23:50] <og01> well thats how its stored in the database, but my point is that there isnt an interpreted query language that you need to worry about like you have with SQL drivers
[19:24:01] <mongonoob> og01: k thanks!
[19:24:03] <mongonoob> ill try it out
[19:24:42] <og01> mongonoob: i write perl and nodejs, and i dont really know python, but you'll need to read the data into a variable, you can't pass it a filehandle for instance
[19:24:59] <mongonoob> right right
[19:34:12] <gancl> Hi! native-Mongodb node.js how to findAndModify insert new datas to sub document? http://stackoverflow.com/questions/24017372/nodejs-mongodb-findandmodify-exception-e11000-duplicate-key-error-index
[19:36:15] <og01> gancl: $set: {'nested.field.name': "Whatever"}
[19:36:23] <og01> gancl: you mean like that ^
[19:37:35] <og01> gancl: $set: {'nested.field.name': {sub: "Document"}}
[19:37:53] <Egoist> Hello
[19:38:52] <og01> Egoist: Hello
[19:39:25] <Egoist> I want to deploy shard cluster. Is there any way to check that configuration server are working from mongos instance?
[19:39:34] <gancl> og01: I've set "$set:{'nested.field.name': {sub: "items.men"},", but still the same error :exception: E11000 duplicate key error index: g1bet.lists.$_id_ dup key: { : ObjectId('538e1588b3867927cbc1dba6') }
[19:40:19] <og01> gancl: oh maybe you need to $push into an array?
[19:40:49] <gancl> Yes, I want to push { "user" : "id3",
[19:40:49] <gancl> "score" : "400" }
[19:41:21] <og01> $push: {fieldname: {user: "blah"}}
[19:44:36] <gancl> If it already exist, I want to increase the user's score number.
[19:46:32] <gancl> og01: I've set "$push: {fieldname: {"items.men.user": sUser,"items.men.score":sScore}}" ,but still exception: E11000 duplicate key error index: g1bet.lists.$_id_ dup key: { : ObjectId('538e1588b3867927cbc1dba6') }
[20:35:12] <og01> gancl: can you pastebin an example of your document
[20:41:27] <grafjo> good day
[20:41:40] <og01> grafjo: and to you!
[20:41:53] <grafjo> just a small question regarding mongod.conf and verbose parameter
[20:42:01] <grafjo> on ubuntu 12.04 64 bit
[20:42:08] <grafjo> using mongo 2.6 server package
[20:42:12] <grafjo> setting verbose = false
[20:42:19] <grafjo> and service mongod start
[20:42:24] <grafjo> service mongod status
[20:42:29] <grafjo> mongod fails to start
[20:42:50] <grafjo> is this a common problem?
[20:43:51] <og01> grafjo: i dunno, all I can do it try it on my server
[20:43:56] <og01> one min
[20:44:28] <grafjo> og01: thx, let me know if it's working or not
[20:45:13] <grafjo> the strange thing is, on a centos 6.5 box, also running mongo server 2.6
[20:45:14] <og01> grafjo: its the same for me
[20:45:18] <grafjo> no problem
[20:45:20] <og01> failed to start
[20:45:29] <og01> nothing in log
[20:45:32] <grafjo> jep
[20:45:35] <grafjo> same here
[20:52:28] <grafjo> is somebody of the mongo staff around which can confirm this issue
[20:52:34] <grafjo> ?
[20:56:10] <cheeser> mongod --config mongod.conf
[20:56:10] <cheeser> 2014-06-03T16:54:30.399-0400 SEVERE: Failed global initialization: BadValue The "verbose" option string cannot contain any characters other than "v"
[20:56:24] <cheeser> verbose = v #works
[20:56:33] <cheeser> or verbose = vv
[20:56:40] <cheeser> more v == more verbose
[20:57:47] <cheeser> looks like the docs suggest that true works
[21:00:18] <cheeser> verbose = true works
[21:00:27] <cheeser> verbose = false does not
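Putting cheeser's findings together as an old-style (pre-YAML, 2.6) mongod.conf fragment — the error text is quoted from his test above:

```
# mongod.conf ("ini"-style, mongod 2.6)
verbose = true    # works
verbose = v       # works; more v's = more verbose (vv, vvv, ...)

# verbose = false # FAILS in 2.6 with:
#   BadValue The "verbose" option string cannot contain any
#   characters other than "v"
# To disable verbose logging, omit the line entirely.
```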
[21:05:06] <__ls> Could someone explain to me how it happens that for a read-only database (I disabled all writes to any database temporarily to test just-read performance), in which there is a decent number of connections (~1000), lots of queries are slow or time out (taking about 1 sec) while apparently waiting for a read lock? I thought read locks could be shared among connections? Is there a limit on how...
[21:05:08] <__ls> ...many connections (or queries?) can share a read lock?
[21:05:35] <grafjo> cheeser: thx for this hint
[21:11:04] <__ls> the end of a log line for one of these queries looks usually like this: "ntoreturn:5 ntoskip:0 nscanned:1592 nscannedObjects:0 keyUpdates:0 numYields:0 locks(micros) r:930424 nreturned:0 reslen:20 930ms"
[21:14:43] <__ls> as far as i can tell from mms, all queries are covered by indexes
[21:16:07] <Derick> it still had to scan 1592 documents...
[21:16:26] <timing> Hi!
[21:17:16] <timing> I want my full text search index to allow searching including stopwords. Is there a way to regenerate the index with the stopwords defined here: https://github.com/mongodb/mongo/blob/master/src/mongo/db/fts/stop_words_english.txt
[21:17:33] <timing> our index lists songs
[21:17:39] <timing> and now the song "all of me" cannot be found
[21:17:41] <Derick> no, stop words are hard coded
[21:17:56] <Derick> you need to recompile mongodb with an empty stop word list
[21:18:00] <timing> hmm
[21:18:07] <Derick> yeah, not advisable...
[21:18:09] <timing> am I the only one who wants this?
[21:18:17] <Derick> no, I don't think so
[21:18:26] <Derick> I think there is a ticket to customize stop word lists
[21:18:36] <timing> also "home again" only searches for "home", because again is a stopword and I think that's a stupid choice
[21:18:44] <Derick> https://jira.mongodb.org/browse/SERVER-10062?jql=project%20%3D%20SERVER%20AND%20text%20~%20%22stop%20word%20list%22
[21:18:46] <timing> thanks
[21:19:57] <__ls> Derick: wouldn't nscanned=1592 mean that 1592 index entries were scanned? These are $in queries as well as sorted, so I wasn't sure if it was reasonable to try and bring these numbers closer to 0
[21:20:14] <__ls> also, if each individual query were slow, I wouldn't be seeing this behavior only under high loads?
[21:20:19] <timing> Derick: I really don't get the words chosen in the lists
[21:20:27] <Derick> timing: I don't either :)
[21:20:46] <timing> for example "mine"
[21:20:51] <timing> so you cannot search for mine craft
[21:20:51] <Derick> __ls: I'd need to see more info to make a comment about that
[21:21:13] <Derick> timing: it should still find that based on "craft" though
[21:21:53] <timing> Derick: sure but it will put witch craft on top of mine craft, which makes no sense
[21:22:09] <timing> (for example)
[21:22:42] <Derick> timing: yeah, I know. So vote for that jira ticket
[21:22:43] <__ls> Aside from the specifics, I'm just hung up on this conceptual issue that I'm having ... if read locks are shared, and there are no writes happening, why are queries still waiting for read locks?
[21:22:55] <timing> Derick: will do, thanks!
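(For anyone following along, the behavior timing describes is easy to reproduce in the 2.6 shell — collection name hypothetical. A $text query whose terms are all English stop words matches nothing, because every term is discarded before the index is consulted:)

```javascript
// Hypothetical collection; requires a text index (mongo shell, 2.6)
db.songs.ensureIndex({ title: "text" })
db.songs.insert({ title: "all of me" })
db.songs.insert({ title: "home again" })

db.songs.find({ $text: { $search: "all of me" } })  // no results: every term is a stop word
db.songs.find({ $text: { $search: "home again" } }) // matches on "home" only; "again" is dropped
```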
[21:23:10] <Derick> __ls: sorry, don't know
[21:23:11] <__ls> I think I'm just not getting some additional constraint here, and I haven't found an answer although I've been googling myself to death ...
[21:23:37] <__ls> Derick: ok, I'll just hang around, maybe someone else knows
[22:22:17] <joneshf-laptop> having some issues with casbah and android interop
[22:22:47] <joneshf-laptop> i can set up a connection to mongolab from a terminal and query the db there no problem
[22:23:08] <joneshf-laptop> but when i try to do similar in the app it dies wanting a slf4j logger
[22:23:18] <joneshf-laptop> any ideas what to try here?
[22:25:12] <joneshf-laptop> actually scratch that, i get a warning on the terminal (scala)
[22:25:17] <joneshf-laptop> saying it'll default to noop
[22:25:57] <joneshf-laptop> but in android it's: java.lang.NoClassDefFoundError: org.slf4j.impl.StaticLoggerBinder
[22:27:27] <joneshf-laptop> or maybe just a pointer as to how to set up logging
[22:27:40] <joneshf-laptop> i don't mind that, just want to make forward progress
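(For the record: that NoClassDefFoundError for org.slf4j.impl.StaticLoggerBinder usually means no slf4j binding is on the classpath — the driver depends only on the slf4j API, and on the terminal slf4j can fall back to its no-op logger with just a warning, while Android's classloading fails hard. Adding a binding typically fixes it; the Gradle fragment below is a sketch, and the version number is an assumption:)

```groovy
// build.gradle — add one slf4j binding so StaticLoggerBinder resolves
dependencies {
    compile 'org.slf4j:slf4j-android:1.7.7'  // routes logging to logcat
    // or: compile 'org.slf4j:slf4j-nop:1.7.7'  // silences logging entirely
}
```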
[22:46:01] <proteneer_osx> why is your node driver such a horrendous piece of shit
[22:46:45] <proteneer_osx> jesus fucking christ
[22:48:24] <og01> proteneer_osx: why are you so rude?!
[22:48:37] <og01> proteneer_osx: profanities!
[22:48:55] <proteneer_osx> answer my question and i'll answer yours
[22:49:36] <og01> your question most likely wasn't directed at me, since I had nothing to do with nodejs' mongodb driver
[22:50:20] <og01> perhaps if you have some issue with it, you can raise an issue on github
[22:50:40] <og01> or maybe resolve the issue yourself, and submit a patch
[22:51:29] <proteneer_osx> its just that the connect() method
[22:51:32] <proteneer_osx> is so goddamn weird
[22:51:51] <Derick> proteneer_osx: that's enough with the profanities
[22:52:25] <proteneer_osx> your python drivers are great, pymongo and even motor
[22:52:34] <proteneer_osx> esp with how it handles connection strings
[22:53:12] <proteneer_osx> but the way the native javascript driver requires using the db object inside a callback
[22:53:14] <proteneer_osx> is just silly and weird
[22:53:31] <og01> proteneer_osx: i'm afraid i have to disagree
[22:53:44] <og01> proteneer_osx: that's async event based programming for you
[22:53:52] <proteneer_osx> i use motor
[22:53:55] <proteneer_osx> for our python based backends
[22:54:03] <proteneer_osx> which is based off of tornado
[22:54:32] <og01> I don't know python based event loops i'm afraid, how does it function differently?
[22:54:36] <proteneer_osx> http://blog.mongohq.com/node-js-mongodb-and-pool-pollution-problems/
[22:54:46] <proteneer_osx> mainly i need to resort to the Alternative Fix
[22:56:14] <og01> proteneer_osx: that link is just someone talking about the usual trials of working within an event loop
[22:56:42] <og01> proteneer_osx: this is a normal everyday problem that you run into when doing event based programming
[22:56:48] <proteneer_osx> no it's not
[22:56:54] <proteneer_osx> your other driver methods
[22:56:55] <proteneer_osx> Db()
[22:56:57] <proteneer_osx> open()
[22:56:58] <proteneer_osx> etc.
[22:57:01] <proteneer_osx> can all function sanely
[22:57:33] <og01> explain?
[22:57:45] <proteneer_osx> i can simply declare a var db = new MongoDB(...)
[22:57:52] <proteneer_osx> but that syntax doesn't handle connectionStrings
[22:58:13] <proteneer_osx> well
[22:58:14] <proteneer_osx> whatever
[22:58:17] <proteneer_osx> I'll figure it out
[22:58:45] <og01> I'll presume because there are no asynchronous callbacks required in those methods and/or no object instantiation
[22:59:00] <og01> so they can be called in a synchronous fashion
[23:00:19] <og01> proteneer_osx: you might prefer to use promises to provide a more synchronous interface to an asynchronous library
[23:01:17] <og01> proteneer_osx: as a side comment, nothing drives me more nuts than someone implementing a library with blocking methods
[23:02:45] <og01> proteneer_osx: i haven't tried this, but here is a promise wrapper for mongodb: https://www.npmjs.org/package/q-mongodb
[23:02:58] <og01> proteneer_osx: are you familiar with promises?
[23:03:08] <proteneer_osx> i'm guessing they're similar to futures?
[23:03:33] <og01> proteneer_osx: i'm not sure
[23:04:27] <og01> proteneer_osx: a promise is an object that represents the promise of a value, which can be accessed via a callback
[23:04:45] <proteneer_osx> yeah that's similar to python's way of yielding futures
[23:05:20] <og01> proteneer_osx: well i don't know if they'll help you
[23:05:45] <og01> but it means you can open a connection once, store the promise, and then access that connection at will via a callback
[23:06:50] <og01> proteneer_osx: many languages and libraries are aiming to fulfill the A+ spec
[23:06:52] <og01> http://promises-aplus.github.io/promises-spec/
[23:07:07] <og01> in order to create a common promise interface
[23:08:03] <og01> I'm afraid i'm a node/perl guy, and not python, but i know that both of these languages have implementations of the A+ spec
[23:08:59] <og01> anyway this is a sidetrack from your original err.. statement. I don't know if anyone related to mongodb cares to back me up at all
[23:09:20] <og01> but in my mind the library's interface is correct for an event-driven language such as nodejs