[00:06:13] <fedora_newb> joannac, maybe I can get your help on one more item here, http://screencast.com/t/QmemJ5GBN
[00:07:31] <joannac> what are you connecting from? full connection string?
[00:08:00] <joannac> i predict you need to read the docs on bind_ip
[00:12:55] <fedora_newb> joannac, using mongochef to connect
[00:13:05] <fedora_newb> although, you are probably right, will take a read
[02:45:38] <fedora_newb> Parameter 1 to MongoCollection::insert() expected to be a reference, value given. I am not finding any sort of clean solution for this anywhere. Anyone have an idea on this one?
[03:43:41] <nahtnam> What are the use cases for mongo, or rather, is there a list?
[03:44:27] <nahtnam> I'm not sure if I should use it for my app
[03:45:54] <nahtnam> It seems like a strong DB for parts of my app but seems weak for other parts
[03:46:59] <nahtnam> But on my site, I'll be parsing protobuf files of matches that contain a list of kills, deaths, wins, losses, and it seems like mongodb would excel at storing match info...
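[03:47:41] <nahtnam> e.g. I could store each match as one document, something like {matchId: 1, kills: 23, deaths: 4, result: "win"}... just guessing at the shape for now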
[08:01:28] <zamnuts> brucelee, that warning is an issue with the specific hardware/OS that node is running on
[08:01:41] <brucelee> ubuntu 14.04, aws with raid 0
[08:02:30] <zamnuts> the readahead value says how much EXTRA data should be read on any given read; this helps because sequential reads are generally much faster than random ones
[08:03:43] <zamnuts> a large readahead is great when there are large documents that extend beyond the block size, i.e. where sequential reads are helpful; but a large readahead is wasteful on small random I/O
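[08:04:30] <zamnuts> you can check it with blockdev --getra /dev/xvdX and lower it with e.g. blockdev --setra 512 /dev/xvdX (512 sectors = 256KB, which iirc is around what mongod's warning suggests) -- /dev/xvdX being whatever your RAID device actually is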
[08:41:39] <ahihi> hi, I want to update an array in a document in a manner that depends on its current state - if the array's last element matches a certain condition, I want to update that element, otherwise push a new one. is this possible to do without two round-trips to the DB?
[11:51:12] <Zelest> Someone apparently dislikes MongoDB and decided to rant about it..
[11:51:52] <Zelest> They're absolutely right about "never use a database because 'everybody else does', do your own research as to the advantages and drawbacks of a particular database." though..
[11:52:07] <Zelest> I guess that's what they did.. without doing it right.. and apparently they weren't happy with the results.. *shrugs*
[11:53:38] <pagios> Zelest: hmm so mongo is better than postgres
[11:55:29] <Zelest> Use whatever database you feel more comfortable with.
[11:55:58] <Zelest> I love Postgres as much as MongoDB but for various things I find one of them better than the other.
[11:56:36] <Zelest> For example, the automatic failover feature in MongoDB's replica sets is lovely for high availability and insanely simple to set up..
[11:57:05] <Zelest> When it comes to fulltext search, I prefer PostgreSQL's tsearch2.. Look up your needs and decide thereafter.
[12:24:27] <fedora_newb> Curious if anyone here uses laravel and mongo together that may be able to help with this error, Parameter 1 to MongoCollection::insert() expected to be a reference, value given
[12:24:41] <fedora_newb> Doesn't seem to be too much out there on it.
[12:30:24] <sedavand> is it possible to do a .find() query that matches against a list in another collection, without having to write a JS function?
[13:06:25] <Rumbles> Hi, one of our devs has asked me to set up an arbiter on 2 nodes (of 3) in our dev environment mongodb cluster. I checked the docs and they say not to do this, but he says he did it on 2.6 with no trouble. I don't want to make a load of work for myself, and I want to find out what I could break if I do this. Can anyone advise why it's advised not to set up an arbiter on a live mongodb node? Or am I not understanding something?
[13:06:25] <Rumbles> I'm pretty new to mongo (but the dev has been using it for years)
[13:06:58] <Derick> Rumbles: so, out of the three nodes, 2 will be an arbiter and 1 a data carrying node?
[13:07:05] <Derick> there is no point in doing that
[13:07:19] <Rumbles> no, all 3 will have data, but 2 will also run an arbiter
[13:07:46] <Derick> and the docs are right, you shouldn't do that
[13:09:54] <Rumbles> okay, can you tell me why you shouldn't?
[13:10:35] <Rumbles> just so I can have this conversation with him (other than "it is pointless") :)
[13:10:39] <Derick> 1. there is no point in doing it. The idea of an arbiter is to provide an extra vote so that a majority can be reached. Putting an arbiter on two of your three live nodes does not help
[13:11:57] <Derick> no matter which physical node goes down, there need to be two nodes still up to vote... and you already have 3 nodes, so that works just fine (with one node down). And you can't reach a majority if *two* physical nodes go down, even with the arbiters on them
[13:30:43] <Rumbles> Derick, one of the devs (well, our CTO) just went ahead and did it, and I got the infrastructure slightly wrong: we have 3 nodes and 3 arbiters, and they wanted one arbiter set up on an active member...
[13:30:57] <Rumbles> anyway, I'm not responsible for breaking anything so I'm happy :)
[13:36:37] <Derick> Rumbles: 3 nodes, and 3 arbiters?! that makes no sense either really
[13:41:53] <Zelest> Read the upgrade requirements.
[13:43:12] <valera> Zelest: I dont mean binary data, I mean bson dump created with mongodump
[14:30:24] <Rumbles> thanks Derick, like I said earlier, I didn't set it up, and I get why it's pretty much pointless... I'm guessing the advised way to use arbiters is: if you have an even number of mongo nodes, add 1 arbiter; if you have an odd number, you don't need any... ?
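[14:30:45] <Rumbles> i.e. with say 4 data nodes you'd add one arbiter, rs.addArb("arb.example.com:27017") or so (made-up hostname), to get 5 voters... if I'm reading the docs right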
[14:31:02] <valera> from what I understood mongo does not check utf-8 correctness on document insert, but the map-reduce engine fails with an exception if it spots a broken utf-8 character sequence. is there any way to figure out which document caused the exception?
[15:58:26] <Snicers-work> I need to find documents in a collection that have matching fields. For example, all users with duplicate emails, without knowing the email values up front.
[15:59:01] <StephenLynx> you can group and use the email as the _id
[15:59:15] <StephenLynx> and then increment a field of the aggregated document.
[15:59:39] <StephenLynx> after you finish grouping, use a $match to filter only the documents where the incremented field is > 1
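[16:00:15] <StephenLynx> something like this, untested, assuming a users collection with an email field: db.users.aggregate([{$group: {_id: "$email", count: {$sum: 1}}}, {$match: {count: {$gt: 1}}}])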
[16:01:57] <Snicers-work> StephenLynx, what do you mean use the email as the _id, the _id is already set.
[16:19:40] <pembo13> how can I find out why a replica set member is being vetoed from syncing?
[18:59:27] <devdvd> Hi all, quick question: did the way capped collections work change between 2.4.5 and 3.0.6? The reason I ask is that I'm trying to migrate from 2.4.5 to 3.0.6. I did a mongodump on 2.4.5, and when I go to restore into 3.0.6 I get this message: "error creating collection searchCountsByMember: error running create command: exception : specify size:n when capped is true". Here is my searchCountsByMember.metadata.json: {"options":{"create"
[18:59:27] <devdvd> Now I see the problem and I think I know how to fix it (I fixed another one like this by defining a size). Is that the proper way to go about it, or am I missing something?
[20:02:14] <pokEarl> So I am a bit confused about ObjectID vs just having its identifying string, since the object id itself has more info packed into it than just the string. If you send the id string through something like a database layer that speaks JSON, and then create a new ObjectID on the other side from the same string, does it matter?
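[20:03:40] <pokEarl> i.e. the 24-char hex string encodes the full 12 bytes, so ObjectId("507f1f77bcf86cd799439011") rebuilt on the other side should compare equal and still give the same getTimestamp()... or is there more to it? (made-up example id)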
[20:18:17] <arasmussen> anybody have an idea about http://stackoverflow.com/questions/32852646/find-and-update-multiple-documents-atomically ?
[20:18:42] <idd2d> I'm modeling a stock tip with MongoDB, and one of the things I need to track is the stock's value over time. Each stock tip will have a bunch of stock value snapshots taken every 10 minutes or so. I'm wondering whether I should associate these values via a relational ID, or simply embed the value snapshots in the stock tip schema.
[20:19:21] <idd2d> One requirement: I need to be able to query for a specific slice of the stock value snapshots. i.e. I want to see how this stock value fluctuated between Sept. 29 and October 2.
[20:19:42] <arasmussen> idd2d won't you have a huge number of these snapshots then? should probably each be their own document, no?
[20:20:46] <idd2d> We're still trying to work out just how often we'll be grabbing data, but it will be on the order of every 10 minutes for any given stock tip, for around a week, on average. That's about 1000 data points per stock tip.
[20:22:09] <idd2d> My initial thought is that, since each stock tip has just one set of value data, it makes sense to simply embed it. I see two issues, potentially: That's a lot of embedded data, and I don't know if I'll be able to effectively query for a time range within that data.
[20:22:18] <idd2d> I don't have enough experience with Mongo to answer those two questions.
[20:23:14] <idd2d> Also, is it possible to query for a stock tip *without* the embedded value data? That way we're not pulling all these data points on every request
[20:23:16] <arasmussen> yeah I would probably lean towards having a separate collection of snapshot data
[20:24:16] <arasmussen> yes you could unselect it with db.collection.find(query, {snapshotData: false})
[20:24:32] <arasmussen> then it won't include "snapshotData" in the response
[20:24:51] <arasmussen> but I'd still lean towards having a separate collection of snapshot data with {stockID, timestamp, value}
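[20:25:05] <arasmussen> e.g. each snapshot as its own doc: db.snapshots.insert({stockID: 123, timestamp: new Date(), value: 45.67}) -- collection and field names made up, obviously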
[20:25:25] <idd2d> The downside of such an implementation is the extra work of associating each datapoint with its stock tip
[20:25:31] <idd2d> Would this not be an issue, in this use case?
[20:26:07] <arasmussen> Which extra work? All you have to do is specify the stockID in the insert.
[20:26:48] <idd2d> Right, but when selecting, Mongo is having to do extra work behind the scenes to associate any data point with its tip, right?
[20:27:44] <arasmussen> Then you could easily query db.snapshots.find({stockID: 123, timestamp: {$gte: ONE_WEEK_AGO}}) kinda thing to get a list of snapshots from the past week
[20:27:56] <arasmussen> hmm, I'm not sure what extra work
[20:28:23] <arasmussen> or why you're worried about this, should be plenty performant
[20:28:24] <idd2d> haha, gotcha. My questions are based on only a cursory knowledge of the nosql world
[20:32:38] <idd2d> This is a pretty good representation of my (limited) knowledge of mongo: http://blog.mongodb.org/post/87200945828/6-rules-of-thumb-for-mongodb-schema-design-part-1
[20:33:12] <idd2d> In it, the author mentions three methods of relating data:
[20:33:20] <idd2d> Embedded documents, best used in a one-to-few relationship
[20:34:13] <idd2d> Where A has_many B, give A a list of ObjectIDs representing its constituents (best used in one-to-many relationships)
[20:34:45] <arasmussen> Anybody know if there's a way to effectively figure out which documents you've modified? Or atomically find and update multiple documents?
[20:35:04] <idd2d> And where A has_many_many B, put a reference to A in B (best in "one-to-squillions", as the author calls it)
[20:35:40] <idd2d> It seems clear to me that embedded documents aren't a great idea, because there's just too many data points. So what about the other two options?
[20:36:54] <arasmussen> idd2d: yeah you'd want the last one, where each of the snapshots has a reference to the stock it's associated with (A has_many_many B)
[20:42:37] <idd2d> Gotcha. Is there any benefit to using the "one-to-many" scenario specified in that article?
[20:43:00] <idd2d> I guess the author mentions a few of them, now that I look.
[20:48:05] <arasmussen> idd2d: sorry, trying to figure out my own issues :P
[20:48:16] <idd2d> No worries, you've been very helpful.
[20:48:32] <arasmussen> the convenience would be you'd be able to fetch all the data in a single document/query
[20:49:35] <arasmussen> but with two separate collections you'd be able to fetch everything in two separate queries, I don't see anything wrong with that
[20:50:35] <arasmussen> it might take up a bit more space, but the whole thing is simpler: you don't have to exclude certain fields, you don't have enormous documents, it's straightforward to do queries like "timestamp > one_week_ago", and it's still super performant as long as you have indexes
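[20:51:10] <arasmussen> and for the index, something like db.snapshots.createIndex({stockID: 1, timestamp: 1}) should make that stockID + timestamp-range query fast (assuming the collection is called snapshots)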
[23:31:37] <blizzow> I have a database foo, with collection bar, with documents baz and bum. The baz and bum documents have two string fields, _id and queuedblock. I'm trying to do an eval statement from the CLI to return the value of queuedblock but it's not working. I can't even get a list of documents from this: mongo -u user -p password mongohost/foo --eval "db.bar.find( {} )"
[23:31:44] <blizzow> can anyone tell me what I'm doing wrong?
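[23:33:02] <blizzow> hm, maybe it's that find() just returns a cursor and --eval doesn't print it? something like mongo -u user -p password mongohost/foo --eval "db.bar.find().forEach(printjson)" might work... will try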