[00:46:35] <fels_zh> http://pastebin.com/ZL8vyrjd this fails with the error: SelectMany is not supported - but as you can see I'm not using that here - I did before, but not anymore
[00:52:34] <Boomtime> @fels_zh: i'm having trouble finding any Mongodb related code in there, it appears your problem is with LINQ and not related to mongodb at all
[00:53:28] <Boomtime> which C# driver version are you using?
[03:11:43] <Flightless> hey guys, quick question. I'm pretty new to MongoDB here and I'm wondering if the following schema is possible/viable: My database needs to store the set of words each user knows. In SQL I'd just have a user table and a words table and do a join. Since MongoDB doesn't have joins my instinct is I'd be best off making knownWords a subdocument of users. If I do this would I be able to create a query that would return specific words for
[03:23:19] <Boomtime> -> "store the set of words each user knows", you mean their entire vocabulary?
[03:29:35] <Flightless> precisely, I need the database to store every word the user knows.
[03:38:36] <Flightless> The key problem is I need the ability to search for a specific word. Since the number of known words can get very high I'm going to want indexing to search through the subdocuments. Is that even possible with mongo?
[03:39:07] <Boomtime> yes, but the number of words is extreme
[03:41:56] <Boomtime> the trouble with this is that "words" array is going to get very large, and manipulating arrays beyond a few thousand entries becomes problematic
[03:46:53] <Flightless> maybe I misunderstood something but my understanding of mongo structuring was that nesting documents is the way to get around no table joins. if I have a BSON structure like: { username: "blah", words: [{wordId: "123", definitionId: "321"}]} I'll really run into an issue at around one thousand pages?
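The embedded-document schema Flightless describes can be checked for raw size. A minimal sketch, using a hypothetical users collection shaped like his example: JSON size is used here as a rough stand-in for BSON, and even tens of thousands of word entries stay far below MongoDB's 16 MB document cap. The practical concern Boomtime raises is therefore the cost of manipulating very large arrays, not document size; a multikey index such as `db.users.createIndex({"words.wordId": 1})` would make per-word lookups indexed.

```python
import json

# Build a user document shaped like Flightless's example, with n embedded
# word entries. The field values are invented for illustration.
def make_user(n_words):
    return {
        "username": "blah",
        "words": [{"wordId": str(i), "definitionId": str(i + 1000)}
                  for i in range(n_words)],
    }

# JSON byte length as a rough proxy for BSON document size.
size_1k = len(json.dumps(make_user(1_000)).encode("utf-8"))
size_100k = len(json.dumps(make_user(100_000)).encode("utf-8"))

# One thousand entries is tiny; even 100k entries is only a few MB,
# comfortably under MongoDB's 16 MB per-document limit. The bottleneck
# Boomtime warns about is array update/rewrite cost, not storage size.
```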
[03:52:52] <haploid> Ok, here is some strange behavior. If I insert 50k documents letting mongo assign the _id, everything is fine. If I do the same with a user-generated (and verified unique) _id, it only inserts 6 of those documents, and reports no errors with safemode on
[05:51:46] <LinkRage> when mongodb 2.4.x is run with --auth... can I force it to NOT use authentication for a specific user? I'm trying to remove the auth messages in mongod.log...
[10:49:37] <whyhankee> hi all, i'm migrating a MongoDB from 2.4.9 to 2.6.4 and i'm having trouble with a query that is returning no documents in mongo 2.6, in 2.4 the query is fine. The query is here: http://pastebin.com/ZVqsWt0R
[10:49:54] <whyhankee> is there something i'm missing?
[13:09:29] <tscanausa> Today is a scary day, I am shutting off old shard =/
[13:13:38] <jordana> tscanausa: I'm syncing 500 million documents :(
[13:14:15] <jordana> migrating away from SSD's *cries*
[13:19:20] <tscanausa> The last time I did this a portion of my data disappeared.
[13:42:05] <boo1ean> Hi everyone, I'd like some suggestions about which db is best for working with large statistics datasets (12 columns available for different group combinations and filtering)? Or are there some good solutions to that with good performance?
[14:02:15] <jordana> Is there any way to force a mongod replica set node out of STARTUP2 into SECONDARY mode?
[14:02:37] <jordana> It's done with the indexes and from what I can tell it's not doing anything
[14:02:46] <jordana> It's synced all the data but seems to have got stuck
[14:41:26] <prbc> Is there an easy way to find docs matched with two or more different values? without making two queries and merging them?
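prbc's question is covered by MongoDB's $in operator: a single query like `db.coll.find({status: {$in: ["A", "C"]}})` matches documents whose field equals any of the listed values, and $or handles conditions on different fields. A minimal pure-Python sketch of the $in semantics, with invented documents and field names:

```python
# Pure-Python model of MongoDB's $in semantics, i.e. the filter
# {"status": {"$in": ["A", "C"]}} applied over a collection.
# The documents and field names here are invented for illustration.
docs = [
    {"_id": 1, "status": "A"},
    {"_id": 2, "status": "B"},
    {"_id": 3, "status": "C"},
]

def match_in(doc, field, values):
    """Return True if doc[field] equals any of the candidate values."""
    return doc.get(field) in values

# One pass over the collection matches both values - no second query,
# no client-side merge needed.
matched = [d["_id"] for d in docs if match_in(d, "status", {"A", "C"})]
```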
[14:41:41] <tscanausa> jordana: give it some more time
[14:42:13] <vrkid> how do I setup ldap authentication when the OpenLDAP server I contact acts as a LDAP proxy to an AD (Windows Server 2008) LDAP?
[14:44:27] <jordana> tscanausa: well I left it overnight last night and this same thing happened. I rebooted the mongod instance thinking it would sort itself out and it just started the sync again
[14:45:03] <tscanausa> oh i think the oplog is too far behind… try rsyncing the data files?
[14:48:17] <jordana> Yeah, I've stopped all processing on our primaries and started the resync again. Hopefully it'll sort itself out
[14:50:18] <jordana> Yeah I know but I'll give it one more shot.
[15:27:01] <tscanausa> Woot old shards are down and no alerts or missing data
[15:42:24] <hydrajump> anyone running mongo in docker containers for production?
[17:20:38] <linuxboytoo> I am working on a legacy mongo system (2.4.5) that I adopted - looking to add a replica to the set. It is not sharded but they are using mongos on the clients to connect. If I do the rs.add on the primary mongod, do I need to do anything for the configservers to get the updated hostnames for mongos?
[18:07:52] <nicolas_leonidas> I have a mongodb instance that is just too busy, CPU usage is so high... Is it replication that I need, or sharding?
[18:40:09] <tscanausa> nicolas_leonidas: did you get an answer?
[18:49:41] <tscanausa> Replication generally only adds failsafes for data (and will add more cpu usage). Sharding could be a solution, or a bigger server is another and possibly better option
[18:54:50] <nicolas_leonidas> is it possible to have master/master replications with mongo? so all instances can both write and read?
[18:57:41] <redblacktree> Using the mongo profiler, I'm having some trouble understanding the output. An update operation has an nscanned of 100663554. First of all, that seems like a very large number to scan in any circumstance (100 million??). Second, the collection being updated only has 25 documents. Am I reading this wrong? Can anyone explain what might be happening?
[18:59:46] <tscanausa> nicolas_leonidas: no. you would need a second shard.
[19:35:46] <Diplomat> yes, totally forgot that, thank you
[19:36:53] <redblacktree> can anyone suggest ideas about why I might have very large nscanned values (>100 million) using mongo profiler on update to a very small collection (25 documents)?
[19:37:33] <redblacktree> the query isn't super slow. It took 63 ms
[19:40:38] <rafaelhbarros> how does mongodb textScore work?
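rafaelhbarros's question goes unanswered in the log. For reference, $text queries compute a per-document relevance score that can be projected and sorted on via $meta. A sketch against a hypothetical articles collection (requires a live mongod and an existing text index, so this is a shell fragment only):

```javascript
// Hypothetical collection; assumes a text index exists, e.g.:
//   db.articles.createIndex({ body: "text" })
db.articles.find(
    { $text: { $search: "coffee" } },
    { score: { $meta: "textScore" } }   // project the relevance score
).sort({ score: { $meta: "textScore" } })  // most relevant first
```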
[19:41:37] <Diplomat> Weird.. it still doesn't work
[21:15:12] <ramfjord> actually when I use the driver directly to query, the index is used
[21:15:20] <ramfjord> somewhere in the Mongoid layer
[21:16:58] <tscanausa> so it sounds like the client or framework
[21:36:20] <freeone3000> I'm using MMS for monitoring on an 8-server replica set with 7 voting members, two of which have non-zero priority. I'm seeing around 800 connections; is this odd?
[23:10:45] <ramfjord> explain generates the exact same plan, with nscannedObjects: 1, both using the index
[23:10:57] <ramfjord> the only difference is scanAndOrder
[23:19:21] <ramfjord> if I hint it to use the id_from_source index, the query is fast even with the source
[23:20:00] <ramfjord> it generates the exact same explain once again, except that nscannedObjectsAllPlans and nscannedAllPlans are both 1 instead of 15 and 18 respectively
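The workaround ramfjord describes can be written down. A sketch assuming a hypothetical collection and invented field names, with the id_from_source index name taken from the discussion (shell fragment; requires a live mongod):

```javascript
// Force the planner onto a specific index and inspect the resulting plan.
// Collection and field names are invented; "id_from_source" is the index
// name ramfjord mentions.
db.events.find({ source: "foo", sourceId: 123 })
    .hint("id_from_source")
    .explain()
```

As the end of the log shows, the underlying cause was a useless index the planner kept trialling during plan selection; dropping it made the hint unnecessary.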
[23:33:59] <hurdcatzDOTcom> In honor of the recent Oracle announcement, http://hurdcatz.com now points to MariaDB
[23:57:33] <ramfjord> So, we had one useless index
[23:57:49] <ramfjord> and somehow the planner was querying against it before deciding on the correct plan?
[23:58:02] <ramfjord> deleting it caused a huge speedup