[02:26:33] <LouisT> oh, joannac, not sure if you're the one to talk to about this.. but on MMS, i get an alert saying that my mongodb instance is connectible from the internet, but i only run mongodb on localhost..?
[02:26:34] <bePolite> joannac: I'm just running 'mongo' from my terminal
[02:27:26] <joannac> bePolite: then that's why it's not working
[02:28:23] <joannac> you need to start the server process, so you have something to connect to: http://docs.mongodb.org/manual/tutorial/manage-mongodb-processes/#start-mongod-processes
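For anyone hitting the same error: a minimal sketch of what joannac is describing, i.e. start a standalone mongod first, then connect with the shell. Paths are examples only; adjust them to your setup (and note `--fork` is not available on Windows):

```shell
# Start the server process (assumes /data/db exists and is writable)
mongod --dbpath /data/db --fork --logpath /var/log/mongod.log

# Only now is there something for the shell to connect to
mongo
```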
[02:29:35] <LouisT> i assume the mms service runs a mongodb instance as well.. do you think it might be connecting back to itself as i've added my server as a domain that points back to localhost?
[06:21:58] <geohot> hmm, so i was using the mongo bundled with meteor
[06:22:28] <geohot> i switched to the latest and it's fast, who knows
[06:22:42] <geohot> ~1 second for c, ~4 seconds for python
[06:55:51] <mehwork> in mongoose, how do you say: new mongoose.Schema({ things: {'foo': 'bar', 'baz': 'quux' } }); In other words, how do you make a schema of an array of type String?
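A minimal sketch of the array-of-String case mehwork asks about, assuming mongoose is installed (`npm install mongoose`; the model name here is hypothetical). In Mongoose you declare an array of a type by putting the type inside array brackets:

```javascript
// Sketch: declaring an array-of-String field in a Mongoose schema
const mongoose = require('mongoose');

const thingSchema = new mongoose.Schema({
  things: [String]   // e.g. ['foo', 'bar', 'baz']
});

const Thing = mongoose.model('Thing', thingSchema);
```

The `{'foo': 'bar'}` shape in the question is a map of key/value pairs, which is a different structure from an array of strings.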
[08:01:01] <Stiles> Hey guys. I'm using GridFS to store photos and pull them when they're needed. It seems that GridFS is REALLY slow at pulling the photos out though (like 2-3 seconds), and the more I pull, the longer it seems to take. If I have like 5 on a page, the first might take 2-3, the second 3-4, the third 4-5, etc.
[08:02:17] <rspijker> how big are the photos, and how large are your chunks?
[08:03:46] <Stiles> rspijker, the photos vary, but the larger one I have seems to be 181917 bytes
[08:04:03] <Stiles> 90089 bytes seems to be the largest actually
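Photos of ~90-180 KB fit in a single GridFS chunk at the default chunk size (255 KB in recent versions), so chunking is unlikely to be the bottleneck here; buffering whole files in application code often is. A hedged sketch of streaming a photo out instead, using the current Node driver's `GridFSBucket` API (database name, filename, and URI are examples; requires a running mongod):

```javascript
// Sketch: stream a GridFS file directly to an HTTP response
const { MongoClient, GridFSBucket } = require('mongodb');

async function streamPhoto(filename, res) {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const bucket = new GridFSBucket(client.db('photos'));
  // Streaming avoids buffering the whole file in memory; if retrieval
  // is still slow, check the indexes on fs.files and fs.chunks
  bucket.openDownloadStreamByName(filename).pipe(res);
}
```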
[08:37:50] <dandre> I can read on this page: http://docs.mongodb.org/manual/faq/fundamentals/ that 32-bit versions of MongoDB are limited to 2GB of data.
[08:37:50] <dandre> and also that 'do not deploy MongoDB to production on 32-bit machines.'
[08:37:50] <dandre> I wonder if, in my case where the 2GB limitation isn't an issue, it is safe to use the 32-bit version of mongodb
[09:13:53] <W0rmDr1nk> dandre, there are some things I read about 32-bit mode that sounded scary - but I can't remember specifics
[09:14:09] <W0rmDr1nk> dandre, I think it silently drops data once it gets too big
[09:14:18] <W0rmDr1nk> dandre, why not run in 64-bit?
[09:14:36] <W0rmDr1nk> dandre, it's 2014 after all - EM64T/AMD64 have been around for a long time by now
[09:21:58] <dandre> W0rmDr1nk: yes, but I have a pretty large number of servers (more than 100) that can't be upgraded to a 64-bit kernel, and I must run my app, which uses mongodb, on them
[11:13:15] <Guest85656> got a best-practice question ... using vb.net (c# driver) ... I have this complex linq query that to me is taking too long (around 4 seconds) ... I think it can be faster if I use either the aggregation framework or map-reduce ... when creating either of those, can I store those routines in system.js within mongodb and call them from the webpage, or should I just write it in the webpage (I prefer server ... just wondering)
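For context on the aggregation option: a pipeline is just an array of plain documents, so it is usually defined in application code (the C# driver can build it too) rather than stored in system.js, which is mainly for map-reduce helpers. A hedged shell sketch with hypothetical collection and field names:

```javascript
// Sketch: an aggregation pipeline in the mongo shell
db.orders.aggregate([
  { $match: { status: "shipped" } },                              // filter first, so indexes can be used
  { $group: { _id: "$customerId", total: { $sum: "$amount" } } }, // sum per customer
  { $sort: { total: -1 } }
])
```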
[11:59:44] <stefan_l> I have a setup with two mongo shards, the application ought to only perform only INSERT queries, however monitoring indicates not just INSERTs like expected but also GETMORE and some UPDATE queries. Is this to be expected due to sharding rebalancing or is there some other probable cause?
[12:13:40] <tscanausa> updates will happen to the config database as information moves around. but it could also be your mongo client
[12:13:57] <rspijker> getmore is probably just replication
[12:14:05] <rspijker> but it could be all kinds of stuff, really
[12:14:13] <rspijker> anything that gets results from a cursor
[12:18:04] <stefan_l> OK, thanks! I see updates on the data-shards themselves
[12:18:38] <stefan_l> The application would normally just write data, not read anything
[12:23:01] <__NiC> I'm using mongo 2.4, and when I set keyFile, is any of the replication between my nodes affected by creating users in any of the databases, or is it purely done through the keyfile?
[12:24:44] <__NiC> (or does it only affect my ability to get the nice replica set prompt)
[12:51:55] <__NiC> hm, users being messed up doesn't seem to impact the replica set itself from what I can see
[12:53:12] <__NiC> But I read something about a user having to have read access to config and/or local to be able to add shards or something like that. I didn't quite get how that fit into everything else though..
[13:26:47] <saml> so i get segfault when running mongoimport
[16:15:33] <ArSn> when I do a remove without any query, where does it start to remove? from the lowest ID?
[16:33:34] <d0x> Hi, to keep the JSON documents in sync with my application I need to update all of them. Currently I implement it like this: db.col.find().forEach(function(doc){...}). Is there a way to compute the .forEach on all cores?
[16:37:52] <d0x> Is it possible to execute http://docs.mongodb.org/manual/reference/method/cursor.forEach/ in parallel?
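`cursor.forEach()` itself runs single-threaded in one shell, but the usual workaround is to partition the collection by `_id` range and run one shell/worker per range. A hedged sketch (the split point and the update logic are hypothetical):

```javascript
// Sketch: two _id ranges, each processed by a separate mongo shell
var ranges = [
  { min: MinKey, max: ObjectId("555555550000000000000000") },
  { min: ObjectId("555555550000000000000000"), max: MaxKey }
];

// Run this in worker 1 with ranges[0], in worker 2 with ranges[1], etc.
var r = ranges[0];
db.col.find({ _id: { $gte: r.min, $lt: r.max } }).forEach(function (doc) {
  // ...transform doc, then write the change back
  db.col.update({ _id: doc._id }, { $set: { migrated: true } });
});
```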
[18:14:25] <tscanausa> alex285: modeling questions are generally better when you describe your use case in more detail
[18:16:09] <alex285> tscanausa,Each user has a unique profile. Profile keeps ratings, karma and things that a user has uploaded. It is a download service
[18:16:57] <alex285> tscanausa, I am using "User" only for authentication
[18:20:03] <alex285> right now I am using the second strategy
[18:20:26] <tscanausa> they seem like the same thing and you are adding unneeded complexity
[18:20:28] <alex285> but the 1st is kinda easier, so I want to ask if it is right too
[18:20:55] <alex285> so you reckon to have a single object?
[18:22:18] <alex285> but will user.profile => artwork work?
[18:22:38] <tscanausa> I reckon you should have a single object.
[18:22:39] <alex285> can I embed profile to a user and then connect things to profile?
[18:25:41] <alex285> tscanausa, I don't want the single-model approach to cost me maintaining the authentication system (devise in this case)
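The single-object shape tscanausa is suggesting would look roughly like this: auth fields (managed by Devise) stay at the top level, and the profile is embedded in the same document. All field values here are hypothetical:

```javascript
// Sketch: one user document with the profile embedded
const user = {
  _id: "u123",                    // hypothetical id
  email: "user@example.com",      // auth fields, e.g. managed by Devise
  profile: {
    karma: 42,
    ratings: [5, 4],
    artworks: ["art1", "art2"]    // references to uploaded items
  }
};
```

With this shape `user.profile.artworks` works directly, and Devise only ever touches the top-level auth fields.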
[20:12:15] <wiscas> hello guys, I've tried to upgrade a mongodb hdd as it was running out of space, so I've copied the content from one mountpoint to a new one, but when I try to restart the db I get the following error:
[20:12:15] <wiscas> Wed Jul 2 20:09:20.654 [conn20] xxxxxx.userNonAtomic Deleted record list corrupted in bucket 3, link number 7, invalid link is 6647145:1ef, throwing Fatal Assertion
[20:12:36] <wiscas> I'm not a mongodb expert.. can someone tell me what I'm looking at here and how to fix it?
[20:12:44] <wiscas> where to read about it would be cool too :)
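That assertion generally indicates on-disk corruption of the data files, which can happen if they were copied while mongod was still running. A common recovery sketch (paths are examples; always work on a copy of the files, and prefer restoring from a backup or resyncing from a replica set member if you have one):

```shell
# Make sure mongod is stopped, then attempt a repair on a COPY of the files
mongod --dbpath /data/db-copy --repair

# If the repair completes, start mongod normally against the repaired path
mongod --dbpath /data/db-copy
```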