[01:31:46] <aster1sk_> Sweeet Frozenlock, LMK how it goes.
[01:45:57] <Frozenlock> Aaaaaand now mongo won't start. "ERROR: dbpath (/data/db/) does not exist" This is NOT the path I've given it in etc/mongo.conf.
[01:46:48] <aster1sk_> Hmm what's ps aux | grep mongo
[01:46:54] <aster1sk_> If there's a -f flag, shit's broke.
[01:47:09] <aster1sk_> Meaning a -f that doesn't point to /etc/mongo*
[03:17:44] <Skillachie> if i replace tempFiles += values[i].num_files; with tempFiles += 1 here it works. Why is it not taking the value of num_files in the emit ?
[03:25:24] <aster1sk_> Are you using replication sets?
[03:25:34] <aster1sk_> Are you using read preference?
[03:25:54] <aster1sk_> If so drop M/R and use Aggregation Framework.
[03:26:06] <aster1sk_> M/R is a last ditch effort to get the things you want.
[03:27:58] <Skillachie> i am not using replication sets yet
[03:28:17] <aster1sk_> Well going to eat pizza and watch tv for a bit.
[03:47:51] <Skillachie> aster1sk_: I got it figured out. Seems i just made a mistake with the variable: the emit and the reduce need to use the exact same variable name.
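Skillachie's fix, sketched below. The property name emitted in `map` must be the same one `reduce` reads off each element of `values` (the collection and field names beyond `num_files`, which is from the log, are assumptions):

```javascript
// map emits an object with a "num_files" property...
var map = function () {
  emit(this.host, { num_files: this.num_files });
};

// ...so reduce must read values[i].num_files, exactly the same name.
var reduce = function (key, values) {
  var tempFiles = 0;
  for (var i = 0; i < values.length; i++) {
    tempFiles += values[i].num_files; // same property name as in emit()
  }
  return { num_files: tempFiles };
};

// In the shell this would run roughly as:
//   db.stats.mapReduce(map, reduce, { out: { inline: 1 } });
```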
[04:23:56] <elux> i just upgraded mongodb but i cant seem to start it now.. in the logs i see: exception in initAndListen std::exception: locale::facet::_S_create_c_locale name not valid, terminating
[09:42:26] <dEPy> I have a chat server with chat channels, and the messages sent to a channel are logged in mongodb.
[09:42:46] <dEPy> Should I put the messages in a format where I specify the channel as a message attribute
[09:42:54] <dEPy> or should I make a collection for each channel?
[09:43:24] <dEPy> keep in mind that those collections would never be deleted (i need messages to resolve disputes with clients)
[09:45:06] <dEPy> I just wanna know if there's a limit to how many collections you can have. Cause there's gonna be tens of thousands of channels. It's 1 on 1 chat and for each 1 on 1 chat 1 channel is created for the duration of that chat
[09:52:38] <christkv> dEPy: I would avoid using a collection per chat as it will make it harder for you to shard later if you need to scale up. there is also a hard limit of about 22K collections per db unless you change the --nssize parameter on mongod to be bigger than 16MB as all the namespaces for the collections are stored in the namespace file.
[09:59:15] <dEPy> another thing I could do, is optimize mongo for writes, and put the chat history in redis and read from there, and once the chat is finished remove from redis
[10:00:03] <Derick> this sounds like premature optimisation :-)
[10:00:06] <dEPy> cause the logs from mongo will be read only in case of client disputes, which i think will be rare and even then, WE will read the logs. That means one db read every few days maybe.
[10:00:43] <dEPy> It sounds this way because it probably is. :) I'm just an optimization junkie :D
[10:01:23] <dEPy> But you're right. I won't complicate things for now. I can easily add redis caching if we run into performance problems.
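christkv's advice above, sketched as a single `messages` collection with the channel as a document field (the field names here are illustrative assumptions, not from the log):

```javascript
// One "messages" collection; the channel is a field, not a collection name,
// so sharding (e.g. on channel_id) stays possible later.
var message = {
  channel_id: 'chat-4711',   // hypothetical id of the 1-on-1 channel
  from: 'alice',
  to: 'bob',
  body: 'hello',
  sent_at: new Date()
};

// A compound index keeps per-channel history reads cheap, e.g. in the shell:
//   db.messages.ensureIndex({ channel_id: 1, sent_at: 1 });
```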
[11:45:59] <brauliobo> hello all, i'm using mongo in a rails app and mongodump/mongorestore for backups. i have multiple collections referencing one another with ids, but when i use mongorestore ids change and so references are lost. how can I preserve them?
[11:46:52] <quazimodo> long story short, is mongodb with rails easy or hard
[13:34:48] <rh1n0> I have setup a replication set with 2 nodes. Everything seems to be working but i am curious how to configure connections from our application. Should i be using a loadbalancer or shared ip so if one box fails the application can still reach mongo? thanks
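One common answer to rh1n0's question (not given in the log): MongoDB drivers take a seed list of replica-set members and discover the current primary themselves, so no load balancer or shared IP is needed. A sketch, with hypothetical host names and set name:

```javascript
// Seed-list connection string; the driver fails over to the new primary
// on its own. Host names and "rs0" are assumptions for illustration.
var uri = 'mongodb://db1.example.com:27017,db2.example.com:27017/app' +
          '?replicaSet=rs0';
```

Note also that a 2-member set cannot elect a primary if one member fails; an arbiter or third member is usually added for that reason.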
[13:38:49] <ankakusu> I'm using node.js and I want to store image files to mongodb database.
[14:33:40] <skrammy> is it possible to name a map reduce job so i can identify it in the rest interface/console
[14:34:08] <skrammy> right now i have long running ones that sometimes are still running when the cron job starts a new one. the solution is for me to figure out how to do incremental map reduce but i want to figure this out before i implement that. any ideas?
[14:51:16] <ankakusu> Gargoyle, it can be an image, audio or video in my case.
[14:52:53] <ankakusu> actually, if I can manage to insert just one single image file, I will try to insert bigger files.
[14:53:01] <Gargoyle> We are currently just storing images, flash and audio files in a collection using the binary data type. With the understanding that we are limited by the 16mb doc limit on mongo 2.0
[14:53:37] <christkv> Gargoyle: yeah you want to use gridfs
[14:53:50] <christkv> also as it will chunk up the file
[14:54:21] <skrammy> just repeating in case anyone didn't see: is it possible to name a map reduce job so i can identify it in the rest interface/console
[14:54:22] <Gargoyle> Not much point unless you are talking getting close to the 16mb limit
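The point of christkv's GridFS suggestion: GridFS splits a file into many small chunk documents, so the file as a whole can exceed the 16mb single-document limit. A sketch of the arithmetic, assuming the 256 KB default chunk size of drivers of that era (later drivers use 255 KB):

```javascript
// GridFS stores each file as ceil(size / chunkSize) chunk documents
// plus one metadata document. 256 KB default is an assumption here.
var CHUNK_SIZE = 256 * 1024;

function chunkCount(fileBytes) {
  return Math.ceil(fileBytes / CHUNK_SIZE);
}

// e.g. a 50 MB video, well past the 16 MB doc limit:
var chunks = chunkCount(50 * 1024 * 1024); // 200 chunks
```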
[15:04:15] <quazimodo> i need to do some practical tutorials
[15:04:53] <quazimodo> like an account (business name) with user(s) and each has his/her own data, the business has categories of 'things' that the users can create/edit/share, etc
[15:05:39] <quazimodo> it's not a work project so it'd be a great time to learn
[15:05:55] <christkv> if the things are very varied mongodb makes sense
[15:06:35] <christkv> I usually say one of the benefits of mongodb comes from that documents are closer to objects
[15:06:37] <quazimodo> sure, i get the feeling that for extremely rigid, non-changing, pure data, relational is probably better
[15:07:10] <christkv> relational is better if the problem domain is strictly relational
[15:07:18] <christkv> most webapps seem to fall somewhat off that
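christkv's "documents are closer to objects" point, sketched for quazimodo's scenario: varied "things" with different shapes can live in one collection because each document carries its own structure (all field names below are hypothetical):

```javascript
// Two differently-shaped "things" for the same account; no shared
// rigid schema is required, unlike a relational table.
var things = [
  { account: 'acme', category: 'invoice', owner: 'jane',
    total: 120.5, lines: [{ sku: 'a1', qty: 2 }] },
  { account: 'acme', category: 'note', owner: 'bob',
    text: 'call supplier', sharedWith: ['jane'] }
];
```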
[16:29:25] <bhosie> I'm RTM on sharding. i've also watched the case study videos for Foursquare and Craigslist. The general consensus seems to be that you should plan ahead by choosing a shard key at the start rather than waiting until the day you may need to shard. If i'm using ec2 how big would a collection need to get before i would benefit from sharding and if i never get that big is there a drawback to setting a shard key anyway?
[17:42:19] <alaska> if my find clause references two keys for concurrency issues (let's say: db.collection.find({ name: 'foo', revision: 1}) and the 'name' key is already uniquely indexed, do I need to bother indexing the 'revision' key as well?
[17:43:03] <alaska> Basically, I'm combining this with upsert to ensure that I don't update anything in an entry that's been touched while I was looking at it client-side.
[17:44:06] <alaska> Should I use a compound index? Would it make a difference?
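alaska's pattern is a classic optimistic-concurrency update: match on `{ name, revision }`, bump `revision` on success so any concurrent writer's match fails. Since `name` is already uniquely indexed, the index narrows the match to one document and the `revision` test is a cheap residual filter, so a compound index buys little here (a judgment call, not from the log). A plain-JS simulation of the behavior:

```javascript
// Simulates roughly what this shell update does against an array:
//   db.collection.update({ name: 'foo', revision: 1 },
//                        { $set: { ... }, $inc: { revision: 1 } });
function tryUpdate(docs, name, revision, changes) {
  for (var i = 0; i < docs.length; i++) {
    var d = docs[i];
    if (d.name === name && d.revision === revision) {
      for (var k in changes) d[k] = changes[k];
      d.revision += 1;  // any writer holding the old revision now mismatches
      return true;
    }
  }
  return false;         // document was touched since we read it
}
```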
[18:05:21] <alaska> Looks like it's to the mailing list.
[19:08:33] <arussel> is there a way to do a groupBy ? (get me all the documents grouped by date)
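arussel's question goes unanswered in the log; the usual answer is a `$group` stage in the aggregation framework (2.2+). A plain-JS sketch of the same grouping, with assumed field names:

```javascript
// Roughly what a $group on date does, e.g. in the shell:
//   db.docs.aggregate({ $group: { _id: '$date', count: { $sum: 1 } } })
// (collection and field names are assumptions.)
function groupByDate(docs) {
  var groups = {};
  docs.forEach(function (d) {
    (groups[d.date] = groups[d.date] || []).push(d);
  });
  return groups;
}
```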
[19:21:48] <timeturner> does the Date type mean that mongo just shows the timestamp in a "Date" format but it actually just has the unix timestamp number? like 23509702384 or whatever
[19:22:04] <timeturner> otherwise it looks like a big waste of space
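timeturner's guess is right: BSON's Date type is stored as a 64-bit integer of milliseconds since the Unix epoch; the "date" look is only how shells and drivers display it. Illustrated with plain JS dates, which work the same way:

```javascript
// The stored value is just milliseconds since the epoch.
var d = new Date(2012, 6, 10);     // July 10 2012 (JS months are 0-based)
var millis = d.getTime();          // the integer actually stored
var roundTrip = new Date(millis);  // reconstructs the same instant
```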
[20:06:19] <Almindor> well I have a rather complicated map/reduce wrapped in a mongodb/function, I'd like to just call that function from within node.js and just get the results
[20:06:26] <timeturner> you would for sure have to drop down to the native driver for that
[20:06:43] <Almindor> yeah mongoose is too much about tables
[20:07:44] <Almindor> I just don't see a way to actually tell the driver "here's a function, put it on the db" or "call this function, give me the result object"
[20:10:33] <jmar777> Almindor: like a stored procedure?
[20:13:33] <jmar777> Almindor: although that won't let you store down a full m/r operation. you can just encapsulate some of the logic in functions there. just note that mongodb advises against using this approach
[20:24:25] <Almindor> jmar777: no I don't need it stored, I'd just like to be able to define and call a function (mongoside) from a driver not a console
[20:27:36] <Almindor> found this benbuckman.net/tech/12/06/understanding-mapreduce-mongodb-nodejs-php-and-drupal so I guess I'm going to use the mongoose mapreduce directly
[20:27:51] <Almindor> well it's basically just the mongodb driver's mapReduce
[21:04:56] <bagvendt> Hi guys, Would this be the right place to get some help for map reduce and/or distinct?
[21:38:24] <bagvendt> Any mapReduce wizards that want to help me out?
[21:40:29] <LouisT> if i'm using db.pastes.find().length();, how could i sort by unique by say 'User'?
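Reading LouisT's question as "count unique values of 'User'", the usual shell answer is `db.pastes.distinct('User').length` (`pastes` and `User` are from the log). A plain-JS sketch of what that counts:

```javascript
// Counts how many distinct values a field takes across documents,
// like distinct(field).length in the shell.
function countDistinct(docs, field) {
  var seen = {};
  docs.forEach(function (d) { seen[d[field]] = true; });
  return Object.keys(seen).length;
}
```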
[23:43:35] <chetane> Anyone familiar with building queries in Mongoose? I'm starting to learn and have a few questions
[23:44:35] <chetane> Specifically, in the following example: Person.find({ occupation: /host/ }) .where('name.last').equals('Ghost') -- Setting "occupation" in find(), is it equivalent to doing where("occupation").equals("host")?
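One wrinkle in chetane's question, stated as plain JS rather than Mongoose specifics: the two forms are close but not identical, because `{ occupation: /host/ }` is a regex (substring) match while `.equals('host')` is exact equality:

```javascript
// Regex match succeeds on any string containing "host"...
var regexMatch = /host/.test('ghostwriter');  // true: "host" is a substring
// ...while equality requires the exact value.
var exactMatch = ('ghostwriter' === 'host');  // false
```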
[23:47:07] <quazimodo> is there something like sqlite for mongo