[00:54:57] <tpayne> Hello. I'm trying to set up mongo to support auth. When creating a user with readWrite access that my client uses, do I create it on the admin database or on the actual database?
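A minimal sketch of one common approach to tpayne's question: create the application user on the database the client actually uses, with the readWrite role scoped to that database. The database name, user name, and passwords below are all hypothetical.

```python
# Hypothetical sketch: create an application user on the database it will use,
# rather than on "admin". Requires connecting as a user with userAdmin rights.
from pymongo import MongoClient

client = MongoClient("mongodb://admin:adminpass@localhost:27017/admin")  # assumed admin credentials
app_db = client["myapp"]  # hypothetical application database

# "createUser" is a database command; the readWrite role is scoped to this database.
app_db.command(
    "createUser", "appuser",
    pwd="apppass",
    roles=[{"role": "readWrite", "db": "myapp"}],
)
```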
[03:54:51] <langemann> root -> sub -> sub -> some data
[03:55:09] <joannac> how many categories can there be?
[03:55:35] <langemann> As many as the system can handle. I really don't know, 100-200 perhaps.
[03:55:49] <joannac> You might hit the maximum document size limit, then
[03:56:14] <langemann> I originally had them all split up in 4 different collections, tying them together using a relation key set to their parent ObjectId - better solution?
[03:56:32] <joannac> depends how you're using them
[03:56:55] <joannac> if you're retrieving a category, do you need to see all the subcategories
[03:57:34] <langemann> No. The categories are mainly used for navigation in a toolbar
[03:58:10] <langemann> Also, when a user wants to add new subcategories, I retrieve them
[04:00:41] <langemann> I was also thinking about just having one collection and differentiating on the category name, but I think that would give too little room to filter.
[04:02:54] <joannac> I suspect you'll hit https://jira.mongodb.org/browse/SERVER-831 at some stage
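For reference, a parent-reference layout like the one langemann describes (separate documents pointing at their parent, instead of one deeply nested document) might look like this in pymongo; the database, collection, and field names are made up.

```python
# Hypothetical parent-reference layout: each category is its own document
# that points at its parent, so there is no giant nested document to outgrow.
from pymongo import MongoClient

db = MongoClient()["example"]      # assumed database
categories = db["categories"]      # assumed collection

root_id = categories.insert_one({"name": "root", "parent": None}).inserted_id
books_id = categories.insert_one({"name": "books", "parent": root_id}).inserted_id

# Toolbar navigation only needs one level at a time:
top_level = list(categories.find({"parent": None}))
children_of_root = list(categories.find({"parent": root_id}))
```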
[08:08:31] <arussel> when I install the package from 10gen, I have this line: daemon --user "$MONGO_USER" $NUMACTL $mongod $OPTIONS in the start function of /etc/init.d/mongod
[08:11:55] <faved> hey, quick question. if you have a date field that you want to set a TTL on using expireAfterSeconds, is there a way that you can roll that expiry? say it's set to expire after 60 seconds, is there a call I can make at 50 seconds that will reset it to 60?
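A hedged sketch of how this is usually done: a TTL index removes a document expireAfterSeconds after the value stored in the indexed date field, so refreshing that field effectively resets the countdown. Collection and field names below are assumptions.

```python
# Sketch (names assumed): the TTL monitor deletes a document roughly 60 seconds
# after the value in "lastSeen", so updating "lastSeen" pushes the expiry back out.
import datetime
from pymongo import MongoClient

coll = MongoClient()["example"]["sessions"]
coll.create_index("lastSeen", expireAfterSeconds=60)

coll.insert_one({"_id": "abc", "lastSeen": datetime.datetime.utcnow()})

# ~50 seconds later, reset the 60-second window:
coll.update_one({"_id": "abc"}, {"$set": {"lastSeen": datetime.datetime.utcnow()}})
```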
[08:17:35] <ollivera> Does MongoDB support automatic cross datacenter replication?
[08:17:54] <Nodex> ollivera : you control your replication
[08:18:30] <Nodex> if you want your writes to scale to X/Y/Z then you can add that to your query or (I think) set it up as a default on your replica set
[08:24:50] <ollivera> Nodex, Okay, so it is possible to configure MongoDB so that if my primary data center goes down, I'll have automatic election of a new primary and failover. That's my question :)
[08:27:54] <Nodex> I suggest you read the documentation, it's all explained in there
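For concreteness, a hedged sketch of what initiating a replica set that spans two data centers might look like from Python. The set name, hostnames, and priorities are all assumptions, not a recommendation; an odd number of voting members is what lets a majority elect a new primary when one site is lost.

```python
# Hypothetical cross-datacenter replica set config. With two members in dc1 and
# one in dc2, losing dc2 still leaves a majority that can elect a primary.
from pymongo import MongoClient

client = MongoClient("dc1-a.example.com", 27017, directConnection=True)  # connect to one node directly
config = {
    "_id": "rs0",
    "members": [
        {"_id": 0, "host": "dc1-a.example.com:27017", "priority": 2},
        {"_id": 1, "host": "dc1-b.example.com:27017", "priority": 1},
        {"_id": 2, "host": "dc2-a.example.com:27017", "priority": 1},
    ],
}
client.admin.command("replSetInitiate", config)
```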
[08:32:40] <arussel> this is in the starting script of the yum package:
[08:32:55] <arussel> how does it expect a value for both dbpath and pidfile?
[08:33:44] <Berg> hello, I'm trying to sort my data using pymongo. I have been able to sort player profiles into an alphabetical list, but now I need to limit that list to 20 profiles so as to put it in a web page. sortlist=db.users.find().sort(keyword, method) is the sort method
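A sketch for Berg's question: chain limit() (and skip() for later pages) onto the sorted cursor. The field name "username" is an assumption.

```python
# Sort the profiles alphabetically and cap the result at 20 documents.
import pymongo
from pymongo import MongoClient

db = MongoClient()["example"]  # assumed database

page_one = db.users.find().sort("username", pymongo.ASCENDING).limit(20)

# skip() gives later pages, e.g. profiles 21-40:
page_two = db.users.find().sort("username", pymongo.ASCENDING).skip(20).limit(20)
```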
[08:40:26] <arussel> the 'daemon' command is not installed on my machine (aws ami), which package should provide it ?
[09:13:31] <mylord> starting out, is it a decent or bad decision, generally, to prefer databases over collections? i.e. have only 1 collection per database, and have about 10 dbs, for concepts like tournaments, scores, users, winners, turnyResults, etc
[09:13:52] <mylord> would it be bad if each would be a separate database, instead of a collection?
[09:14:41] <mylord> initially i thought since each database has its own write lock, it might be useful to have separate databases for performance, at least while data is small and possibly mostly in RAM?
[09:15:04] <mylord> and then have the especially big data in separate dbs?
[09:15:50] <eklavya> one of my queries is not having any effect
[09:16:03] <eklavya> so I started mongod with logging enabled
[10:42:30] <k_sze[work]> Hello. I suspect that one of the members in our replica set ran out of disk space while trying to synchronize data over.
[10:42:50] <k_sze[work]> How can I verify that that is the case?
[10:43:12] <k_sze[work]> (and what can I do without losing data?)
[10:43:54] <k_sze[work]> Can I just shut down that member, add more disk space and then start that member again?
[10:54:12] <aberrios> mongoNoob here. If I have a collection "jobs" and the documents have a field "status", and I have a process that grabs all jobs that have status "waiting" and updates each job to have a field "foo" = "bar" and "status" = "active". I'm worried that as the jobs collection grows this process is just going to get slower and slower. How would others approach this problem? I suppose in a RDB I would have a table live_jobs and once the process has run move t
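A hedged sketch of one common pattern for this kind of worker (not necessarily what aberrios should do): index "status" so the query only touches waiting documents regardless of collection size, and claim jobs atomically one at a time. Names are assumptions.

```python
# Index "status" so finding waiting jobs stays fast as the collection grows,
# then atomically flip each job from "waiting" to "active" as it is claimed.
from pymongo import MongoClient, ReturnDocument

jobs = MongoClient()["example"]["jobs"]   # assumed database/collection
jobs.create_index("status")

while True:
    job = jobs.find_one_and_update(
        {"status": "waiting"},
        {"$set": {"status": "active", "foo": "bar"}},
        return_document=ReturnDocument.AFTER,
    )
    if job is None:
        break  # no more waiting jobs
    # ... process the claimed job here ...
```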
[11:01:30] <slikts> is there a way to set a callback for when a specific field is changed in mongoose?
[11:01:44] <slikts> I want to hash the password field before saving, but not if it's already hashed
[11:02:34] <slikts> I currently check for .isNew and hash the field then, but I have to remember to do it manually when updating it
[11:45:43] <kali> mxck: it's a good idea to start with the driver before stacking layers and layers on top of it
[11:47:00] <kali> mxck: adaptation between a scripting language like python and mongodb is less painful than other combinations (sql vs anything, or mongodb with a strongly typed language)
[12:14:26] <mylord> kali, in case of file-system usage, will a database be much slower than a collection, or only negligibly slower, or?
[12:16:01] <kali> slikts: well, at least you got some understanding of what was mongodb... people going straight for the ODM are so lost when the ODM fails them...
[12:16:03] <Nodex> slikts : you really should learn the raw query language a little before using an ORM
[14:31:16] <asturel> on the primary I get this now for the secondary in rs.status: "lastHeartbeatMessage" : "rollback 2 error findcommonpoint waiting a while before trying again"
[14:31:41] <asturel> if I drop the db, will it fix it?
[14:32:12] <asturel> but I can't drop it: Mon Apr 7 16:31:43 uncaught exception: drop failed: { "errmsg" : "not master", "ok" : 0 }
[14:35:56] <asturel> is the only way to delete the dbpath?
[14:36:29] <asturel> bah, I did it but I still get "lastHeartbeatMessage" : "rollback 2 error findcommonpoint waiting a while before trying again"
[14:39:40] <amitprakash> Hi, pymongo throws an InvalidDocument: Cannot encode object: {'sd': {}} .. How do I resolve this?
[14:49:33] <rkgarcia> amitprakash: maybe the single quote, replace with "
[15:29:51] <ShortWave> if I just wanted to start building a driver, would it be better to A: reverse engineer existing drivers, or B: start from scratch?
[15:33:53] <cheeser> what i'm doing for my kotlin driver is wrapping around the 3.0 java driver core.
[15:34:13] <cheeser> so my kotlin driver exposes a kotliny API but delegates underneath to the java driver.
[15:34:27] <cheeser> much less work than writing your own driver.
[15:34:28] <ShortWave> no matter what I do, I have to write some kind of boilerplate translation logic to go from lua to the host application's networking interfaces
[15:35:31] <skot> lua is not jvm based. it is a c-style language and there are drivers already: https://github.com/moai/luamongo
[15:37:25] <skot> There may be interpreters in many languages but the main compiler creates machine code like c/c++/etc.
[15:37:29] <skot> Here is a list of impls: http://lua-users.org/wiki/LuaImplementations
[15:42:02] <cheeser> hrm. i thought lua could run on the jvm. cloudera has a java app that uses lua for configs/commands.
[15:53:02] <skot> Yep, there are interpreters out there, and the main distro has one too which is machine compiled.
[15:53:41] <cheeser> makes sense. i think the unreal engine uses lua so they'd need/want that native code in any case.
[15:54:09] <skot> It does well in a "scripting" environment for things like that, and games (which is generally c/c++)
[15:57:48] <saml> select distinct status from tb; how do I do that?
[15:58:28] <rkgarcia> saml: use aggregation framework
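For what it's worth, distinct() also covers saml's query directly; a sketch showing both it and the aggregation route rkgarcia mentions, using the collection name from the question:

```python
# Equivalent of "select distinct status from tb".
from pymongo import MongoClient

tb = MongoClient()["example"]["tb"]   # assumed database name

statuses = tb.distinct("status")

# or, with the aggregation framework:
statuses_agg = [doc["_id"] for doc in tb.aggregate([{"$group": {"_id": "$status"}}])]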
[17:04:23] <ShortWave> if I'm using $match, can I use $not?
[17:04:29] <ShortWave> It claims to be an aggregation operator...
[17:04:55] <ShortWave> or should I use $match + $ne?
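On ShortWave's question: $match takes ordinary query operators, so $ne works inside it, and $not can be used to wrap another operator expression. A small sketch; the field names and values are assumptions.

```python
# $match in a pipeline accepts the same query operators as find().
from pymongo import MongoClient

coll = MongoClient()["example"]["things"]  # assumed collection

pipeline = [
    {"$match": {"status": {"$ne": "archived"}}},        # simple negation with $ne
    # {"$match": {"count": {"$not": {"$gt": 10}}}},      # $not wrapping an operator expression
    {"$group": {"_id": "$status", "n": {"$sum": 1}}},
]
results = list(coll.aggregate(pipeline))
```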
[17:45:35] <Danny_Joris> I added replSet = rs0 and oplogSize = 100 to my mongodb.conf file in ubuntu. Though when I log in I don’t think it’s working.
[17:45:55] <Danny_Joris> when checking db.getCollectionNames(), I get this error: { "$err" : "not master and slaveOk=false", "code" : 13435 } at src/mongo/shell/query.js:128
[17:46:22] <Danny_Joris> on stackoverflow I read I should use rs.slaveOk() but that doesn’t help
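One frequent cause of "not master and slaveOk=false" right after adding replSet to the config is that the replica set was never initiated, so the node has no primary yet. A hedged sketch of initiating it from Python (the shell equivalent is rs.initiate()); the set name and host are assumptions matching Danny_Joris's config.

```python
# After restarting mongod with "replSet = rs0", the set still has to be
# initiated once before this node can become primary and accept reads.
from pymongo import MongoClient

client = MongoClient("localhost", 27017)
client.admin.command("replSetInitiate", {
    "_id": "rs0",
    "members": [{"_id": 0, "host": "localhost:27017"}],
})
print(client.admin.command("replSetGetStatus"))
```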
[18:02:47] <_sri> it's funny how 2.6.1 has a planned release date already
[18:41:35] <mboman> I have an issue when I try to download a file from GridFS using python: The way I do it stores the whole lot in memory before writing to file and I need a more memory-efficient way to do it..
[18:43:59] <mboman> The way I do it now: https://github.com/mboman/vxcage-jobs/blob/master/utils.py : the get_file() function...
[18:46:16] <mboman> Ah.. Think I found it.. Need to supply size parameter to read() and iterate
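A sketch of the chunked GridFS download mboman describes: read the GridOut in pieces instead of pulling the whole file into memory. The database name, file id, and chunk size are assumptions.

```python
# Stream a GridFS file to disk in fixed-size chunks rather than one big read().
import gridfs
from pymongo import MongoClient

db = MongoClient()["vxcage"]   # assumed database name
fs = gridfs.GridFS(db)

def save_file(file_id, dest_path, chunk_size=1024 * 1024):
    grid_out = fs.get(file_id)
    with open(dest_path, "wb") as dest:
        while True:
            chunk = grid_out.read(chunk_size)
            if not chunk:
                break
            dest.write(chunk)
```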
[19:17:48] <ShortWave> I'm getting an error I don't understand.
[19:24:33] <mboman> ShortWave, and the error message is?
[19:32:05] <in_deep_thought> should starting mongodb be different on arch linux? I try mongod and it gives me ERROR: dbpath (/data/db/) does not exist. But then according to the arch linux wiki I try sudo systemctl start mongodb and it works
[19:32:11] <asturel> if I have a document like {something: {something2: "x"}} how can I find it by the something2 value?
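Dot notation reaches into embedded documents, which answers asturel's example directly; a small sketch with an assumed collection name:

```python
# Query an embedded field with dot notation.
from pymongo import MongoClient

coll = MongoClient()["example"]["docs"]   # assumed collection
coll.insert_one({"something": {"something2": "x"}})

doc = coll.find_one({"something.something2": "x"})
```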
[19:39:59] <in_deep_thought> I get Mon Apr 7 12:37:48.119 Error: couldn't connect to server 127.0.0.1:27017 at src/mongo/shell/mongo.js:145 when I try and connect to the shell with mongo. The start command ran without error. I can't tell what is going on in this error as it doesn't give me much info
[19:41:31] <mboman> ShortWave, Sorry, I have not been using geolocation with MongoDB before :(
[20:03:13] <in_deep_thought> I would ask this on the mongoose irc but there doesn't seem to be anyone there: when I use the mongoose.connect() command in node does that switch the current mongo database to the one I specify, or do I need to manually switch it first with the mongo shell?
[20:06:17] <in_deep_thought> it looks like I had to manually switch it
[20:13:13] <asturel> could mongodb run on an amazon aws free tier?
[20:13:54] <asturel> basically mostly inserts, like 1 insert/s
[20:16:20] <Joeskyyy> is it possible? Yes. Should you do it? Absolutely not lol
[20:18:07] <Joeskyyy> You'd run out of memory all the time, and it'd be so terribly slow.
[20:18:21] <Joeskyyy> Depending on what you're storing you may want to try a different datastore, like Redis or something
[20:19:00] <Joeskyyy> Redis could be a problem too though, since that's all in mem, just depends on what you're doing again
[20:22:30] <BlakeRG> Hello all, I have a collection of documents with a 'birthday' field in each. What's the best way to pull out a list of age ranges for those documents if the date format is something like '01/10/1979'?
[20:24:17] <Joeskyyy> You should probbbbbably use Date() instead to make those fields.
[20:24:27] <Joeskyyy> Then you can sort, query, etc.
[20:24:56] <BlakeRG> Joeskyyy: yeah, I realize that now.. but what kind of query would I want to run against the collection to pull the data I need out at this point
[20:25:29] <Joeskyyy> You'd need to write a script to use "/" as the separator, then sort on each block, would be how I would approach it
[20:25:46] <Joeskyyy> Since it's a string, you're kind of limited in what you can do
[20:26:08] <BlakeRG> Joeskyyy: are we talking group? map/reduce?
[20:26:54] <Joeskyyy> I'd imagine there's some JS way of doing it, but I'm no mighty JS programmer. I'd prolly look to awk/python to help me with that.
[20:27:00] <Joeskyyy> But that's just my tastebuds
[20:31:34] <BlakeRG> thanks Joeskyyy, will go that route, not too hard but I thought it might be easier
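A sketch of the kind of script Joeskyyy suggests, assuming the 'birthday' strings are in month/day/year order; the collection name and bucket boundaries are assumptions.

```python
# Parse the '01/10/1979'-style strings client-side and count documents per age range.
from datetime import datetime
from pymongo import MongoClient

coll = MongoClient()["example"]["people"]   # assumed collection
today = datetime.utcnow()
buckets = {"<18": 0, "18-29": 0, "30-49": 0, "50+": 0}

for doc in coll.find({}, {"birthday": 1}):
    born = datetime.strptime(doc["birthday"], "%m/%d/%Y")   # assumed MM/DD/YYYY
    age = (today - born).days // 365
    if age < 18:
        buckets["<18"] += 1
    elif age < 30:
        buckets["18-29"] += 1
    elif age < 50:
        buckets["30-49"] += 1
    else:
        buckets["50+"] += 1

print(buckets)
```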
[20:32:49] <pasichnyk> hey, I'm having some issues detecting if an Update() with $addToSet actually inserted a record or if the record already existed. I'm using WriteConcern "Acknowledged" and after it finishes I check the DocumentsAffected and "UpdatedExisting" properties (c# driver). However if my processing errors and restarts, I'm getting duplicate downstream processing performed that should only happen if
[20:32:49] <pasichnyk> a new item is inserted with $addToSet. Thoughts?
[20:47:42] <asturel> Joeskyyy1 then what about a 512MB RAM + 1 core cheap DO vps?
[20:52:49] <slikts> am I right in thinking that the _id field will be indexed even if I do something like _id: String?
[21:31:07] <TylerE> If I recreate a collection using $out in an aggregation pipeline, which recreates the collection atomically, do any indexes stick around?
[21:43:10] <TylerE> how can I splat out a date in an aggregation project
[21:43:27] <TylerE> basically i have full precision timestamps and want Y-M-D hh:00
[21:43:48] <TylerE> I tried using concat and the date operators ($year etc) but $concat refuses anything that isn't a string
[21:43:55] <TylerE> and the date ops return integers
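One hedged workaround for TylerE's problem on servers of that era: skip the string concatenation entirely and project the date parts as numbers, then group on them to get hour-level buckets. The collection and timestamp field names are assumptions.

```python
# Truncate timestamps to the hour by grouping on the projected date parts,
# instead of trying to $concat integers into a "Y-M-D hh:00" string.
from pymongo import MongoClient

coll = MongoClient()["example"]["events"]   # assumed collection with a "ts" date field

pipeline = [
    {"$project": {
        "y": {"$year": "$ts"},
        "m": {"$month": "$ts"},
        "d": {"$dayOfMonth": "$ts"},
        "h": {"$hour": "$ts"},
    }},
    {"$group": {
        "_id": {"y": "$y", "m": "$m", "d": "$d", "h": "$h"},
        "count": {"$sum": 1},
    }},
]
hourly_buckets = list(coll.aggregate(pipeline))
```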
[21:44:56] <hydrajump> hi is it possible to access the material /videos from this course https://university.mongodb.com/courses/10gen/M101JS/2014_March/about all in one go, e.g. before the course is complete?
[22:17:04] <asturel> i keep getting this: "lastHeartbeatMessage" : "rollback 2 error RS101 reached beginning of local oplog [2]" any idea?
[23:05:23] <joannac> asturel: looks like you fell off the oplog
[23:12:56] <andrewfree> Are there plans to add back the functionality of the identity_map for mongoid in rails 4?