#mongodb logs for Friday the 14th of September, 2012

[00:08:03] <_m> wzlwzl: Systems guy says he brings up a new node, gets the data in sync, then adds that node as the master
[00:08:09] <_m> Because it's faster.
[00:08:21] <wzlwzl> yea.. i don't have any data
[00:08:23] <wzlwzl> they're all empty
[00:08:26] <wzlwzl> just starting over
[00:08:31] <wzlwzl> then will restore
[00:08:35] <wzlwzl> (hopefully)
[00:09:00] <_m> Yeah. Bring up one node in standalone mode
[00:09:05] <_m> Restore the data
[00:09:22] <_m> Then make it primary and add the empty secondary (which will be synced automagically)
[00:09:50] <_m> He says that's the easiest/most painless way. Obviously, YMMV. Sorry I couldn't provide more information than that.
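
A rough shell sketch of the sequence _m describes (hostnames and paths are hypothetical):

    # start node1 WITHOUT --replSet and restore the data into it:
    mongorestore /backup/dump
    # restart node1 WITH --replSet rs0, then in its mongo shell:
    > rs.initiate()                        // node1 becomes primary
    > rs.add("node2.example.com:27017")    // empty secondary; initial sync is automatic
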
[00:17:41] <aboudreault> _m, in mongodb, for my example use case, would you create one collection per user?
[00:17:52] <aboudreault> or can a single collection scale big enough?
[00:20:47] <_m> aboudreault: A single collection with multiple user documents
[00:21:23] <aboudreault> this is a feature of mongodb? haven't seen it yet. Currently reading the doc online
[00:21:50] <_m> Think of a collection as a "table" within a relational DB
[00:22:54] <aboudreault> yeah, that's how I see it. However, in mongodb, I do not have a pretty foreign key to link with the user.
[00:22:55] <_m> Each user could then have an array with document information stored. selectors on said array are pretty easy to understand/use and fairly performant.
[00:23:10] <_m> You shouldn't need one, tbh
[00:23:17] <aboudreault> ok, cool
[00:24:17] <Vile> btw, foreign keys do work even without being declared as such
[00:24:28] <_m> ^this
[00:24:48] <_m> http://www.mongodb.org/display/DOCS/Dot+Notation+%28Reaching+into+Objects%29
[00:24:55] <Vile> just have to make sure that you take care of consistency
[00:25:14] <_m> Should give you more insight on how arrays within a document work. Which should provide some insight into document design
[00:25:25] <Vile> 16mb limit
[00:25:27] <_m> Whether you're storing filesystem paths, s3 keys, etc.
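
A minimal sketch of the single-collection design _m is describing (collection and field names are hypothetical):

    > db.users.insert({
          name: "aboudreault",
          files: [ { path: "/data/a.txt", size: 120 },
                   { path: "/data/b.txt", size: 300 } ]
      })
    // dot notation reaches into the embedded array:
    > db.users.find({ "files.path": "/data/a.txt" })
    // per Vile's later advice, index the array field you query on:
    > db.users.ensureIndex({ "files.path": 1 })
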
[00:25:59] <_m> I would be wary of serializing a binary data structure into mongo.
[00:26:47] <Vile> why?
[00:27:29] <_m> IMO, there are better approaches. That's purely my view though.
[00:27:43] <Vile> gridfs?
[00:27:48] <_m> s3
[00:28:23] <Vile> s3 is somewhere.
[00:28:24] <_m> I've never had a chance to use gridFS in production
[00:28:56] <Vile> not in your server room (which might have no internet btw)
[00:29:01] <aboudreault> _m, I see ok... any string or value can implicitly be a foreign key if we want
[00:29:15] <_m> aboudreault: Need to head-down for a bit. Will try to answer more questions in a bit.
[00:30:00] <_m> Vile: In cases where I want to store user documents for their retrieval, having no internets is going to mean a lack of my service.
[00:30:37] <_m> Also, I don't have a DC unless you count our Rackspace Cloud as a DC (i wouldn't recommend their product, btw)
[00:30:41] <Vile> _m: there's a big market for intranet solutions %)
[00:31:04] <aboudreault> Vile, you use GridFS in production?
[00:31:09] <Vile> aboudreault: don't forget to create index on those
[00:31:54] <Vile> for user-uploaded things. so far - no problems
[00:32:02] <aboudreault> Vile, yes, very important. I also test my query execution time when developing. mongodb has the same thing with explain().
[00:32:08] <_m> As I stated before, *my* use cases and experience tend to lean toward "this is easier to let s3 handle." I can see the usefulness of other techniques and would implement them if my stack leaned to that use-case
[00:32:42] <aboudreault> Vile, that's exactly what I need. user uploaded file. video + images. So GridFS was very nice. still need to read the doc though
[00:32:57] <Vile> good night. /me gone to bed
[00:33:11] <aboudreault> Vile, bye
[00:33:49] <aboudreault> _m, yeah, S3 is probably a very good solution too. never worked with it. You probably just put a file's unique uuid in your mongo documents?
[00:34:33] <_m> Basically, yeah.
[00:35:00] <aboudreault> and you don't have any server or disk space to maintain. so yeah, that's something to think about.
[00:36:26] <_m> "key"=>"users/foo/1b8c2913e19793cc1b2e970d8bb388999b8f27a2.jpg"
[00:36:46] <_m> Is what I generally store. It's pretty simple, really.
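
So the document carries only a pointer to the externally stored file; a sketch (field names hypothetical):

    > db.users.update(
          { _id: userId },
          { $push: { images: { key: "users/foo/1b8c2913e19793cc1b2e970d8bb388999b8f27a2.jpg" } } }
      )
    // the bytes live in S3 under that key; mongo stores only the reference
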
[00:41:16] <aboudreault> _m, yeah
[00:41:24] <aboudreault> this looks like ruby :S
[00:43:00] <aboudreault> gtg, see you later. and thanks a lot for your help.
[00:45:04] <_m> aboudreault: You're welcome. Good luck with your project!
[01:43:22] <skiz__> I'm trying to set up a single shard on 1 machine with a router/config set up on another (for testing purposes) the config/router seems to be working fine, but when I attempt to add a shard ( which has shardsvr=true ) I get "errmsg" : "couldn't connect to new shard mongos connectionpool: connect failed
[01:43:52] <skiz__> there are no firewall issues, using admin db via the mongos, and I can connect to it remotely on the mongos shard port
[01:43:59] <skiz__> anything I may have missed?
[01:46:22] <Dr{Who}> we tried phpmoadmin and it ended up modifying a lot of our collections and we had a fun day of fixing stuff. Does anyone know if RockMongo has this same problem?
[02:44:39] <ojon_> maybe someone familiar with the mongoskin odm for nodejs can answer my question:
[02:44:52] <ojon_> http://stackoverflow.com/questions/12417425/efficiency-of-mongodb-mongoskin-access-by-multiple-modules-approach
[02:52:49] <skiz__> bind fail
[03:01:55] <IQrow> Hello world!
[03:02:17] <retran> herroo IQrow
[03:02:53] <IQrow> Thanks for the welcome, I'm only here for just a second, was making sure this little chat room existed.
[03:03:07] <IQrow> Will be back in the future I'm sure, if not just to lurk
[03:03:15] <IQrow> Farewell for now!
[03:14:16] <CannedCorn> hey guys would appreciate any feedback people have to this: http://tylerbrock.github.com/mongo-hacker/
[03:14:37] <CannedCorn> attempting to make the shell more usable, any good ideas will eventually make it into the shell proper
[04:22:04] <Gavilan2> hi! Is there any way to simulate transactions or some kind of atomic things across different documents?
[04:27:02] <retran> Gavilan, http://www.mongodb.org/display/DOCS/Atomic+Operations
[04:27:15] <retran> forget 'simulation'
[04:27:18] <retran> just do it
[04:30:58] <Gavilan2> but it says it's just for single documents...
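
Right: the atomic operators act within a single document only. A sketch of the distinction (collection name hypothetical):

    // atomic, because it touches one document:
    > db.accounts.update({ _id: "a" }, { $inc: { balance: -50 } })
    // there is no server-side transaction spanning two documents; the usual
    // 2012-era workaround is an application-level two-phase-commit protocol
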
[04:49:56] <tomlikestorock> is there a way to remove a member of a replica set without causing the remaining members to resync?
[07:10:24] <dhilip> i'm a new user to mongodb
[07:10:50] <dhilip> can anyone suggest where to start
[07:19:20] <_m> http://www.mongodb.org/display/DOCS/Tutorial is probably as good as any
[07:19:27] <kali> start by staying a few more seconds on irc after asking a question
[07:19:49] <kali> *sigh*
[07:19:59] <_m> http://tutorial.mongly.com/tutorial/index
[07:20:28] <_m> Did he /part? I have those messages yanked.
[07:20:30] <kali> _m: too late, he's been gone
[07:20:32] <_m> kali: le sigh
[07:20:39] <kali> for... ages.
[07:22:01] <_m> How the beards have grown in such time.
[07:37:25] <[AD]Turbo> hi there
[07:37:43] <_m> 'ello
[10:22:24] <gigo1980> how can i run repairDatabase on a slave node so that it does not block the whole system?
[11:12:48] <kali> gigo1980: IIRC, a repair on a slave switches it to recovery mode, so the queries are routed to other nodes
[11:14:04] <kali> i'm wrong.
[11:14:27] <kali> gigo1980: http://www.mongodb.org/display/DOCS/Durability+and+Repair
[11:15:00] <kali> gigo1980: you need to stop it, change the port and discard the replSet option, then run the repair
[11:16:20] <kali> gigo1980: it might be more practical to just ditch the secondary data and let it do a full sync (this is how we deal with broken secondaries here)
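
A sketch of the procedure kali outlines (port and dbpath hypothetical):

    # stop the secondary, then restart it standalone on a different port:
    mongod --port 27027 --dbpath /data/db      # note: no --replSet
    # in its shell, repair each database:
    > db.repairDatabase()
    # then restart with the original port and --replSet options
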
[12:04:56] <gigo1980> kali: thx, that way works fine
[12:08:18] <typecast> there is some unexpected behavior in one step of my aggregation pipeline that I don't understand. I hope you can give me some pointers on this one
[12:08:42] <typecast> mongo is used with the pymongo driver and the expression in the pipeline is
[12:08:47] <typecast> { '$project': { 'bucket': { '$divide': [ "$first", int(self.bucket_interval) ] } } }
[12:08:55] <typecast> $first contains an integer
[12:09:18] <typecast> so what I would expect from the documentation is that bucket is also an integer
[12:09:25] <typecast> at least, that's what I read from http://mongodb.onconfluence.com/pages/viewpage.action?pageId=38207860#AggregationFramework-ExpressionReference-ArithmeticOperators
[12:09:40] <typecast> is that assumption not correct?
[12:09:49] <typecast> (because what I get is a float)
[12:11:16] <jmar777> typecast: what makes you expect to get an integer? i don't see that specified in the documentation?
[12:11:38] <jmar777> "takes an array containing a pair of numbers and returns the value of the first number divided by the second number."
[12:11:39] <typecast> jmar777: there's the table of expressions
[12:12:04] <typecast> well, but that table is gone in the newest version of the documentation
[12:12:35] <jmar777> typecast: ahh, i see where you're referring to though
[12:15:13] <typecast> the problem is: they need to be integers
[12:15:47] <typecast> so, I'm now wondering whether this is normal behaviour, a bug in mongo or a bug in pymongo
[12:16:10] <typecast> but I have no idea how to proceed from here
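
One workaround (not from the log): keep the arithmetic integral by subtracting the remainder with $mod instead of dividing. Shell syntax; 60 stands in for bucket_interval:

    > db.coll.aggregate({ $project: {
          bucket: { $subtract: [ "$first", { $mod: [ "$first", 60 ] } ] }
      } })
    // yields the integer start of each bucket rather than a fractional index;
    // divide by the interval in the application if the index itself is needed
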
[12:20:18] <Gargoyle> is passing safe = true as an option to update() using $set the same as a full document update? eg. makes sure that there are no errors at the expense of application speed?
[12:22:41] <NodeX> safe just means it will write to X nodes before returning
[12:23:01] <Gargoyle> OK. Related q.
[12:23:06] <NodeX> I think the default is at least 1 node but you can configure that... if you're in a 1 node system it will sync it to disk
[12:23:33] <NodeX> (It possibly syncs to disk on at least one node in a multi-node system too, I would imagine)
[12:24:01] <Gargoyle> if two lines of code call update(blah blah, $set => etc) very quickly, is it possible that they would get run in different order on the server?
[13:10:22] <NodeX> Gargoyle : there is a lock but there is no guarantee which one would get saved / updated first - this depends on the latency
[13:11:12] <Gargoyle> NodeX: Do you know if using safe would solve that?
[13:12:05] <NodeX> if you need transactions then mongo is probably not right for you
[13:12:58] <NodeX> http://www.mongodb.org/display/DOCS/Atomic+Operations#AtomicOperations-%22UpdateifCurrent%22
[13:13:03] <NodeX> perhaps that can help
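
The linked "update if current" pattern, sketched with a hypothetical version field:

    // read the document and remember its version (say 3), then:
    > db.things.update(
          { _id: id, version: 3 },                         // matches only if unchanged
          { $set: { state: "done" }, $inc: { version: 1 } }
      )
    // if a concurrent writer got there first the match fails, and the
    // application re-reads and retries
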
[13:17:21] <Gargoyle> It's not really a transaction. issue. Just more I need to think a bit more about my app logic.
[13:53:32] <gigo1980> is it possible that one process writes data to a sharded cluster via mongos router a, and another process reads that data from a different mongos router b? is it possible for the data to be inconsistent?
[14:03:28] <USvER> Hello,
[14:03:47] <USvER> How do I check if an array contains a value?
[14:04:38] <termite> x:{$in:[1,2,3]}
[14:04:43] <termite> I believe
[14:05:05] <USvER> For example i have object like {"_id" : "497ce96f395f2f052a494fd4", users: ["Abby", "Leo"]}
[14:05:35] <USvER> how do i check if the document has Leo in the users array
[14:05:37] <termite> USvER: For example i have object like {"_id" : "497ce96f395f2f052a494fd4", users: {$in: ["Abby", "Leo"]}}
[14:06:16] <USvER> this is the opposite of what i asked
[14:06:38] <USvER> it checks that the users value is in the array i provided
[14:06:56] <USvER> but i want to check if the value i provided is in the users array
[14:07:29] <USvER> Am i wrong? because i'm totally lost in this xD
[14:08:53] <USvER> I guess $ne should work
[14:09:09] <USvER> but can't find whether it works on arrays
[14:09:40] <termite> well can't you just use user.leo in a where clause
[14:10:23] <USvER> hmmm
[14:10:51] <USvER> what if the username has a "-" in it?
[14:14:53] <USvER> $ne works
[14:15:12] <USvER> Just not documented
[14:15:38] <termite> {"_id" : "497ce96f395f2f052a494fd4", "users.Leo": {$exists: true}}
[14:15:44] <termite> think that will work also
[14:18:41] <USvER> Hm... strange )
[14:18:52] <USvER> $in works too....
[14:19:00] <USvER> totally confused =\
[14:19:41] <USvER> db.lol.find({users:{$in:["Leo"]}})
[14:19:55] <USvER> works like a charm =\
[14:21:08] <USvER> Sorry for bothering you... The documentation doesn't mention this
[14:26:38] <jY> http://www.mongodb.org/display/DOCS/Advanced+Queries#AdvancedQueries-%24ne
[14:29:29] <USvER> Says nothing about arrays
[14:29:57] <USvER> $in too
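
For the record, all of this follows from one rule: a plain query value is matched against each element of an array field, so no operator is needed for membership:

    > db.lol.find({ users: "Leo" })               // any element equals "Leo"
    > db.lol.find({ users: { $in: ["Leo"] } })    // equivalent here
    > db.lol.find({ users: { $ne: "Leo" } })      // NO element equals "Leo"
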
[14:57:13] <termite> Odd question: does anyone know if you can use $in with _id to search for a list of docs?
[14:57:23] <kchodorow> yes
[14:57:46] <termite> how do you get it to work? I always have to use ObjectId
[14:59:52] <termite> if I use {"_id": {"$in": ["23wer342wrewfds", "rfw5wfsfsfssfss"]}} it returns an empty set every time (ids are just examples)
[15:03:39] <kchodorow> can you paste a real doc you're trying to find? at least its _id?
[15:05:15] <termite> db.targets.find({"_id":{"$in":["50528097547d7dbc6a000001"]},"locs.loc":{"$within":{"$center":[[-77.585101,38.283524],100]}}})
[15:06:02] <termite> wrong one
[15:06:24] <termite> db.targets.find({"_id" : {$in:["501d9acd0f33b0475c000000"]},"locs.loc":{"$within":{"$center":[[-77.585101,38.283524],100]}}})
[15:08:41] <termite> I am looking for records in a list that also are within a certain radius of position x,y
[15:11:18] <remonvv> Does anyone know if a cursor can time out while iterating over it? Or does it just time out if getmore is not invoked for a long time?
[15:12:40] <Gargoyle> remonvv: Not seen one timeout, and have had some scripts that run for hours.
[15:12:50] <remonvv> On a single cursor?
[15:12:58] <Gargoyle> yup
[15:13:03] <remonvv> Cool
[15:13:04] <kchodorow> if it's idle for 10 minutes it'll time out
[15:13:07] <remonvv> Thanks
[15:13:21] <remonvv> Ok, thanks. Just making sure I didn't just commit some dodgy code ;)
[15:13:33] <Gargoyle> But your iterator WILL loop more times than the length of the result set
[15:13:48] <kchodorow> termite: what does the document look like
[15:13:50] <kchodorow> ?
[15:13:57] <remonvv> Gargoyle, why?
[15:14:01] <Gargoyle> if you are updating the collection during the loop.
[15:14:03] <Gargoyle> ???
[15:14:05] <Gargoyle> Just does!
[15:14:26] <remonvv> Well yes, but that's expected behaviour.
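
The repeat visits Gargoyle describes happen when updates grow documents and move them on disk mid-scan; the era's shell offers snapshot mode to suppress it (a sketch):

    > db.coll.find().snapshot().forEach(function(doc) {
          // each document is returned at most once, even if updated during the loop
      })
    // snapshot() can't be combined with sort() or hint()
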
[15:14:38] <termite> kchodorow: why does that matter? I just want to grab the ids
[15:15:00] <kchodorow> termite: you probably want $in:[ObjectId("501d9acd0f33b0475c000000")]
[15:15:24] <kchodorow> but i can't be sure without seeing what type your _id is
[15:15:52] <termite> kchodorow:tried that but the node.js driver just converts it to a string and it fails
[15:16:09] <remonvv> if(timeItTakesTo(ANSWER_QUESTION) <= timeItTakesTo(ASK_WHY_QUESTION_IS_RELEVANT)) answerQuestion();
[15:16:14] <rguillebert> hi
[15:16:20] <termite> kchodorow: thanks for the help will try it again maybe I messed up on it
[15:16:30] <rguillebert> can I have an object as a mapreduce key ?
[15:18:20] <kchodorow> termite: i'm not familiar with the node.js driver, that's just how it would be in the shell
[15:18:26] <kchodorow> maybe ask on the user list
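
kchodorow's point in shell form: the BSON type has to match, so hex strings only find string _ids:

    > db.targets.find({ _id: { $in: [ ObjectId("501d9acd0f33b0475c000000") ] } })
    // ObjectId("...") and its hex string are different BSON types

The node.js driver has its own ObjectID class for the same purpose; passing bare strings queries for string _ids, which is the empty-result trap termite hit.
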
[15:18:52] <kchodorow> rguillebert: yes
[15:19:33] <rguillebert> kchodorow, so if the fields of the objects are identical they will be reduced ?
[15:27:04] <kchodorow> rguillebert: it probably has to be an exact match, i.e., {x:1,y:1} won't be the same as {y:1,x:1}
[15:27:32] <rguillebert> ok, it's not a problem in my case
[15:33:17] <remonvv> kchodorow, i still think that's a bit of a bug btw. It's value equality that matters really.
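
What that means inside a map function: keys are compared as raw BSON, so field order is significant:

    // these emits land in DIFFERENT groups despite equal field values:
    emit({ x: 1, y: 1 }, 1);
    emit({ y: 1, x: 1 }, 1);
    // building every key with the same field order avoids the mismatch
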
[15:39:10] <andywdc> Hi Folks!
[15:39:17] <andywdc> This should probably be obvious
[15:39:47] <andywdc> but i have a collection called users, a collection called items, and i want to make a new collection which stores which items a user has - is that how we are meant to do it in mongodb?
[15:42:04] <Vile> andywdc: why would you do that?
[15:42:38] <kchodorow> remonvv: the mapreduce thing?
[15:42:44] <Vile> if you have relatively limited amount of items per user - just create an array of the item _id's for user
[15:44:13] <andywdc> thats where my understanding dies
[15:44:19] <andywdc> i have a bunch of items
[15:44:20] <andywdc> all with an id
[15:44:24] <andywdc> i have a bunch of users, all with ids
[15:44:36] <andywdc> how do i associate an item to a user when they own something?
[15:44:39] <andywdc> and store that association
[15:46:10] <termite> andywdc: a relational database like MySql is meant for something like this.
[15:46:56] <andywdc> well presumably you can do it with mongo...i mean what are the typical use cases for mongo in that case?
[15:47:38] <termite> andywdc: As suggested before just create one collection with documents like {user:frank, items: []}
[15:48:17] <andywdc> hmmmmm
[15:48:26] <andywdc> but as the user buys TONS of stuff, that user entry would grow massively
[15:49:02] <termite> andywdc: Yup
[15:49:47] <andywdc> hmmm
[15:49:51] <andywdc> so why do people use mongodb:?
[15:49:55] <termite> andywdc: Mongo is fast because it is document based and not relational. If your data needs require a lot of joins then Mongo isn't a good idea
[15:49:58] <andywdc> what use cases
[15:51:08] <termite> andywdc: mongo would be perfect for receipts. Since the information is not going to change
[15:51:08] <andywdc> sooo if i wanted to store all the items in my factory and never do anything else with the data - then mongodb suffices
[15:51:16] <andywdc> but what complicated use cases does mongodb have!?
[15:52:34] <termite> andywdc: remember that each document has a maximum size so if you have the possibility of infinite orders you are going to slam into that limit
[15:53:50] <termite> andywdc: it's not supposed to be complicated. Using it to store item data for really fast retrieval is a great idea
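
A sketch of the embedded-array association Vile and termite describe (names hypothetical):

    > db.users.insert({ _id: "frank", items: [] })
    // frank acquires item 42: push its _id onto his array
    > db.users.update({ _id: "frank" }, { $push: { items: 42 } })
    // everything frank owns:
    > db.users.findOne({ _id: "frank" }).items
    // every owner of item 42 (the "reverse join"):
    > db.users.find({ items: 42 })
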
[15:56:16] <andywdc> are views part of mongodb
[15:56:49] <remonvv> kchodorow, no, the fact that in some situations for MongoDB {x:1, y:1} isn't equal to {y:1, x:1}
[15:57:41] <remonvv> i'm off, nn
[15:58:35] <noordung> Hi! I need some help choosing an appropriate schema design...
[15:59:01] <skiz__> I'm working from http://www.mongodb.org/display/DOCS/Sharding+Limits#ShardingLimits-Stepstoshardanexistingcollection (along with 20 other refs) however #7 never seems to happen by itself. is there an easy way to kick it off?
[15:59:46] <noordung> I need to be able to store large amounts of text in a document, and that text may change significantly between saves. What would be the best way to design the schema for that kind of documents? Should I resort to GridFS?
[16:00:25] <gigo1980> db.foo.copyTo("foo2") blocks the whole mongo cluster, is that correct?
[16:00:25] <noordung> Large amounts of text = as much as would fit in a document's size limit
[16:02:56] <gigo1980> @noordung : store it in gridfs. there is no limit
[16:03:04] <gigo1980> regular limit is 16mb each document
[16:03:42] <noordung> gigo1980, I'm slightly more concerned about the relocations that MongoDB would have to make on saves than about the limit...
[16:04:52] <gigo1980> why don't you point from your mongo document to the gridfs documents?
[16:05:57] <noordung> gigo1980, that is an option high on the list, but I was thinking on something more integrated... 16MB should be enough for my 'large' needs... at least initially...
[16:06:14] <noordung> gigo1980, I wouldn't like to do extra queries...
[16:07:28] <noordung> gigo1980, It's just that between writes, Mongo may need to deal with size jumps of megabytes... Say 1MB of text becomes 4MB... Those can be expensive, from what I know...
[16:11:43] <IAD> noordung: look at http://www.mongodb.org/display/DOCS/GridFS
[16:12:36] <noordung> IAD, Just a question... GridFS is pre-configured to avoid size jumps, correct? It uses the 256k chunks?
[16:16:16] <anthezium> hey i have a collection with lotsa documents (like 5mm) and i'm trying to export it to json, but mongoexport seems content to export 0 records when i run it: https://gist.github.com/3722945
[16:17:36] <IAD> noordung: http://sg.php.net/manual/en/mongo.configuration.php#ini.mongo.chunk-size
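
For reference, GridFS stores a file as a metadata document plus fixed-size chunk documents (256KB each by default in this era); the bundled mongofiles tool is the quickest way to try it:

    $ mongofiles -d mydb put big-text.txt
    $ mongofiles -d mydb list
    $ mongofiles -d mydb get big-text.txt
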
[16:17:52] <wwilkins> anthezium: try mongodump perhaps? or maybe you can use skip and next in the export query option.
[16:18:12] <noordung> IAD, I see...
[16:18:21] <skiz__> Y U NO MIGRATE? grr
[16:18:22] <anthezium> mongodump will only give me bson, right? how would i use skip and next in this situation?
[16:19:09] <wwilkins> anthezium: I think the export has a size limit, so get it to pump out enough records to get under that limit and loop through the whole collection.
[16:19:20] <anthezium> o word
[16:21:50] <anthezium> wwilkins: how do i bake those into a query? i can only find examples of skip using the js driver
[16:22:48] <IAD> noordung: so, it looks like it's not important: "mongod will only use the space it really use. There is no need to set " https://groups.google.com/forum/?fromgroups=#!topic/mongodb-user/hrRlhOwGWWk
[16:23:06] <NodeX> anthezium : mongodump also takes a -q parameter
[16:23:14] <wwilkins> anthezium: no clue I'm sorry to say, I'm just trying to throw out ideas.
[16:23:53] <anthezium> yeah skip and limit are both driver-level ideas, can't be expressed in query language. query language can only do like $orderby, $hint, $explain, etc.
[16:24:05] <NodeX> but you can't use skip on mongodump
[16:24:14] <anthezium> yeah but mongodump works
[16:24:23] <anthezium> for large collections
[16:24:37] <NodeX> if you have timestamped data just work out the intervals and -q ranges
[16:24:58] <noordung> IAD, Do you know how performant GridFS is for reads/writes?
[16:25:12] <anthezium> i guess i could just use mongodump and figure out how to convert the bson to json
[16:25:15] <anthezium> what a pain
[16:26:14] <NodeX> mongoexport takes a --query parameter too
[16:27:05] <anthezium> yes but you can't express skip or limit inside the query
[16:27:12] <anthezium> unless there's some undocumented way to do it
[16:27:37] <IAD> noordung: nginx + nginx-gridfs look good http://www.coffeepowered.net/2010/02/17/serving-files-out-of-gridfs/
[16:29:00] <NodeX> [17:23:24] <NodeX> if you have timestamped data just work out the intervals and -q ranges
[16:29:12] <NodeX> ;)
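
What NodeX is suggesting, sketched: page through the collection with -q range queries on a timestamped field instead of skip/limit (field name and values hypothetical):

    $ mongoexport -d mydb -c mycoll \
        -q '{ "created_at": { "$gte": 1347580800, "$lt": 1347667200 } }' \
        -o part1.json
    # advance the range for each subsequent file until the collection is covered
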
[16:29:27] <noordung> IAD, just so I'm in the clear, I can modify GridFS files, right?
[16:33:34] <noordung> IAD, I see now that you can, and it creates versions... :)
[16:36:30] <noordung> Since GridFS can be used as a versioned filestore, how does it handle different versions? Are chunks reused?
[16:52:26] <anthezium> NodeX: come on that is an insane kludge
[16:52:28] <anthezium> :)
[16:57:04] <skiz__> can someone give me a hand trying to get existing data on one shard to migrate to another? the chunks look good, but everything is still on the first shard. how can I get it to begin migrating?
[19:48:22] <mgriffin> why does mongo (on EPEL at least) depend on libpcap?
[19:55:15] <mgriffin> ah mongosniff
[19:55:54] <R-66Y> is there a way to update every element of an array in a document in one query?
[20:06:40] <jiffe98> I've set up a new instance of mongodb on a server, when I connect to it via --host of 127.0.0.1 or its IP it works fine but if I connect to it by hostname it times out
[20:06:51] <jiffe98> dns is working fine on that machine, I can resolve the hostname locally
[20:08:25] <skiz__> jiffe98: check the bind address in the config
[20:08:33] <jiffe98> skiz__: its 0.0.0.0
[20:08:41] <jiffe98> and confirmed with netstat
[20:13:13] <bpoppa> hey guys
[20:17:53] <jiffe98> I have 2 machines that I am eventually going to replicate between, one works the other does not and I'm not seeing a difference
[20:20:07] <jiffe98> hmm, I can connect to it by hostname from another machine
[20:20:25] <jiffe98> so it sounds like a dns issue but dns works fine locally
[20:23:36] <jiffe98> I can also connect to the other machine from the non-working machine, I just can't connect to the local instance of mongodb
[21:09:14] <g-hennux> hi!
[21:09:52] <g-hennux> is there a plausible reason why two of my three mongodb users can't authenticate any more from one day to the next?
[21:10:48] <g-hennux> the log says "auth: couldn't find user admin" – this worked fine just a couple of days ago
[21:13:48] <noordung> GridFS isn't actually something *native* (per-se) to MongoDB, it is just an API over the actual MongoDB documents and collections, correct?
[21:16:59] <g-hennux> ok, apparently it's only mongodump that's not working
[21:22:02] <g-hennux> ok, if it's an r/w user, that works; if it's r/o, it doesn't
[21:22:06] <g-hennux> hmpf
[21:22:15] <g-hennux> at least for the last three days; it used to work before...
[21:24:18] <kchodorow> noordung: correct
[21:24:46] <noordung> kchodorow, so nothing is stopping me from implementing my own version of a GridFS-like system... :)
[21:31:25] <tomlikestorock> I'm trying to add a new member to my replset, and I can't successfully do it. I keep seeing this in the logs: ERROR: error processing ttl for db: mydbname 10065 invalid parameter: expected an object ()
[21:32:50] <tomlikestorock> also this: auth: couldn't find user myuser, mydbname.system.users
[21:39:25] <kchodorow> noordung: nothing at all :)
[21:39:36] <noordung> kchodorow, great then! :)
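
The layout noordung would be re-implementing is just two ordinary collections, roughly:

    // fs.files: one metadata document per file
    > db.fs.files.findOne()
    { "_id" : ObjectId("..."), "filename" : "big-text.txt", "length" : 1048576,
      "chunkSize" : 262144, "uploadDate" : ISODate("..."), "md5" : "..." }
    // fs.chunks: the payload, one document per chunk
    > db.fs.chunks.findOne({}, { data: 0 })
    { "_id" : ObjectId("..."), "files_id" : ObjectId("..."), "n" : 0 }
    // a reader reassembles a file in order:
    > db.fs.chunks.find({ files_id: someId }).sort({ n: 1 })
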
[21:39:42] <tomlikestorock> I guess I don't fully understand the procedure for adding members to the replica sets with auth turned on. :(
[21:39:52] <kchodorow> tomlikestorock: what are you running to add it?
[21:40:07] <tomlikestorock> rs.add
[21:40:14] <kchodorow> yes... what are you passing it
[21:40:16] <kchodorow> ?
[21:40:54] <tomlikestorock> I set the config to use auth, use the key file, and set the replSet. Then I bring up mongo on the new box. Hop over to the primary, and run rs.add("newhostname:27017")
[21:41:54] <tomlikestorock> before I turn on auth, I add my admin user to the system
[21:42:37] <kchodorow> tomlikestorock: can you pastebin the errmsg from running rs.add and the log from the primary?
[21:43:16] <tomlikestorock> uh, hm
[21:43:22] <tomlikestorock> guess I just had to wait? It's syncing now?
[21:45:26] <doubletap> i can't seem to connect to a remote mongod instance using mongo
[21:45:50] <tomlikestorock> kchodorow: just to be clear, I don't need to add any other users to my new replica box when I want to add it to the set, right? I just add my admin user for my own purposes, then go to the primary and say to add?
[21:46:17] <doubletap> i use the form as described in the documentation but i get errors that the options i am using (-u, -p, or --username, --password) are not valid.
[21:46:26] <doubletap> is there something i am missing here?
[21:46:46] <doubletap> i have the latest version of mongodb as of today.
[21:51:45] <doubletap> the error i get is "unrecognized option `--username'"
[21:53:27] <doubletap> it is odd because the docs have options that mongo does not show me when i just type "mongo"
[21:53:45] <doubletap> notably, username and password are missing.
[21:54:08] <doubletap> is there a reason my instance of mongo would be missing those options?
[21:56:34] <doubletap> anyone know why my instance of mongo has different options than what is in the docs?
[21:59:02] <jiffe98> anyone know why I could connect to a mongodb server via 127.0.0.1 and the external IP but not the hostname, other than a dns problem?
[21:59:25] <jiffe98> I can resolve the hostname fine from the local machine but it times out when trying to connect
[22:25:51] <jiffe98> alright, I can connect if I pass '--norc --nodb'
[22:26:08] <jiffe98> but then show dbs gives me 'Fri Sep 14 16:25:46 ReferenceError: db is not defined src/mongo/shell/utils.js:1475'
[22:29:42] <jiffe98> it works with just --nodb also, but times out otherwise
[22:52:31] <kchodorow> tomlikestorock: yeah, it doesn't need any other users
[22:53:02] <kchodorow> jiffe98: did you pass --bind_ip as an option when you started mongod?
[23:43:55] <statim> is it possible to do a query using the properties of the document itself? for example, a document with cleared_at: 1234, last_message_at: 5678, and i'd want to run a query returning documents that have cleared_at < last_message_at
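
One era-appropriate answer (not given in the log): the $where operator evaluates JavaScript per document, so it can compare two fields of the same document, at the cost of being slow and unable to use indexes:

    > db.coll.find({ $where: "this.cleared_at < this.last_message_at" })
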