PMXBOT Log file Viewer


#mongodb logs for Thursday the 30th of May, 2013

[00:01:59] <scottbessler> does a rs node that is in RECOVERING accept writes? (e.g. would wc majority fail if i have 1 primary and 2 recovering) ?
[01:02:15] <sinusss> hi. will I have issues if I delete all (children), then re-add them instead of updating each child?
[01:33:44] <konr> What's currently the best web interface for mongodb?
[01:34:26] <sinusss> konr i am using rockmongo
[01:34:45] <sinusss> works nicely... I can CTRL + F on items and it feels like phpmyadmin (i like phpmyadmin)
[01:36:48] <konr> sinusss: Thank you!
[01:37:17] <sinusss> ask other people. i just started using mongodb 3 days ago lol...
[01:37:43] <konr> Haha, it was the one I was looking for!
[01:37:51] <konr> My coworker said it was abandoned, though
[01:39:03] <sinusss> https://github.com/iwind/rockmongo
[01:40:50] <sinusss> that's too bad though :(
[01:47:49] <ixti> http://docs.mongodb.org/ecosystem/tools/administration-interfaces/
[01:49:08] <ixti> http://github.bagwanpankaj.com/humongous/
[01:56:20] <sinusss> ixti so humongous is better than rockmongo?
[01:56:36] <ixti> never used - just proposed alternative
[01:57:23] <ixti> i never liked phpmyadmin as well
[01:57:33] <ixti> terminal is my passion :D
[01:57:40] <sinusss> ah :P
[03:37:36] <frozenlock> Hello gentlemen! I want to configure one of my machines to run a replica set. However, mongod starts automatically when Ubuntu boots (without the --replSet argument). Do I need to remove the auto-start, or is there a way to change the default configuration?
[03:44:01] <skram> frozenlock: default config should be somewhere like /etc/mongodb.conf
[03:46:28] <frozenlock> Ah! Thank you! I was looking at the /etc/init/mongodb....
[04:55:59] <frozenlock> Hmmm... my empty secondary doesn't seem to fetch the data from the primary... or at least not very fast (I'm looking at my network usage). How can I check if the secondary is indeed catching up?
[05:02:01] <skram> Have a question. I have a master-slave setup with a load balancer (which I have, just disabled). If traffic and requests are sent to the secondary, it won't accept writes, correct? How would I handle this?
[05:04:50] <belak51> I'm just getting into MongoDB and I've been playing around with it… what's the best way to store files and have a reference to them in a record?
[05:04:55] <belak51> I know I can use GridFS to store them, but how is that commonly stored with other data?
[05:51:11] <badshaah> Hey, I am trying to start mongodb in clustered mode with replicaSet, on openvz
[05:51:32] <badshaah> it fails to start without giving anything in the logs. How do I tackle it?
[06:42:37] <belak51> Does a GridFS ContentType have to be a valid mime, like text/xml? Or application/pdf? Or can it just be something like the extension?
[07:36:38] <resting> so i'm looking at the example of mapreduce http://docs.mongodb.org/manual/reference/command/mapReduce/#calculate-order-and-total-quantity-with-average-quantity-per-item
[07:36:48] <resting> but where do i enter all these commands?
[07:40:18] <crudson> resting: examples are in javascript, mongodb's scripting language, so they can be entered into the shell 'mongo'. If using a different language consult that language's client API docs for syntax.
[07:43:27] <resting> crudson: i see…is there any way to store these javascript commands as some sort of script file and have our application execute it instead?
[07:45:24] <Nodex> resting : you would be better off using the Aggregation Framework
[07:48:12] <resting> Nodex: thanks…so much stuff…think i'm losing my ability to read…*continues to stare at screen
[07:49:25] <resting> so the aggregation framework implies the use of .aggregate()?
[07:49:38] <crudson> resting: you can put in a file and do: load('somefile.js') from the console, that will make the functions available in the shell
[07:50:29] <crudson> makes multi-line editing easier rather than 'edit function' in shell which is volatile and discards unparsable edits
[07:51:22] <Nodex> resting : yes, you build a query and it returns your result, it's much faster than map/reduce
[07:51:35] <crudson> if your "application" wants to use them, then that depends on your architecture - http://docs.mongodb.org/manual/core/server-side-javascript/ may help you
[07:51:53] <resting> crudson: thanks..that might come in useful
[07:52:08] <resting> Nodex: cool…will try that out first
[07:52:57] <crudson> but as nodex implies - the problem you are trying to solve may determine whether to use m/r or aggregate.
[07:58:55] <resting> hm…there is no $sum for aggregate?
[08:00:16] <crudson> resting: yes, when you use $group http://docs.mongodb.org/manual/reference/aggregation/sum/#grp._S_sum
[08:00:44] <crudson> http://docs.mongodb.org/manual/reference/aggregation/#group-operators
[08:02:00] <resting> crudson: oo…so group is sum..thanks…
[08:02:34] <resting> wrong..group has sum :)
[08:02:46] <crudson> resting: well, it can be sum...see above docs
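The distinction crudson draws ($group is the stage, $sum is an accumulator inside it) can be illustrated with a pure-Python sketch of what a `{$group: {_id: ..., total: {$sum: ...}}}` stage computes. The collection shape and the field names ("cust_id", "amount") are hypothetical; on a live server this would be an `aggregate()` call.

```python
# Pure-Python sketch of a $group stage with a $sum accumulator.
# Field names "cust_id" and "amount" are made up for illustration.
docs = [
    {"cust_id": "A", "amount": 50},
    {"cust_id": "A", "amount": 25},
    {"cust_id": "B", "amount": 100},
]

def group_sum(documents, key, field):
    """Group documents by `key` and sum `field`, roughly what
    {$group: {_id: "$key", total: {$sum: "$field"}}} returns."""
    totals = {}
    for doc in documents:
        totals[doc[key]] = totals.get(doc[key], 0) + doc[field]
    return totals

totals = group_sum(docs, "cust_id", "amount")
print(totals)  # {'A': 75, 'B': 100}
```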
[08:03:42] <r0goyal> I am trying to set up a mongodb replica set on openVZ machines. The Mongo instances work fine in standalone mode, but as soon as I set them up for replica mode, they fail to start. What could be the problem?
[08:04:12] <Nodex> check the logs
[08:04:27] <r0goyal> nothing is there in logs file
[08:04:40] <r0goyal> neither in /var/log/mongodb/mongodb.log
[08:04:46] <resting> crudson: ya..more like has $sum that returns sum...
[08:04:55] <r0goyal> nor in /var/log/messages
[08:05:07] <r0goyal> Nodex: nothing is there in logs file
[08:05:15] <Nodex> define "it fails to start"
[08:05:40] <r0goyal> Nodex: its giving this message
[08:05:44] <r0goyal> Nodex: Starting database: mongodb failed!
[08:06:04] <Nodex> there has to be something in the log, mongo doesn't just error without some verbosity
[08:06:30] <r0goyal> Nodex: that's the weird thing..no logging at all is there.
[08:07:19] <r0goyal> Nodex: I even set verbosity to vvvvv in config file.
[08:21:10] <resting> hm..so i'm trying to sum a field within the document which is a huge object…packets: { "0": "0", "1": "200", "2": "133" ….. }
[08:21:20] <resting> db.mb.aggregate({ $match: { capture : "1369885967" } }, {$group: { _id:null, sum: {$sum:"packets"}}});
[08:21:37] <resting> sum returns 0...
[08:22:17] <resting> what is the correct command to sum the packet object?
[08:23:29] <belak51> With mongo for web apps, how are the records usually retrieved? With SQL, there's usually a unique int id, but is it good practice to expose the _id of a mongo object?
[08:25:02] <resting> or maybe i have to iterate through the packets object? :O
[08:41:16] <Nodex> resting : is "capture" a string?
[08:42:49] <resting> Nodex: you mean the value its storing? yes..
[08:43:58] <resting> capture and packets are at the same level…however packets has 1 additional level with 2000 key-values
[08:45:38] <Nodex> it should be "$packets" if you want to sum the value of the key
[08:46:20] <Nodex> if "packets" is a sub-document then you either need to unwind it or use dot notation to reach into the object and grab the value you wish to sum
[08:47:00] <Nodex> http://docs.mongodb.org/manual/tutorial/aggregation-examples/#states-with-populations-over-10-million <--- good example to follow
[08:50:21] <resting> Nodex: i don't think it's a sub-document..it is a key, "packets", with multiple key-values..is that considered a sub-document?
[08:50:37] <resting> tried with $packets, still getting 0..
[08:51:26] <resting> i'm just afraid it doesn't traverse the "packets" object
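Note that in resting's sample the packet counts are strings ("0", "200", "133"), which an aggregation $sum would treat as 0 anyway. A client-side sketch of the sum he is after, casting the values first (the document shape is taken from his own example above):

```python
# Client-side sum over the "packets" object with numeric-string keys
# and numeric-string values, as pasted in the channel.
doc = {
    "capture": "1369885967",
    "packets": {"0": "0", "1": "200", "2": "133"},
}

# Cast each value to int before summing; $sum on strings yields 0.
total = sum(int(v) for v in doc["packets"].values())
print(total)  # 333
```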
[08:58:25] <Mr_O> hi
[09:00:42] <Mr_O> i just imported a large har / json file in a 1 record collection. I'd like to dive into this record structure to export inner arrays into proper collections.
[09:03:41] <Mr_O> i'm new to mongodb and i need a starting point. can i expose the inner structure of a record ?
[09:07:44] <resting> Mr_O: db.collection.find()
[09:09:36] <Mr_O> resting: find would return the whole object, not its inner structure
[09:10:08] <Mr_O> resting: as i understood it in the manual...
[09:13:31] <resting> Mr_O: i'm not sure if i understood "inner structure" try findOne in the shell in that case..
[09:19:11] <Mr_O> resting: i imported into mongodb a large json file into one big record. finding it makes no sense as there is only one record. i need to dive into its sub-doc hierarchy to be able to extract some of those sub-docs.
[09:22:58] <Mr_O> resting: using mysql i could ask the table description, using mongdb, how can i see a record description ?
[09:24:27] <resting> Mr_O: i see…you want to see the keys without the values? hm…i've no idea too..
[09:25:59] <Nodex> resting : you need to $unwind your sub document or "reach" in to it with dot notation
[09:26:29] <Nodex> Mr_O : your application can get all keys from a document - it's a trivial thing to do
[09:27:18] <resting> Nodex: i can't $unwind…guess its only for arrays…no hope for object?
[09:27:36] <Nodex> then reach into the object
[09:27:46] <Nodex> it's hard to advise when I don't know what the doc looks like
[09:28:34] <resting> Nodex: :)…understand…i'd posted to SO too…http://stackoverflow.com/questions/16832035/mongodb-is-it-possible-to-aggregate-an-object
[09:29:30] <resting> lets say i could reach in, but that targets only 1 key…but i need to sum all the keys
[09:29:55] <Mr_O> Nodex: that is the issue. I don't know exactly what the doc looks like. i hoped mongo would offer a "describe table"-like function
[09:31:15] <Nodex> Mr_O : it's trivial to do in your application code
[09:32:26] <Nodex> resting : the answer you have been given is correct
[09:33:02] <Nodex> you need to adapt your data, it's also a waste of space to have numeric keys of 0,1,2,3.... etc in an object when an array will suffice
[09:33:09] <kali> Mr_O: https://github.com/variety/variety have a look at this
[09:35:22] <Mr_O> kali: thx
[09:39:24] <resting> Nodex: hm…i see…can i say array is easier to work with? at least in my case thats what i feel..
[09:39:57] <resting> which would make me wonder when to use an object instead
[09:42:07] <resting> or would mapreduce work?
[09:48:40] <resting> shall cont tml..
[10:13:51] <Nodex> anyone know if $or queries can use a compound index on _id + 1 more field
[10:14:21] <Nodex> for example I want to do db.foo.find({$or:[{_id:ObjectId("....")},{username:"foobarbaz"}]});
[10:14:33] <Nodex> I'm just not seeing how efficient it will be
[10:15:14] <Nodex> in fact, scratch that
[10:17:10] <Nodex> Derick : is there a way to validate a MongoId in the php driver, for example I would like to feed it a string and have it tell me if it is or isn't a valid ObjectId before I send it to the server
[10:17:17] <Nodex> that's a question btw !!
[10:21:09] <Nodex> I could regex it I know but it's not really bulletproof
[10:25:15] <Derick> Nodex: what do you mean by "valid"? They're just 12 bytes of data.
[10:25:27] <Derick> and doesn't the driver warn if you do it wrong now?
[10:25:46] <Nodex> yes but that's not what I am asking for
[10:25:57] <Derick> oh?
[10:26:11] <Nodex> I have a query that can take an _id or a string... for example lookup by _id or by username
[10:26:50] <Nodex> and doing an $or will throw the error and is not efficient so... I would like to test the variable and do the query one way if it's an _id or the other way if it's a normal string i/e username
[10:27:04] <Derick> preg_match('/^[0-9a-fA-F]{24}$/') should do
[10:27:20] <Nodex> yes, I understand that but it's not bullet proof - this is what I currently use
[10:27:34] <Nodex> i/e usernames can match that regex
[10:27:38] <Derick> why is it not bullet proof? It's all what the driver would be doing as well.
[10:27:50] <Nodex> I thought there might be a way to check it with the driver is all
[10:27:52] <Derick> if the driver needs to validate, then it needs to talk to the server...
[10:28:10] <Nodex> I'll just ban usernames that may match it - no problemo :D
[10:28:16] <Derick> exactly :D
[10:28:37] <Nodex> do _id's always start with a number?
[10:28:43] <Derick> well, no
[10:28:49] <Nodex> didn't think so :/
[10:28:51] <Derick> it's possible they start with A-F too
[10:28:57] <Nodex> dang
[10:29:00] <Derick> but only far-ish in the future
[10:29:10] <Derick> the first 4 bytes are dechex(time());
[10:30:03] <Derick> right now you should only see them starting with a 3, 4, or 5
[10:30:04] <Nodex> I did try to getTimestamp() from new MongoId('abcd'); but it returns the current timestamp
[10:30:26] <Nodex> as this would be another way to test - i/e if the timestamp is in the past then it MUST be a valid _id
[10:30:45] <Derick> it returns current timestamp when you pass in rubbish?
[10:31:07] <Nodex> yeh - new MongoId('1234') returns an ObjectId something that starts with 5
[10:31:23] <Nodex> which is (I assume) the current _id of a new MongoId()
[10:31:24] <Derick> I get a fatal error
[10:31:33] <Derick> php -r '$a = new MongoId('1234'); var_dump($a);';
[10:31:34] <Derick> Fatal error: Uncaught exception 'MongoException' with message 'Invalid object ID' in Command line code on line 1
[10:31:39] <Nodex> ah, this server might not be on the driver version
[10:31:55] <Nodex> good to know :D
[10:32:19] <mtsr> Hi, I'm trying out the new text search feature, but I'm having issues with stemming. Even with language: 'none' it still seems to be doing some stemming, as searching for 'tests' doesn't work, while 'test' does, even though the text contains 'tests'
[10:32:21] <Nodex> out of interest why does it return a fatal error?
[10:32:38] <Derick> Nodex: it throws an exception that I didn't catch
[10:33:06] <Nodex> ok so that's another way I can test it, catch the error and set the criteria in the catch() part
[10:33:17] <Derick> yeah
[10:33:25] <Nodex> I think that's more solid
[10:33:32] <Nodex> thank you ;)
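The regex check Derick suggests, and his point about the timestamp prefix, can be sketched in a few lines of Python. Note Nodex's own caveat holds: any 24-hex-character username also passes the regex, which is why he decided to ban such usernames.

```python
import re

# 24 hex characters, per Derick's preg_match('/^[0-9a-fA-F]{24}$/')
OID_RE = re.compile(r'^[0-9a-fA-F]{24}$')

def looks_like_object_id(s):
    """Cheap client-side check; NOT bulletproof, as discussed above."""
    return bool(OID_RE.match(s))

# The first 4 ObjectId bytes are the creation time, dechex(time()),
# so in May 2013 every fresh _id's hex string starts with "51".
ts_prefix = format(1369885967, 'x')
print(ts_prefix)  # '51a6cd0f'
```

A 24-hex username like `"abcdefabcdefabcdefabcdef"` would still pass `looks_like_object_id`, which is exactly the hole the regex leaves open.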
[10:44:47] <Mr_O> hi
[10:45:01] <Derick> hi
[10:49:37] <Mr_O> i've got this one big document containing one field which is an array. i'd like to create a new collection made of this array's elements.
[10:50:18] <Mr_O> can it be achieved with the aggregate function ?
[10:50:26] <Nodex> no
[10:51:00] <Mr_O> Nodex: ok
[14:14:37] <marcqualie> Hey guys, what is the best way to check if config servers are in sync on a sharded cluster?
[14:15:12] <diabel232> is it true that mongodb is sometimes losing data?
[14:16:35] <kali> yes
[14:19:26] <aboudreault> lol
[14:20:06] <gremp> Hello, is there a way to create a function that queries a collection and modify the results before returning them ( without updating the records )?
[14:20:30] <Nodex> map/reduce
[14:21:17] <kali> and AF :)
[14:22:19] <BadDesign> How do I force mongoimport to interpret a column value as a string even though it's a number?
[14:22:58] <BadDesign> i.e. I want to import a CSV file whose first column has numeric values... i.e. (line1) 2132 (line2) 32435
[14:24:17] <BadDesign> but I want to store them in the document as "value": "2132" not "value": 2132
[14:27:28] <BurtyB> BadDesign, IIRC you can't
[14:28:05] <diegows> is it me, or does upsert in pymongo not work as expected?
[14:28:28] <BadDesign> BurtyB: when I get the field's value maybe I can determine if it's a string or a number and cast it
[14:28:59] <BadDesign> currently that column has string values as well as number values
[14:29:05] <BadDesign> when imported with mongoimport
[14:31:04] <BurtyB> BadDesign, as I said I don't think you can with mongoimport (when I was importing data I just wrote a script to insert it with the correct type)
[14:48:30] <diegows> question about mongo and pymong
[14:48:48] <diegows> collection.update(dict(mail_addr='test@example.com', enabled=True), dict(status="green"), upsert=True)
[14:48:57] <diegows> should produce the same result as
[14:49:26] <diegows> db.test_upsert.update({ mail_addr : "test@example.com", enabled : true }, { $set : { status : "diego" } }, { upsert: true })
[14:49:33] <diegows> from mongo console
[14:49:34] <diegows> right?
[14:50:07] <diegows> I think I found a bug in Pymongo or I'm missing something
[15:12:14] <fxhp> diegows: I don't think so
[15:12:39] <fxhp> I would do the following in pymongo:
[15:12:45] <diegows> fxhp, I found the problem in pymongo
[15:12:52] <diegows> I mean, I was using it wrong
[15:13:02] <diegows> but the results are weird anyway
[15:13:24] <fxhp> agreed
[15:13:41] <diegows> collection.update(dict(mail_addr='test@example.com', enabled=True), { "$set": { "status": "green" } }, upsert=True)
[15:13:46] <fxhp> what I do is use very similar python syntax for my queries
[15:13:49] <diegows> that works
[15:13:54] <fxhp> right
[15:14:47] <fxhp> dict(mail_addr='test@example.com', enabled=True) == {'mail_addr':'test@example.com', 'enabled':True}
[15:15:02] <diegows> so the bug is in MongoEngine :P
[15:15:16] <fxhp> I think that syntax is better because it is closer how it looks in mongodb
[15:15:55] <diegows> if you run this script http://paste.ubuntu.com/5717042/
[15:16:00] <diegows> you'll see the weird effect
[15:16:10] <diegows> it's ok, the first update is wrong
[15:16:18] <diegows> but the results are weird
[15:27:51] <revoohc> When failing over a replicaSet (2.2.x), what happens to writes that are issued to the cluster while the cluster is electing a new master?
[15:33:28] <harenson> revoohc: https://www.youtube.com/watch?feature=player_embedded&v=u5V_xYBoPzs
[15:33:56] <harenson> revoohc: https://www.youtube.com/watch?feature=player_embedded&v=8pjEFAVzUKk
[15:34:31] <harenson> revoohc: https://www.youtube.com/watch?feature=player_embedded&v=tcLJxRSzVqU
[16:26:29] <diegows> fxhp, it wasn't a bug or something weird, works as expected :P
[16:27:52] <diegows> http://docs.mongodb.org/manual/core/update/#upsert-update-operators
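The behaviour diegows hit is the standard update semantics from the docs he links: an update document with no operators *replaces* the matched document, while one using `$set` only modifies the named fields. A pure-Python sketch of that rule (a mimic of the server behaviour, not pymongo itself; with a server it would be `collection.update(query, update, upsert=True)`):

```python
# Sketch of MongoDB update semantics: replacement vs $set.
def apply_update(doc, update):
    """Mimic how the server applies an update document to one doc."""
    if any(k.startswith("$") for k in update):
        # Operator form: keep existing fields, apply $set changes.
        new = dict(doc)
        new.update(update.get("$set", {}))
        return new
    # No operators: whole-document replacement (the _id survives).
    new = dict(update)
    if "_id" in doc:
        new["_id"] = doc["_id"]
    return new

doc = {"_id": 1, "mail_addr": "test@example.com", "enabled": True}
replaced = apply_update(doc, {"status": "green"})          # fields lost
modified = apply_update(doc, {"$set": {"status": "green"}})  # fields kept
print(replaced)
print(modified)
```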
[16:28:21] <astropriate> is it possible to pass values to the reduce function of map-reduce?
[16:28:48] <astropriate> i want to only get documents that meet a certain criteria
[16:32:23] <hjrnunes> Hi all! Is there any particular reason why cursor.count() should result in AutoReconnect: timed out while next() works fine for the same query?
[16:57:47] <olegz> hi guys
[16:58:25] <olegz> please give me a hint how to select documents with distinct values of a target field
[16:58:38] <olegz> but not with the distinct() operator
[17:12:41] <harenson> olegz: $addToSet
[17:16:22] <olegz> harenson, what are you on about? I am trying to fetch docs from the DB, not update
[17:22:27] <harenson> olegz: who says the $addToSet operator is just for updates?
[17:22:45] <olegz> ok, is any example?
[17:26:03] <harenson> olegz: maybe in 7 hours when I'm home. Sorry, but I'm working and here I don't have the data nor a mongod server running
[17:26:08] <harenson> olegz: but, search
[17:26:14] <harenson> you could do it that way
[17:26:41] <harenson> olegz: I made it xD
[17:27:58] <olegz> does your approach use aggregation framework?
[17:28:52] <harenson> olegz: you're lucky
[17:28:57] <harenson> olegz: check this http://pastebin.com/KDbgcHzC
[17:30:38] <olegz> heh I see
[17:30:53] <harenson> olegz: here is the database https://education.10gen.com/static/m101p-april-2013/handouts/enron.0708d010cd81.zip
[17:31:04] <olegz> I have solved this problem during M102
[17:31:29] <olegz> BUT i suggest bypassing AG
[17:31:36] <harenson> olegz: I solved it at the end of M101
[17:32:15] <olegz> harenson, probably, I can't remember, because I passed M101 and M102 at the same time
[17:32:25] <olegz> harenson, ok then, will use AG
[17:32:52] <harenson> olegz: I'm in week 4 of M102 xD
[17:33:46] <harenson> Lunch time for me!
[17:49:53] <astropriate> is it possible to pass values to the reduce function of map-reduce? i want to only get documents that meet a certain criteria. I am also referencing to to other documents using the _id field. is there some way to populate/join the document in the parent document and then run the map reduce?
[18:08:05] <daveluke> if i don't specify a field in the $set part of an update, does that field/value get removed?
[18:14:39] <belak51> What's the common way to store a reference to a GridFS file in a record such that the filename and extension can be retrieved from the file
[18:47:15] <daveluke> i have rows disappearing :(
[18:54:04] <jpfarias> hi guys
[18:54:20] <jpfarias> is there a way to make mongo pre-load some data to memory?
[18:54:46] <jpfarias> I have a geo query that when I run the 1st time takes ~ 20 secs to return
[18:54:50] <jpfarias> for a location
[18:55:04] <jpfarias> then next time I do the same query for that location it takes no time
[18:55:20] <jpfarias> like < 0.5 secs
[18:55:40] <jpfarias> I assume that is because it has it in the memory now
[18:56:08] <jpfarias> so I was wondering if it is possible to have it preload all of it to make all queries instant
[19:21:53] <tystr> I added a TTL index to an existing collection, and it seems to have dropped all of the existing data
[19:23:14] <tystr> the index had an expireAfterSeconds value of "604800", and was on a "createdAt" field containing ISODates
[19:39:11] <merpnderp> if you run mongodump with --dbpath will mongod crash, or just wait for the file lock to end?
[19:41:48] <merpnderp> nevermind, I see the oplog is still written to
[20:54:12] <kurtis_> Hey guys, I have multiple nested elements within my documents. There's a specific element name that I'm trying to dig in and retrieve data from. This element name exists in several 'sub-documents'. Is it possible to query for this data directly?
[20:57:02] <skram> @kurtis: i believe so, I'm fairly new to mongo, but couldn't you step through the hierarchy with find()?
[20:58:25] <kurtis> skram: Well -- I could use 'dot-notation' and definitely do that. But I'm hoping there's a more generic way to do this. For example "document.*.field-im-looking-for"
[20:58:49] <kurtis> It's sort of hard to explain ... but some Documents have certain fields and others don't
[20:58:59] <skram> that will probably work too, have you tried it?
[20:59:18] <kurtis> haha nope. i'll give it a shot
[20:59:34] <skram> cool, go for it
[21:01:11] <kurtis> hmm, no cigar
[21:08:14] <crudson> kurtis: no you can't do wildcard matching or xpath-style querying https://jira.mongodb.org/browse/SERVER-736 <- vote for that perhaps
[21:08:52] <kurtis> crudson: Thanks!
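Since crudson confirms the server offers no wildcard or xpath-style matching (SERVER-736), one client-side workaround is to fetch the documents and walk them recursively. A sketch; the field name "target" and the document shape are made up for illustration:

```python
# Recursively collect every value stored under a given field name,
# at any nesting depth, in dicts and lists alike.
def find_field(doc, name):
    if isinstance(doc, dict):
        for key, value in doc.items():
            if key == name:
                yield value
            yield from find_field(value, name)
    elif isinstance(doc, list):
        for item in doc:
            yield from find_field(item, name)

doc = {"a": {"target": 1}, "b": [{"target": 2}, {"c": {"target": 3}}]}
print(list(find_field(doc, "target")))  # [1, 2, 3]
```

This trades server-side filtering for application-side traversal, which only makes sense after a query has already narrowed down the result set.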
[21:19:06] <belak51> Hi, my IRC client died, so I didn't see if there was an answer, but what's the common way of storing files in GridFS such that the filename and extension can be retrieved? Also what's the common method of storing references to GridFS files in non GridFS records?
[21:31:55] <ackspony> belak, store your filename and extension in a separate place
[21:32:36] <ackspony> why are people so interested in stupid 'filenames' and extensions
[21:33:18] <ackspony> its a weird artifact of old-fashioned flat filesystems
[21:38:32] <belak51> ackspony: because if I need to give the file back to the user, they need to know what it actually is
[21:39:11] <belak51> ackspony: so, for referencing the file, would I just store the file's _id in another field of the object?
[21:39:13] <ackspony> and a filename magically lets them know
[21:39:17] <ackspony> ?
[21:39:31] <belak51> ackspony: no, an extension lets them actually open it though
[21:39:46] <ackspony> extensions are outdated
[21:39:56] <ackspony> m-type is far more appropriate
[21:40:20] <belak51> Yes, it's more appropriate but not used in that many places right now
[21:40:35] <ackspony> except every single web browser
[21:40:45] <ackspony> since the mid 1990s
[21:40:46] <belak51> Windows, OSX, and Linux all use extensions one way or another
[21:40:53] <rpcesar> ok, i have a question. I have a working sharded replica set consisting of 6 computers divided over 2 replica sets, with 2 sharded databases running. What I would like to do is entirely copy one of the databases and produce a secondary database with identical collections and sharding attributes, quite literally clone the database on the same replica set. What's the easiest way to do this?
[21:40:54] <ackspony> linux not so much
[21:40:59] <ackspony> they all use m-type
[21:41:06] <belak51> Not as much, but still somewhat
[21:41:08] <ackspony> all the major desktops
[21:41:22] <belak51> Yes, that's true
[22:06:19] <swaagie> is it me or can't you set the user role while doing addUser with the node.js mongodb client?
[22:24:32] <belak51> Is mongodb fault tolerant? I've been seeing a number of posts lately about how it tries, but doesn't really work. Are these true, or just a load of crap?
[22:29:28] <diegows> I have an array of numbers in my docs, and I want to sort the documents by some specific position of the array
[22:29:50] <diegows> the sort works, but Mongo complains that I need an index when I have a big collection
[22:30:04] <bundini> Seeing random crashes and pegged system recently - running 2.2.1. Our logs show thousands of " Thu May 30 00:10:46 [conn777] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: queue.txlog top: { err: "unauthorized" } " messages.
[22:30:15] <diegows> i tried with collection.ensureIndex( { array:1 })
[22:30:23] <diegows> or collection.ensureIndex( { array.0:1 })
[22:30:26] <diegows> and didn't work :(
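One likely issue with the second attempt is that a dotted key has to be quoted in the shell: `ensureIndex({"array.0": 1})`, not `{array.0: 1}`. The ordering diegows wants (documents sorted by one position of an embedded array) can be sketched client-side in Python; the document shapes are hypothetical:

```python
# Sort documents by the first element of an embedded array, the
# client-side analogue of sort({"array.0": 1}).
docs = [
    {"_id": 1, "array": [30, 5]},
    {"_id": 2, "array": [10, 7]},
    {"_id": 3, "array": [20, 1]},
]

by_first = sorted(docs, key=lambda d: d["array"][0])
print([d["_id"] for d in by_first])  # [2, 3, 1]
```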
[22:32:06] <bundini> Always with a particular connection as well. In some cases the connections (generally belonging to long running worker processes) eventually succeed (9 hour later in one case): Thu May 30 01:29:43 [conn424115] query queue.schedule query: { id: "QfyHZDozfviVA3HA" } ntoreturn:1 ntoskip:0 nscanned:13850 keyUpdates:0 numYields: 36 locks(micros) r:119108 nreturned:1 reslen:355 115ms
[22:32:11] <G________> anyone have experience with replica sets, and load balancer on rackspace?..
[22:46:58] <swaagie> why is it impossible to set roles with the node.js mongodb client
[22:46:59] <swaagie> -.-
[22:47:15] <bundini> I'm wondering what the err: "unauthorized" means as well. While googling the issue I found similar error messages but that particular attribute seemed unique to our situation.
[23:03:31] <skram> what does totalIndexSize() return, KBs or bytes?
[23:19:35] <bundini> skram - who was that question directed towards?
[23:19:59] <skram> anybody, but i figured it's in bytes. just ran into the documentation page for it
[23:53:31] <dw_> hello. can DBref processing be disabled in BSON/pymongo?