PMXBOT Log file Viewer


#mongodb logs for Wednesday the 26th of August, 2015

[02:02:09] <aaxxo> Can I request some help please? I have a query and want to pass output as json using express, but I am stuck with the correct syntax http://pastebin.com/fLBcpgsQ
[06:27:35] <synthetec> can you do mongodump to remote server
[06:30:12] <joannac> sure. install mongodump on the remote server and run it there
[08:04:55] <synthetec> is it possible to dump every database
[08:04:59] <synthetec> will just mongodump do that
[08:05:03] <synthetec> without specifying a collection
[08:07:14] <joannac> synthetec: http://docs.mongodb.org/manual/reference/program/mongodump/#cmdoption--db
[08:07:30] <joannac> " If you do not specify a database, mongodump copies all databases in this instance into the dump files."
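[Editor's note: a minimal sketch of the full-instance dump joannac describes. Host, port, and output directory are example values, not from the log.]

```shell
# With no --db or --collection, mongodump dumps every database on the instance.
mongodump --host remote.example.com --port 27017 --out /backups/full-dump

# Restore the whole dump later with mongorestore:
mongorestore /backups/full-dump
```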
[08:17:35] <synthetec> thank you so much joannac
[08:17:43] <synthetec> youre so damn helpful haha
[08:22:16] <synthetec> Failed global initialization: BadValue Invalid or no user locale set. Please ensure LANG and/or LC_* environment vari
[08:22:29] <synthetec> That was the output I just had when trying a mongodump
[08:22:37] <synthetec> does it just mean the user I tried was invalid?
[08:47:02] <SmitySmiter> hello
[08:47:41] <SmitySmiter> hey guys, I'm on a mac, yosemite, have mongodb installed and had been using it for a while until a few days ago
[08:47:57] <SmitySmiter> now, this is what happens when I run "mongo" in the terminal, http://paste.ubuntu.com/12197831/
[08:49:08] <SmitySmiter> any help guys?
[10:32:35] <student3102> I am trying to use the distinct command on the 3.0 driver but no luck, can someone give me an example of how to get the results of a single column into an array
[10:38:58] <student3102> anyone?
[10:41:41] <student3102> can someone please give me an example of the distinct command on the 3.0 driver in java, I am having trouble with TResult (got no idea what it stands for)
[10:49:46] <student3102> can I use both old and new 3.0 driver?
[10:51:13] <darius93> student3102, you could provide some code that you currently have and we could help you based on that
[10:52:25] <student3102> List states = database.getCollection("CC").distinct("State", );
[10:52:40] <student3102> I dont know what to put as second argument in distinct method
[11:15:03] <cheeser> why put anything, student3102 ?
[11:15:13] <student3102> its required
[11:16:07] <cheeser> oh, 3.0 added a parameter.
[11:16:13] <cheeser> Document.class is fine
[11:16:36] <cheeser> well, no.
[11:16:51] <cheeser> String.class here because you want an Iterator<String> i'm guessing
[11:17:58] <student3102> and in front? before =
[11:18:19] <cheeser> what you have should be fine. try it and see.
[11:18:53] <cheeser> well, states should be a List<String>
[11:19:02] <cheeser> are you new to java as well as mongodb?
[11:20:02] <student3102> it's not working.. and no I am not new to Java, I was just losing my head over this xD
[11:20:18] <cheeser> what does "not working" mean?
[11:20:27] <student3102> incompatible types
[11:20:54] <cheeser> oh, right. Iterable<String>
[11:21:01] <cheeser> check the API for such details
[11:21:03] <cheeser> http://api.mongodb.org/java/3.0/
[11:22:27] <student3102> exception : readString can only be called when CurrentBSONType is STRING, not when CurrentBSONType is NULL.
[11:23:15] <cheeser> i've not seen that one
[11:23:17] <student3102> let me take another look, brb
[11:23:27] <cheeser> sounds like maybe you have null in the State field
[11:23:55] <cheeser> if you make states a DistinctIterable instead, you can call filter() to remove those from the results
[11:25:14] <student3102> yeah, probably, tried with another field that has no null in it, and it worked, I will try now with distinct
[11:25:53] <charnel> Hi I want to get all the records in a collection and update to 2d box . Like getting lat and long and create a [lat, lng] what is the best way ?
[11:27:05] <student3102> same thing even after using filter
[11:27:18] <cheeser> you want to add that array to the documents based on other fields in the docs, charnel?
[11:27:56] <cheeser> student3102: run a distinct query in the shell and see what you get back
[11:30:52] <charnel> cheeser, yes
[11:31:39] <cheeser> you'll have to loop through the docs and update them in your app before writing them back.
[11:31:49] <cheeser> either via the shell or a quickie app to do that.
[11:33:04] <charnel> cheeser, I was coding a php script then wondered if there is a better way to do it in MongoDb side.
[11:33:38] <charnel> I've found the bulk.find.update but could not make sure if I could pass a function to that or not
[11:34:04] <cheeser> i don't think so but i'm not a php guy.
[11:34:14] <cheeser> i'm not sure what such a method would hook into on the server side.
[11:34:35] <charnel> got it thanks. -- bulk.find.update() was in mongo --
[11:34:48] <cheeser> in the shell?
[11:55:28] <student3102> @cheeser: it found null and ""
[12:02:39] <cheeser> "" is still a string, though. that shouldn't be considered null by anything.
[12:03:07] <student3102> how can I avoid null then?
[12:15:47] <cheeser> {state : { $ne : null }}
[12:16:34] <student3102> how do I implement it in my code?
[12:17:08] <cheeser> there's a filter() on DistinctIterable. you'd pass in the Document form of that.
[12:22:13] <student3102> @cheeser: it works, thanks a lot, you probably saved me hours!
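[Editor's note: a plain-JavaScript sketch of what the `{ State : { $ne : null } }` filter above accomplishes for a distinct query: collect unique values while skipping nulls. The collection/field names "CC"/"State" come from the discussion; the actual driver call is along the lines of `collection.distinct("State", String.class).filter(...)`.]

```javascript
// Collect distinct values of `field` across `docs`, dropping null/missing.
// As cheeser notes, "" is still a string and is kept.
function distinctNonNull(docs, field) {
  const seen = new Set();
  for (const doc of docs) {
    const value = doc[field];
    if (value !== null && value !== undefined) {
      seen.add(value);
    }
  }
  return Array.from(seen);
}

// Example: null is dropped, the empty string survives
const docs = [
  { State: "NY" },
  { State: null },
  { State: "" },
  { State: "NY" },
];
console.log(distinctNonNull(docs, "State")); // [ 'NY', '' ]
```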
[12:57:43] <no-thing_> hello, is it possible to see the progress of syncing from primary to secondary ?
[12:58:26] <kali> no-thing_: the secondary logs will give you a hint. nothing close to a progress bar or an ETA, but at least you can see stuff moving around
[13:00:45] <no-thing_> i see
[14:23:24] <sandstrom> When using mongoid, is it possible to use a criteria (selecting on properties) before the objects are saved? That would make the code easier to use across both tests and real code (where documents will be persisted)
[14:24:50] <revoohc> Does anyone know if the mongodump output is version safe? In other words, could I mongodump a database today, archive the results, and expect to be able to restore them 2, 5, 10 years in the future?
[14:26:47] <kali> revoohc: well, mongodump is just a bson dump, and so far bson has been remarkably stable.
[14:27:24] <kali> revoohc: i cannot test it, because i have no file from that time around, but i'm pretty sure a dump from 2009 would still load
[14:27:38] <kali> revoohc: as for the future... who knows ? :)
[14:35:34] <revoohc> kali: thanks
[15:10:24] <jordonbiondo> Hello, I'm having a strange performance issue across mongoose / mongo. In my code, my query takes about 13 seconds; however, when I log the query mongoose is doing and run it directly, the same query takes about .3 seconds. Has anyone had a problem like this?
[15:11:21] <StephenLynx> yeah, mongoose is slow as hell.
[15:11:28] <StephenLynx> it is expected to behave like that.
[15:11:44] <jordonbiondo> so: Something.find({simple: 'criteria'}).then(function() {... }) takes 13 seconds. But db.getCollection('Something').find({simple: 'criteria'}) takes .3 seconds
[15:11:47] <jordonbiondo> really? That bad?
[15:11:49] <StephenLynx> yes
[15:11:54] <StephenLynx> mongoose is that crap.
[15:12:36] <StephenLynx> I have yet to see any regular user here who doesn't crucify it.
[15:14:29] <kali> StephenLynx: i'm curious. any idea why it is so bad ?
[15:14:35] <StephenLynx> go figure.
[15:14:49] <StephenLynx> I have never bothered to read its source code.
[15:15:12] <StephenLynx> but I would bet it has something to do with the fact it tries to make mongo behave like a relational database.
[15:15:34] <StephenLynx> the very premise it is built upon is completely wrong.
[15:15:41] <cheeser> i've yet to see anyone but StephenLynx go ballistic at the mention of mongoose, actually.
[15:15:49] <StephenLynx> alice went once.
[15:16:23] <StephenLynx> something about against the wall and being shot
[15:16:27] <doc_tuna> im not as passionate but i'm against it too
[15:16:30] <cheeser> mongoose's biggest problem is that it's written in javascript ;)
[15:16:40] <Derick> ...
[15:16:41] <StephenLynx> no it's not.
[15:16:55] <StephenLynx> V8 is the single fastest scripted runtime environment in existence.
[15:17:19] <cheeser> that's a different question. js is still a terrible, terrible language.
[15:17:20] <doc_tuna> mmmm i wouldnt claim to know that
[15:17:26] <Machske> Does anyone know how to query for $set in the oplog ? => db.oplog.rs.find( { "ns": "test.Position","op": "u", "o": { "$set": { $regex: ".*pos.*"} } }, { "o": 1 } ). This results in "$err" : "Can't canonicalize query: BadValue unknown operator: $set"
[15:17:31] <StephenLynx> it isn't.
[15:17:49] <doc_tuna> lol cheeser this is not the place for subjective javascript hate
[15:17:50] <cheeser> Machske: how to what now?
[15:17:51] <StephenLynx> it has some quirks that are easily avoidable by using script mode.
[15:17:53] <Machske> but I need to know if $set is present and that it contains a pos field
[15:18:05] <StephenLynx> strict mode*
[15:18:06] <cheeser> doc_tuna: well, we're bashing on mongoose so why not? :D
[15:18:20] <StephenLynx> because mongoose is directly related to mongo.
[15:18:32] <StephenLynx> js is not.
[15:18:56] <StephenLynx> and even then, mongo relies heavily on js on some aspects.
[15:19:08] <StephenLynx> starting with the fact it uses binary JAVASCRIPT object notation.
[15:19:30] <StephenLynx> if you condemn js as a whole, you might as well stop using mongo too.
[15:19:38] <cheeser> poppycock!
[15:21:26] <Derick> less bashing, more fixing...
[15:21:49] <StephenLynx> the only fixing to mongoose is "rm -rf /"
[15:22:23] <kali> wow, that was a great troll bait
[15:22:33] <StephenLynx> what do you mean?
[15:28:03] <doc_tuna> alt+f4 for powerups to win the game
[15:39:29] <StephenLynx> oh
[15:39:46] <StephenLynx> that was not my intention, kali :v
[15:40:04] <StephenLynx> I wrote that assuming everyone reading would understand it
[16:08:47] <Doyle> How do you interpret the following currentOp lines? The lower case r indicates a database specific lock against ^ everything?
[16:08:50] <Doyle> "locks" : {
[16:08:50] <Doyle> "^" : "r",
[16:08:50] <Doyle> "^thundercats" : "R"
[16:09:18] <StephenLynx> pastebin pls
[16:14:46] <Doyle> http://pastebin.com/neQ4rSYJ StephenLynx
[16:17:44] <Doyle> The query is operating with database specific read lock, and the DB thundercats is globally locked?
[16:24:21] <jpfarias> anyone knows why mongo config server would go out of sync?
[16:24:51] <jpfarias> I am having to re sync one of my config servers once a day after I added a couple shard servers
[16:25:21] <jpfarias> it is pretty annoying and I dont know if I am losing consistency with this
[17:05:52] <aricg> Hi, the docs say that for remote auth I need to append the public ip to the conf's bind_ip =, however the public ip is not available on the machine with ipr, its all done with magic
[17:06:42] <aricg> how can I let developers in remotely? this is just for testing right now, only username password auth is required at this time.
[17:07:45] <aricg> ipr/ ip a
[17:14:54] <StephenLynx> actually
[17:15:00] <StephenLynx> I think you can just remove the bind ip
[17:15:03] <StephenLynx> completely
[17:15:16] <StephenLynx> and it will accept connections from any ip.
[17:15:21] <StephenLynx> aricg
[17:21:04] <aricg> StephenLynx: it works over vpn on the 10. private ip (which is available when you ip a on the mongo server)
[17:21:25] <StephenLynx> no idea, I don't know much about network in general.
[17:21:26] <aricg> StephenLynx: but still fails over the public ip (when bind_ip is commented out)
[17:21:32] <aricg> ah, okay thanks
[17:21:44] <StephenLynx> can you telnet on the server?
[17:21:48] <aricg> yeah
[17:21:48] <StephenLynx> that is a good test in general.
[17:21:52] <StephenLynx> then dunno
[17:22:05] <aricg> I can actually get into mongo but with no auth if I just leave out username and password
[17:22:12] <StephenLynx> ah
[17:22:16] <StephenLynx> so the networking is solved
[17:22:22] <StephenLynx> you just need to get the auth part right
[17:22:31] <aricg> ah, it's not denying me entrance, it really is bound to 0.0.0.0
[17:22:47] <aricg> it just won't let me auth if I'm not coming from an ip that the box owns.
[17:23:07] <aricg> which is a limitation of GCE, I can't get the ip on the box. its all magic
[17:28:20] <StephenLynx> but the thing is
[17:28:42] <StephenLynx> I don't remember mongo causing a fuss about authentication based on ip, but again, I have never messed too much with authentication
[17:30:41] <cheeser> i don't think mongo cares about the IP for auth at all.
[17:31:06] <cheeser> aricg: are you on 3.0+ ?
[17:31:23] <cheeser> well, really i don't suppose it matters.
[17:31:38] <cheeser> connect to the db you want then: db.auth("user", "pass");
[17:31:55] <cheeser> that will be admin on 3.0 and also 2.6, iirc
[17:50:09] <aricg> cheeser: Ah, you are right, im wrong about the bind stuff.
[17:50:11] <aricg> AuthenticationFailed MONGODB-CR credentials missing in the user document
[17:50:51] <aricg> I'm not sure I get this, I just installed a new version of mongo and then created the users. there shouldn't be a mix-up..
[17:55:35] <aricg> hmm. I think this is all bc my remote client is too old.
[18:43:35] <terminal_echo> anyone have any experience with converting mongodb to sql tables?
[18:48:05] <cheeser> heh. oh, dear. what's up?
[18:48:13] <terminal_echo> lol
[18:48:27] <terminal_echo> have some business guys who need a way to translate a mongodb to sql
[18:48:30] <terminal_echo> for sharepoint
[18:48:49] <cheeser> for reporting?
[18:48:56] <terminal_echo> yeah
[18:49:13] <terminal_echo> everything i've seen online says to export the mongo data to csv files
[18:49:21] <terminal_echo> create sql table skeletons and then import
[18:49:33] <StephenLynx> I wouldn't save to a file in the first place.
[18:49:50] <StephenLynx> I would read from mongo and write to the sql db directly
[18:49:57] <cheeser> we're working on bridging that gap...
[18:50:05] <doc_tuna> yeah script/stream it
[18:50:10] <doc_tuna> do your writing while you're reading
[18:50:37] <terminal_echo> can you elaborate
[18:52:26] <doc_tuna> are you using node?
[18:52:34] <terminal_echo> no we are using flask
[18:53:20] <doc_tuna> i'm assuming it allows you to read documents from a collection in a loop
[18:53:39] <terminal_echo> yes
[18:54:34] <doc_tuna> and can you do deferred writing to mysql, i.e. send a query and proceed to reading your next mongo document without waiting for the callback from sql?
[18:55:42] <terminal_echo> are you saying translate the sql query into a mongo query?
[18:55:42] <doc_tuna> you may not want to let more than x number of callbacks pile up, if things go into sql slower than they come out of mongo
[18:56:22] <doc_tuna> i'm just saying you can skip the mongo->file, file->sql by writing a scrip that reads from mongo and writes to your sql table
[18:56:27] <doc_tuna> *script
[18:56:36] <doc_tuna> table(s)
[18:56:45] <terminal_echo> ah yeah
[18:57:05] <terminal_echo> we talked about doing something like this
[18:57:25] <doc_tuna> that code could be easily reused for real-time propagation as well
[18:57:38] <doc_tuna> (one advantage of that approach)
[18:58:07] <terminal_echo> the problem there is the object_id's translated to sql foreign keys
[18:58:11] <terminal_echo> gets a bit nasty
[18:58:50] <doc_tuna> yeah you have to map somehow and keep the map consistent
[19:00:57] <terminal_echo> i think we are going to use sqlalchemy to create exact model and then write to sql through it
[19:01:32] <StephenLynx> you could just have the _id be ignored and use the sql's auto increment.
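[Editor's note: a minimal sketch of the id map doc_tuna mentions, combined with StephenLynx's auto-increment suggestion, assuming a node-style migration script. All names are illustrative; in a real script the SQL key would come back from the INSERT rather than a counter.]

```javascript
// Translate Mongo _id references into SQL foreign keys: remember which
// auto-increment key each _id (as a string) was assigned on insert, so
// later documents can resolve their references consistently.
function makeIdMapper() {
  const map = new Map();
  let nextSqlId = 0; // simulates the SQL auto-increment counter
  return {
    // Called once per document when it is inserted into SQL.
    assign(mongoId) {
      const key = String(mongoId);
      if (!map.has(key)) {
        map.set(key, ++nextSqlId);
      }
      return map.get(key);
    },
    // Called when another document references this _id.
    lookup(mongoId) {
      return map.get(String(mongoId)); // undefined => dangling reference
    },
  };
}

const ids = makeIdMapper();
ids.assign("55dd0001");
ids.assign("55dd0002");
console.log(ids.lookup("55dd0001")); // 1
```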
[19:02:37] <geneous> gents, having mucho trouble with connection growth: http://stackoverflow.com/questions/32212027/mongodb-connections-number-growth-over-pools-limit-in-grails-app ... all wise counsel is mucho appreciated
[19:03:34] <terminal_echo> StephenLynx: yeah, what i mean is not maintaining the same id's but the foreign-key relationships that make SQL table relationships faster
[19:03:36] <terminal_echo> the indexing stuff
[19:03:37] <cheeser> don't open a new connection for every request.
[19:03:47] <terminal_echo> no the insert statements need to be staggered
[19:04:01] <terminal_echo> i forget what they call that, like 100 insert statements at a time instead of one by one
[19:04:06] <terminal_echo> more like 1000 lol
[19:04:11] <StephenLynx> welp
[19:04:21] <terminal_echo> lo
[19:04:22] <terminal_echo> lol
[19:06:02] <cheeser> "batched?"
[19:06:25] <terminal_echo> chunking lol
[19:07:17] <doc_tuna> i think the technical term is puking
[19:07:22] <doc_tuna> puking your inserts
[19:07:34] <terminal_echo> technical term yes
[19:09:12] <doc_tuna> :]
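[Editor's note: the "chunking" / batching terminal_echo is reaching for is usually just grouping documents into fixed-size batches and issuing one multi-row INSERT per batch. A minimal sketch; the batch size and flush callback are illustrative.]

```javascript
// Group an incoming stream of documents into batches of `size`,
// calling `flush` with each full batch (and once more for the remainder).
function batchInserts(docs, size, flush) {
  let batch = [];
  for (const doc of docs) {
    batch.push(doc);
    if (batch.length === size) {
      flush(batch); // e.g. build one INSERT ... VALUES (...), (...), ...
      batch = [];
    }
  }
  if (batch.length > 0) {
    flush(batch); // final partial batch
  }
}

// Example: 5 docs in batches of 2 -> batch sizes [2, 2, 1]
const sizes = [];
batchInserts([1, 2, 3, 4, 5], 2, (b) => sizes.push(b.length));
console.log(sizes); // [ 2, 2, 1 ]
```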
[19:50:32] <sewardrobert> We are using mongodb with wiredTiger, however mongo will not restart after a server reboot and software upgrade. The boot fails with a backtrace. Any advice?
[19:50:45] <cheeser> check the logs
[19:51:00] <cheeser> try starting mongod manually with the correct config file.
[19:51:07] <cheeser> might be an errant lock file
[19:51:26] <jpfarias> anyone knows why mongo config server would go out of sync?
[19:51:28] <jpfarias> I am having to re sync one of my config servers once a day after I added a couple shard servers
[19:51:30] <jpfarias> it is pretty annoying and I dont know if I am losing consistency with this
[19:51:42] <cheeser> network lag maybe
[19:51:55] <cheeser> load perhaps
[19:52:27] <jpfarias> I thought there was a distributed lock or something like that to make sure they were all consistent
[19:52:41] <jpfarias> I don't believe in network lag because they are right next to each other
[19:52:52] <cheeser> "right next to each other?"
[19:52:57] <jpfarias> load is probably not the cause too, it is not under heavy load
[19:53:08] <jpfarias> on the same datacenter connected with a gigabit switch
[19:53:44] <sewardrobert> @cheeser The logs contain an uninformative backtrace.
[19:54:13] <cheeser> sewardrobert: and if you try to start it manually?
[19:54:44] <sewardrobert> I will try a manual restart and see what happens.
[19:54:46] <jpfarias> anyway the shards are almost done balancing so I dont think it will be a big issue
[19:55:13] <jpfarias> but it worries me that some data may have been lost
[19:56:16] <doc_tuna> jpfarias: physical proximity won't do anything if the network is saturated, but you don't think it is?
[19:57:14] <doc_tuna> probably not, just pointing out that even a gigabit switch and 2 feet of cat6 can be laggy
[19:57:26] <doc_tuna> if it's congested
[19:57:37] <cheeser> if there's a lot of balancing going on, congestion is a real possibility
[19:57:47] <doc_tuna> k
[21:05:36] <sewardrobert> We are using mongodb with wiredTiger, however mongo will not restart after a server reboot and software upgrade. The boot fails with a backtrace. Any advice?
[21:08:17] <sewardrobert> Starting mongod with the following command does not produce any standard output.
[21:08:22] <sewardrobert> /usr/bin/mongod -vvvv --config /etc/mongod.conf --storageEngine wiredTiger --dbpath /var/lib/mongodb-wiredTiger
[21:08:37] <sewardrobert> I am at a loss of what to try next.
[21:30:47] <sewardrobert> We are using mongodb with wiredTiger, however mongo will not restart after a server reboot and software upgrade. The boot fails with a backtrace. Any advice?
[22:01:05] <INIT_6_> If I was to move the nmap files off the server as is. Is there a way to come back later and export the information to .json or other format?
[22:28:29] <joannac> sewardrobert: well, the exact error and backtrace would be useful
[22:28:34] <joannac> (pastebin)
[22:28:51] <joannac> INIT_6_: what are nmap files? do you mean mmap?
[22:29:17] <joannac> INIT_6_: if you shut the mongod down first, and move all the files, then sure
[22:29:40] <INIT_6_> yeah sorry mmap
[22:30:02] <INIT_6_> mongodump docs say it can't read them anymore. Changed in version 3.0.0: mongodump removed the --dbpath as well as related --directoryperdb and --journal options. You must use mongodump while connected to a mongod instance.
[22:30:13] <joannac> yes, that's right
[22:30:28] <joannac> start a mongod and run mongodump against the mongod
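[Editor's note: a hedged sketch of joannac's suggestion for the archived mmap files. Paths, port, and database/collection names are example values; mongoexport is one way to get the .json output INIT_6_ asked about.]

```shell
# Point a throwaway mongod at the copied mmap data files...
mongod --dbpath /path/to/copied/files --port 27018

# ...then, from another terminal, dump or export against that instance:
mongodump --port 27018 --out dump/
mongoexport --port 27018 --db mydb --collection mycoll --out mycoll.json
```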
[22:34:05] <sewardrobert> @joannac paste binning.
[22:36:26] <sewardrobert> joannac, http://pastebin.com/xMzvMuPq
[22:36:47] <sewardrobert> backtrace is not terribly informative to me.
[22:37:52] <joannac> sewardrobert: did you shut it down cleanly?
[22:38:05] <joannac> Line 3 indicates corruption of some kind
[22:38:29] <sewardrobert> The server is on a virtual machine. I believe the hosting provider bounced or migrated the VM.
[22:38:43] <sewardrobert> So the bounce or migration may have caused corruption.
[22:39:34] <sewardrobert> Before I noticed that Mongo was down, I also upgraded packages on the server including mongo. Combination of events may have caused the failure?
[22:40:49] <joannac> what version?
[22:43:04] <sewardrobert> 3.03 before upgrade and 3.05 after upgrade.
[22:44:44] <sewardrobert> Sorry. Versions were as follows.
[22:45:14] <joannac> sewardrobert: can you look back in the logs and see if the mongod came back up on 3.0.5?
[22:46:08] <sewardrobert> Yes. I will look.
[22:46:10] <sewardrobert> Before Upgrade
[22:46:12] <sewardrobert> root@solr:~# mongod --version
[22:46:14] <sewardrobert> db version v3.0.3
[22:46:16] <sewardrobert> git version: b40106b36eecd1b4407eb1ad1af6bc60593c6105
[22:46:18] <sewardrobert> OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014
[22:46:22] <sewardrobert> After Upgrade
[22:46:24] <sewardrobert> root@int:/var/lib/mongodb-wiredTiger# mongod --version
[22:46:26] <sewardrobert> db version v3.0.6
[22:46:28] <sewardrobert> git version: 1ef45a23a4c5e3480ac919b28afcba3c615488f2
[22:55:55] <sewardrobert> joannac, It would appear that mongo was up for a long time. 7 months. The VM migration / restart made mongo shut down (perhaps uncleanly?). Mongo has never successfully restarted as 3.0.4 or 3.0.6 (its current version).
[23:05:39] <joannac> just an unclean shutdown would not corrupt files afaik
[23:07:08] <sewardrobert> joannac, And yet that backtrace is all I have to go on.
[23:09:04] <joannac> sewardrobert: so when you upgraded to 3.0.4, you cleanly shut down, and then when you started again with 3.0.4 you got that error?
[23:09:04] <sewardrobert> joannac, This sequence of events is likely. VM was shut down by hosting provider. Mongo stayed down. We are starting it manually. Mongo was upgraded from 3.0.3 originally to 3.0.5 and then again to 3.0.6. The 3.0.5 and 3.0.6 versions were unable to reload the "corrupted" wiredTiger database.
[23:09:47] <sewardrobert> joannac, we skipped version 3.0.4.
[23:09:54] <joannac> oh, if the corruption started after the vm shutdown, then i got nothing
[23:10:06] <joannac> probably corruption moving data across
[23:10:39] <pamp> Hi
[23:11:02] <sewardrobert> joannac, any suggestions for recovering the data?
[23:11:07] <joannac> backups?
[23:11:28] <pamp> Why are writes faster in a standalone instance than in a sharded environment
[23:11:44] <sewardrobert> Ha! Funny. OK. I will try to spin another virtual on 3.0.3 and see if the DB can be opened.
[23:12:02] <pamp> I can write 20k per second in a standalone but only 3k in a sharded cluster
[23:12:21] <pamp> using an hashed _id as a shard key
[23:16:53] <joannac> pamp: yes, it's slower.
[23:18:49] <pamp> but, shouldn't it be faster
[23:18:51] <pamp> ?
[23:19:49] <joannac> no. you're adding more network latency
[23:20:14] <joannac> and possible chunk splits, chunk moves
[23:29:09] <pamp> what's the best way to improve the write rate in a sharded cluster?
[23:29:19] <pamp> preSpliting?
[23:29:37] <pamp> is it possible with a hashed shard key?
[23:53:53] <daidoji> pamp: what do you mean?
[23:55:56] <pamp> how can I improve the write rate in a sharded cluster
[23:56:25] <pamp> i can write more than 20k in a standalone instance
[23:56:42] <pamp> and only 3k in a sharded cluster
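[Editor's note: pamp's pre-splitting question went unanswered in the log. With a hashed shard key, MongoDB can create the initial chunks up front via numInitialChunks on the shardCollection command; a hedged sketch, where the namespace and chunk count are example values and the command runs against a mongos.]

```shell
mongo --eval 'db.adminCommand({
  shardCollection: "mydb.mycoll",
  key: { _id: "hashed" },
  numInitialChunks: 128
})'
```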