[05:21:14] <aldwinaldwin> Hello, good day or night ...
[05:22:05] <aldwinaldwin> Anybody heard about this phenomenon? Thousands of chunks get created with only a few documents per chunk, chunk size: 614 bytes each.
[05:23:31] <aldwinaldwin> First in 3.0.3 ... then i exported the data ... installed clean 3.2.6 with WT ... configured the indexes and shard, imported ... and got the same issue again
[05:31:39] <Boomtime> can you post the command you used to shard the collection initially?
[05:38:14] <aldwinaldwin> there is also a ttl index on ttl_date
[05:41:27] <Boomtime> how long does a document typically last?
[05:41:35] <aldwinaldwin> i will try with a fresh vm, fresh install, make a test case and see if i can replicate the issue easily ... just was wondering if anyone saw this situation ... cause nothing on google to be found
[13:47:10] <scmp> Hi, in an aggregation pipeline, will/can 3.0/3.2 use multiple indexes for $sort / $match ? (as long as they come before $project/$unwind/$group)
[13:47:29] <scmp> use index A for $sort and index B for $match
[14:24:54] <jgornick> Hey folks, using the example of results with the answers array in subdocuments towards the bottom from https://docs.mongodb.com/manual/reference/operator/update/pull/, is there a way in which I can pull items from the answers array and not pull the results array item?
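A sketch of what jgornick is after, assuming the document shape from the linked $pull docs page (a `results` array whose elements each hold an `answers` array): to remove matching `answers` entries without pulling the whole `results` element, match the element in the filter and use the positional `$` operator in the update. This is untested against a live server; in the shell you'd pass these two objects to `db.survey.updateOne(filter, update)`.

```javascript
// Hypothetical filter/update pair for db.survey.updateOne(filter, update).
// The positional $ resolves to the first results element matched by the filter,
// so only entries inside its answers array are pulled, not the results element.
const filter = { "results.answers": { $elemMatch: { q: 2, a: { $gte: 8 } } } };
const update = { $pull: { "results.$.answers": { q: 2, a: { $gte: 8 } } } };
console.log(JSON.stringify({ filter, update }));
```

Note the positional `$` only targets the first matching `results` element per update in this MongoDB era (3.0/3.2).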
[14:29:21] <Ben_1> in the async driver there is a function called "into". I have to pass a param of the type "A" to that method. does somebody know what this type is?
[14:29:34] <Ben_1> tried to pass an arraylist but compiler said NO
[14:32:03] <Ben_1> thought A should be a collection and arraylist is a collection but it won't work
[14:32:36] <KodiakFiresmith> Hi folks, hoping someone might be able to help me understand something I read in the mongodump man page: --out "When writing standard output, mongodump does not write the metadata that writes in a <dbname>.metadata.json file when writing to files directly." - what are the implications of losing the metadata files by piping through gzip?
[15:06:37] <dino82> Mongo replication of ~110GB filled 380GB disks again, trying with wiredtiger engine this time
[15:27:57] <bratner> Hi! I want to store time series data samples (double-precision) in a pre-allocated array or sub-document in a document with additional metadata. Is there a way to assess the size of a document? If i create a test document, can i check what its on-disk size is?
[15:43:50] <hehnope> Does mongo free up space after it deletes TTL data?
[16:10:13] <duggles> Howdy. I tried converting two v2.6.4 shards into two replicated shards (2 rs + 1 arb). My router is just returning null for every query now. If I do findOne on either of the shards I get a non-null result. Can anybody point me in the right direction please?
[16:37:06] <dino82> I've read the documentation, I just wanted to be crystal clear -- do nodes stay in STARTUP2 mode until all data is replicated and they at that point turn into a SECONDARY ?
[17:33:13] <jayjo_> I'm trying to connect a domo instance to my mongodb hosted on ec2. I'm getting this error from domo, which is using java: Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}. Is this a config issue I have on my mongo instance?
[17:39:58] <saml> the same machine you ran mongo shell?
[17:40:18] <cheeser> from the shell on the same machine you're trying to connect via java?
[17:42:34] <jayjo_> Well I'm trying to connect via java on a domo instance. I can connect via the shell on the EC2 instance (when I am ssh'd in, then via localhost) and on my personal machine using a host in the mongo command and through port 27017.
[17:44:26] <cheeser> so you can connect via the shell from your personal machine but not from java on your personal machine?
[17:44:41] <saml> jayjo_, show us your java app's mongodb configuration
[17:44:57] <saml> maybe it's trying to connect to localhost where mongod isn't running
[17:45:32] <saml> show us code where it initializes mongo client or something
[17:47:36] <jayjo_> The java is running on an instance I don't have shell access to... I can only connect to it via their interface. That error message is all that I have currently... I wasn't sure if it was a standard error. I'll try to get ahold of the configuration the instance is using
[17:48:19] <cheeser> might be a routing issue between wherever that host is and the ec2 instance
[17:48:19] <jayjo_> But because it's working from my machine & locally, I think it is not a misconfiguration of the db instance, and probably with the way they are establishing the connection
[17:48:28] <cheeser> perhaps ec2 isn't configured to let that host in
[17:51:15] <dino82> Sounds like a security group issue
[20:14:19] <bfig> hello, I'm having an issue importing data. I have a bson file, if I bsondump it, it says '1 objects found' - I assume this means there was no error
[20:14:55] <bfig> but if I use mongorestore I get 16619 error FailedToParse Bad characters in value: offset:17
[21:15:07] <KostyaSha> GothAlice, can i help somehow with https://jira.mongodb.org/browse/SERVER-7285?focusedCommentId=1287022&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-1287022 ?
[21:16:20] <KostyaSha> nofxx, i think the assumption about absent upgrades between sysv->systemd scripts is wrong
[21:16:35] <KostyaSha> sorry for wrong completion (xchat fails somehow on nick completion)
[21:17:12] <GothAlice> Yeah, sorry, I'm not generally involved in that ticket, nor use those host platforms. :/
[21:17:24] <KostyaSha> cheeser, rpm has hooks, and they have input parameters that let you tell whether it's an upgrade/delete/installation, so it should be possible to stop the old sysv script and run the new systemd script
[21:18:45] <cheeser> well, sure. but that ticket says it's fixed.
[21:20:42] <KostyaSha> cheeser, hm.. i only see that debian picked up the script for 3.2.8
[21:21:43] <cheeser> yeah, i don't know any details other than sam says he fixed it. :)
[21:23:03] <KostyaSha> cheeser, "fixed" = not done :)
[21:23:10] <KostyaSha> cheeser, see the body of the comment that i replied to
[21:23:25] <KostyaSha> cheeser, is there anybody responsible for packaging?
[21:23:49] <cheeser> there is. i don't know who, though ernie is a good guess
[21:29:05] <KostyaSha> btw topic could be updated, 3.2.7 :)
[22:00:28] <Forbidd3n> I am storing dates in MongoDB as ISO dates. How would I do a query on all records that have the date in an array of dates?
[22:00:53] <Forbidd3n> for example dateBegin >= 'begindate', dateEnd <= 'endDate'
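A sketch of both of Forbidd3n's cases, assuming the fields are stored as real BSON dates (ISODate in the shell) and using his field names: a range query on `dateBegin`/`dateEnd`, and an array-contains match, where querying an array field by a plain value matches any element equal to it. The collection name and dates below are placeholders.

```javascript
// Range query: documents whose [dateBegin, dateEnd] lies inside a window.
// In the mongo shell these would be ISODate(...) values; new Date is equivalent.
const windowStart = new Date("2016-07-01T00:00:00Z");
const windowEnd = new Date("2016-07-31T23:59:59Z");
const filter = { dateBegin: { $gte: windowStart }, dateEnd: { $lte: windowEnd } };

// Array-contains: matches docs whose `dates` array holds this exact date.
const targetDate = new Date("2016-07-04T00:00:00Z");
const containsFilter = { dates: targetDate };

// db.bookings.find(filter)          // collection name is hypothetical
// db.bookings.find(containsFilter)
console.log(JSON.stringify({ filter, containsFilter }));
```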
[22:11:36] <Forbidd3n> quick question on schema concept. I have port name, date, ship name and cruise line. Would it be best to make each one of these its own document for querying, or should I do date, port name and an array of ships
[23:02:11] <leptone> can I overwrite a collection with the contents of another collection? something like this: db.users = db.new_users
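There's no assignment syntax like that, but the closest built-in is `renameCollection` with `dropTarget`, which replaces the target collection with the source in one server-side operation; in the shell that's `db.new_users.renameCollection("users", true)`. The admin command behind the helper looks like this ("mydb" is a placeholder database name):

```javascript
// The admin command equivalent of db.new_users.renameCollection("users", true).
// dropTarget: true drops the existing users collection before the rename;
// you'd run it as db.adminCommand(cmd). Note the source collection ceases to
// exist under its old name afterwards.
const cmd = { renameCollection: "mydb.new_users", to: "mydb.users", dropTarget: true };
console.log(JSON.stringify(cmd));
```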
[23:05:52] <diegoaguilar> jayjo, arent official docs enough?
[23:09:26] <jayjo> I'm struggling with it using only the docs. It appears like there are separate components... getting the CA cert to establish the identity of the database isn't enough to connect with SSL... the client certs need to be generated with this same CA. Is that right? Or maybe there's an additional resource to see it from a different angle
[23:13:16] <jayjo> Is it just enabling the mongod service with SSL, so it will have a CA file and PEM file to start, and then using that same CA to generate user certificates?
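That's roughly the shape of it. A minimal sketch of both sides, using the 3.x-era option names (all file paths and the hostname here are hypothetical): mongod presents a server certificate signed by your CA, and when given `--sslCAFile` it can also verify client certificates issued by that same CA.

```shell
# Server: present mongodb.pem (server cert + key, signed by your CA) and
# trust ca.pem for verifying peers. Paths are placeholders.
mongod --sslMode requireSSL \
       --sslPEMKeyFile /etc/ssl/mongodb.pem \
       --sslCAFile /etc/ssl/ca.pem

# Client: trust the same CA, and present a client certificate issued by it.
mongo --ssl --sslCAFile /etc/ssl/ca.pem \
      --sslPEMKeyFile /etc/ssl/client.pem \
      --host db.example.com
```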
[23:36:05] <sector_0> is there a way to enforce uniqueness for a particular field in a document?
[23:36:23] <sector_0> ...well a particular field across multiple documents
[23:40:52] <GothAlice> So many JIRA tickets to poke today. T_T
[23:41:20] <GothAlice> "So we have this special case…" "Yeah…" "But it's not special enough. We need it more special." "Oh… kay?"
[23:50:08] <Forbidd3n> if I have 'ports' => [ ['name'=>'Port Name','dates'=>['date1','date2','date3']] ] - I am trying to append a date to the port where name is 'Port Name'
[23:51:23] <sector_0> GothAlice, does using indexes mean that field is "enforced"
[23:52:38] <GothAlice> sector_0: Not quite sure what you mean, but https://docs.mongodb.com/manual/core/index-unique/#unique-partial-indexes may be helpful.
[23:53:08] <Forbidd3n> I am trying to append to dates with update
[23:53:26] <GothAlice> sector_0: Depending on how you configure it, either every value must be unique (i.e. null may only show up once) or you can control the condition under which the uniqueness is constrained.
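The two variants GothAlice describes, in 3.2+ shell syntax: a plain unique index, where a missing/null value may itself appear only once, and a partial unique index, where only documents matching the filter are constrained, so documents lacking the field don't collide. The `db` object is stubbed here so the sketch runs standalone; in the mongo shell you'd issue the `createIndex` calls directly, and the collection/field names are examples.

```javascript
// Stub so this sketch runs without a server; in the shell, db is real.
const db = { users: { createIndex: (keys, opts) => ({ keys, opts }) } };

// Plain unique index: every value, including null/missing, must be unique.
const plain = db.users.createIndex({ email: 1 }, { unique: true });

// Partial unique index: uniqueness only enforced where email exists.
const partial = db.users.createIndex(
  { email: 1 },
  { unique: true, partialFilterExpression: { email: { $exists: true } } }
);
console.log(JSON.stringify([plain.opts, partial.opts]));
```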
[23:53:37] <Forbidd3n> but I get cannot use part (ports of ports.dates) to traverse the element
[23:54:05] <sector_0> GothAlice, right that's what I wanted
[23:54:17] <sector_0> saw it on the same page you linked thanks
[23:54:39] <Forbidd3n> GothAlice: would it be possible to offer some help if you have time, please?
[23:55:33] <GothAlice> Forbidd3n: Unfortunately I can't really give detailed assistance at the current time (swamped with work-work), but https://docs.mongodb.com/manual/reference/operator/update/push/ would probably be what you're wanting.
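For Forbidd3n's specific error, the usual cause is `$push`-ing to `"ports.dates"` directly; since `ports` is an array, the path needs the positional `$` operator, with the filter selecting which `ports` element to target. A sketch using his field names (the collection name is hypothetical):

```javascript
// Filter/update pair for db.cruises.updateOne(filter, update).
// The positional $ resolves to the index of the ports element whose name
// matched, which avoids "cannot use part (ports of ports.dates) to traverse
// the element".
const filter = { "ports.name": "Port Name" };
const update = { $push: { "ports.$.dates": "date4" } };
console.log(JSON.stringify({ filter, update }));
```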
[23:55:45] <GothAlice> Unfortunately, I don't have any idea what language or MongoDB driver you are using.
[23:55:47] <rpad> Set up user authentication, but remote clients are able to soft-connect and not do anything. is there a way to force mongo to not even allow these connections to succeed?
[23:56:04] <GothAlice> rpad: That's non-sensical. How can a client authenticate if it can't connect?
[23:56:30] <rpad> @GothAlice Not incredibly familiar with mongo. Could we force a disconnect if auth fails?
[23:56:53] <GothAlice> Then the original problem you mention, clients "connecting then doing nothing", is still unresolved.
[23:58:42] <rpad> GothAlice: I think you're right. Not a very sensical question
[23:59:16] <GothAlice> It's a bit of a chicken-egg problem; you can't verify the connection should stay alive unless you allow the connection in the first place.
[23:59:34] <GothAlice> And by then it's too late; you've allowed the connection. ;P
[23:59:58] <GothAlice> If possible, it's a good idea to use firewall rules to limit access control.
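A minimal sketch of that last suggestion, with hypothetical addresses: bind mongod only to the interfaces that need it, and restrict port 27017 at the firewall so untrusted hosts never reach the listener at all.

```shell
# Bind only to loopback and an internal interface (addresses are placeholders).
mongod --bind_ip 127.0.0.1,10.0.0.5

# Example iptables rules: allow the app subnet, drop everyone else.
iptables -A INPUT -p tcp --dport 27017 -s 10.0.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 27017 -j DROP
```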