PMXBOT Log file Viewer


#mongodb logs for Monday the 13th of June, 2016

[05:21:14] <aldwinaldwin> Hello, good day or night ...
[05:22:05] <aldwinaldwin> Anybody heard about this phenomenon? Thousands of chunks get created with only a few documents per chunk, chunk size: 614 bytes each.
[05:23:31] <aldwinaldwin> First in 3.0.3 ... then i exported the data ... installed clean 3.2.6 with WT ... configured the indexes and shard, imported ... and got the same issue again
[05:31:39] <Boomtime> can you post the command you used to shard the collection initially?
[05:37:21] <aldwinaldwin> sh.enableSharding("testShard"); use testShard; db.events.createIndex( { organization_id:1, floor_id:1, device_id:1, ttl_date: 1}); sh.shardCollection("testShard.events", { organization_id:1, floor_id:1, device_id:1, ttl_date: 1});
[05:38:14] <aldwinaldwin> there is also a ttl index on ttl_date
[05:41:27] <Boomtime> how long does a document typically last?
[05:41:35] <aldwinaldwin> i will try with a fresh vm, fresh install, make a test case and see if i can replicate the issue easily ... just was wondering if anyone saw this situation ... cause nothing on google to be found
[05:41:36] <Boomtime> before the TTL deletes it
[05:41:52] <aldwinaldwin> ttl is set to 95 days
[05:42:10] <Boomtime> chunks are never destroyed, once they exist they exist forever essentially
[05:42:30] <aldwinaldwin> indeed, unless they are empty, then i can merge
[05:42:35] <Boomtime> right
[05:42:47] <aldwinaldwin> i was going to let a script run weekly to merge empty chunks
[05:42:56] <Boomtime> fair enough
[05:43:14] <Boomtime> but you've experienced lots of small chunks right after a big import?
[05:43:23] <Boomtime> and from an empty start?
[05:43:25] <aldwinaldwin> i'll try to create a script to replicate the issue easily. will come back to this then
[05:43:34] <Boomtime> yep, good
[05:43:40] <aldwinaldwin> empty start, same issue
[05:43:57] <aldwinaldwin> thank you
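The weekly cleanup aldwinaldwin describes could be sketched in the mongo shell, run against a mongos. This is a sketch, not a tested script: the namespace and shard key are taken from the commands earlier in the log, and it assumes adjacent chunks live on the same shard (a requirement of mergeChunks).

```
// Sketch: merge each empty chunk into its preceding neighbour.
var ns = "testShard.events";
var key = { organization_id: 1, floor_id: 1, device_id: 1, ttl_date: 1 };
var chunks = db.getSiblingDB("config").chunks.find({ ns: ns }).sort({ min: 1 }).toArray();
for (var i = 1; i < chunks.length; i++) {
    // dataSize counts the documents inside a chunk's key range.
    var size = db.adminCommand({ dataSize: ns, keyPattern: key,
                                 min: chunks[i].min, max: chunks[i].max });
    if (size.numObjects === 0) {
        // Both chunks must be contiguous and on the same shard.
        db.adminCommand({ mergeChunks: ns, bounds: [chunks[i - 1].min, chunks[i].max] });
    }
}
```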
[11:56:44] <sumi> hello
[11:56:53] <Zelest> heya
[13:47:10] <scmp> Hi, in an aggregation pipeline, will/can 3.0/3.2 use multiple indexes for $sort / $match ? (as long as they come before $project/$unwind/$group)
[13:47:29] <scmp> use index A for $sort and index B for $match
[14:24:54] <jgornick> Hey folks, using the example of results with the answers array in subdocuments towards the bottom from https://docs.mongodb.com/manual/reference/operator/update/pull/, is there a way in which I can pull items from the answers array and not pull the results array item?
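Assuming the "results"/"answers" shape from the linked docs page, one way on 3.0/3.2 is to match the results element in the query and then address it with the positional $, so only entries inside its answers array are pulled, not the results item itself (arrayFilters only arrived in 3.6). A sketch with the docs' field names:

```
db.survey.update(
    { "results.answers": { $elemMatch: { q: 2, a: { $gte: 8 } } } },
    { $pull: { "results.$.answers": { q: 2, a: { $gte: 8 } } } }
);
```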
[14:29:21] <Ben_1> in the async driver there is a function called "into". I have to pass a param of the type "A" to that method. does somebody know what this type is?
[14:29:34] <Ben_1> tried to pass an arraylist but compiler said NO
[14:32:03] <Ben_1> thought A should be a collection and arraylist is a collection but it won't work
[14:32:36] <KodiakFiresmith> Hi folks, hoping someone might be able to help me understand something I read in the mongodump man page: --out "When writing standard output, mongodump does not write the metadata that writes in a <dbname>.metadata.json file when writing to files directly." - what are the implications of losing the metadata files by piping through gzip?
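The metadata.json files hold the index definitions and collection options, so a restore from plain piped BSON only gets the default _id index back. Since 3.2, the archive format keeps metadata and compresses in one step; a sketch with a hypothetical database name:

```
# Archive output preserves <dbname>.metadata.json content, unlike raw stdout:
mongodump --db mydb --archive=mydb.archive.gz --gzip
mongorestore --archive=mydb.archive.gz --gzip
```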
[15:06:37] <dino82> Mongo replication of ~110GB filled 380GB disks again, trying with wiredtiger engine this time
[15:10:14] <saml> that's big data
[15:27:57] <bratner> Hi! I want to store time series data samples (double-precision) in a pre-allocated array or sub-document in a document with additional metadata. Is there a way to assess the size of a document? If i create a test document, can i check what is its on-disk size?
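For the BSON size question: the mongo shell can report a single document's size, and collection stats carry the averages. A sketch with a hypothetical collection name; note that under WiredTiger the on-disk size also depends on compression, so BSON size is an upper-bound estimate:

```
var doc = db.samples.findOne();
Object.bsonsize(doc);          // BSON size of this document, in bytes
db.samples.stats().avgObjSize; // average uncompressed object size for the collection
```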
[15:43:50] <hehnope> Does mongo free up space after it deletes TTL data?
[15:47:30] <cheeser> not with mmapv1
[15:47:58] <hehnope> how can I check that?
[15:48:07] <cheeser> if by free up space you mean "does the hard drive look like it has more free space"
[15:48:30] <hehnope> i have a TTL on a collection; data is being removed but disk space is not being freed up to the system.
[15:50:00] <hehnope> or do i just run regular repairs?
[15:50:43] <cheeser> db.serverStatus().storageEngine
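Putting cheeser's pointer together: check the engine first, then pick the reclaim strategy. A sketch with a hypothetical collection name; under mmapv1, compact defragments data files but does not shrink them (repairDatabase rewrites the files and needs free disk), while WiredTiger's compact can return space to the OS:

```
db.serverStatus().storageEngine.name;   // e.g. "mmapv1" or "wiredTiger"
db.runCommand({ compact: "mycollection" });
```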
[16:10:13] <duggles> Howdy. I tried converting two v2.6.4 shards into two replicated shards (2 rs + 1 arb). My router is just returning null for every query now. If I do findOne on either of the shards I get a non-null result. Can anybody point me in the right direction please?
[16:37:06] <dino82> I've read the documentation, I just wanted to be crystal clear -- do nodes stay in STARTUP2 mode until all data is replicated and they at that point turn into a SECONDARY ?
[17:33:13] <jayjo_> I'm trying to connect a domo instance to my mongodb hosted on ec2. I'm getting this error from domo, which is using java: Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}. Is this a config issue I have on my mongo instance?
[17:35:01] <cheeser> yes
[17:35:17] <cheeser> or the driver can't see that port on ec2
[17:35:44] <cheeser> make sure you're binding to a public IP address and that you have that port visible in the security settings on ec2
[17:36:10] <jayjo_> OK, i have my bindIp as 0.0.0.0 - it should be my public ip?
[17:37:13] <dino82> Does your EC2 instance have a public IP?
[17:37:54] <jayjo_> yes it does
[17:38:12] <dino82> And your security group has TCP inbound port 27017 open?
[17:38:31] <dino82> Or whatever port you need opened
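The bind setting being discussed lives in mongod.conf; a minimal sketch (values are examples). 0.0.0.0 listens on all interfaces, including the public one, so it should be paired with an EC2 security group rule that allows inbound TCP 27017 from trusted sources only:

```yaml
net:
  port: 27017
  bindIp: 0.0.0.0
```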
[17:39:27] <jayjo_> Yes it does. I can connect to the db using the IP address from the mongo shell
[17:39:51] <saml> where do you run java?
[17:39:58] <saml> the same machine you ran mongo shell?
[17:40:18] <cheeser> from the shell on the same machine you're trying to connect via java?
[17:42:34] <jayjo_> Well I'm trying to connect via java on a domo instance. I can connect via the shell on the EC2 instance (when I am ssh'd in, then via localhost) and on my personal machine using a host in the mongo command and through port 27017.
[17:44:26] <cheeser> so you can connect via the shell from your personal machine but not from java on your personal machine?
[17:44:41] <saml> jayjo_, show us your java app's mongodb configuration
[17:44:57] <saml> maybe it's trying to connect to localhost where mongod isn't running
[17:45:32] <saml> show us code where it initializes mongo client or something
[17:47:36] <jayjo_> The java is running on an instance I don't have shell access to... I can only connect to it via their interface. That error message is all that I have currently... I wasn't sure if it was a standard error. I'll try to get ahold of the configurations the instance is using
[17:48:19] <cheeser> might be a routing issue between wherever that host is and the ec2 instance
[17:48:19] <jayjo_> But because it's working from my machine & locally, I think it is not a misconfiguration of the db instance, but probably of the way they are establishing the connection
[17:48:28] <cheeser> perhaps ec2 isn't configured to let that host in
[17:51:15] <dino82> Sounds like a security group issue
[17:51:21] <cheeser> yeah
[17:53:49] <saml> ask for full aws rights or quit
[17:54:17] <saml> or just play the politics game. tell your manager you're blocked by aws managing group
[17:58:43] <jayjo_> haha... let me look into the security groups. I think you're right
[19:41:49] <Forbidd3n> How should I store dates in MongoDB? I am now using UTCDateTime object
[19:42:03] <Forbidd3n> do I store that object which is an object of milliseconds?
[19:45:35] <cheeser> the drivers should transform your native Date type to BSON for you
[19:46:14] <Forbidd3n> In PHP I am using UTCDateTime(strtotime('2016-06-13')*1000);
[19:46:21] <Forbidd3n> and storing it that way, is this correct?
[19:46:27] <cheeser> i don't PHP
[19:46:54] <Forbidd3n> regardless of language. What is the best format to store in MongoDB?
[19:47:21] <cheeser> use your native date type and hand it to the driver. it'll do the right thing.
[19:47:49] <Forbidd3n> Can you show me a sample in your language of preference if you don't mind, please?
[19:48:24] <cheeser> new BasicDBObject("date", new Date())
[19:48:32] <cheeser> hand that off to the driver, and voilà!
[19:48:36] <Forbidd3n> thanks
[19:48:55] <Forbidd3n> not sure I understand what you mean by hand that off to the driver
[19:49:08] <cheeser> tell the driver to save it
[19:49:13] <Forbidd3n> the driver in PHP is BSON/UTCDateTime
[19:49:50] <Forbidd3n> so I pass the PHP Date() to the driver and store the result which in my case is an object with milliseconds
[19:50:13] <cheeser> what happened when you tried?
[19:50:33] <Forbidd3n> it is showing empty in MongoDB {}
[19:50:58] <cheeser> then something else is off.
[19:51:46] <Forbidd3n> but when I print out the variable it shows - MongoDB\BSON\UTCDateTime Object ( [milliseconds] => 1465171200000 )
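The milliseconds figure printed above is just a Unix epoch timestamp in milliseconds: a BSON date is a 64-bit count of milliseconds since the epoch, UTC, which is exactly what PHP's strtotime(...) * 1000 produces and what JavaScript's Date wraps. For example:

```javascript
// A BSON date is stored as milliseconds since the Unix epoch (UTC).
const ms = new Date("2016-06-13T00:00:00Z").getTime();
console.log(ms); // 1465776000000
```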
[19:58:32] <Forbidd3n> cheeser: what is returned with that BasicDBObject
[19:58:42] <cheeser> a BasicDBObject
[19:58:52] <cheeser> it's an object/class in the java driver
[19:59:45] <Forbidd3n> What properties does the object hold?
[19:59:51] <Forbidd3n> I'm just trying to compare
[20:00:45] <cheeser> it's just a Map/dictionary
[20:14:19] <bfig> hello, I'm having an issue importing data. I have a bson file, and if I bsondump it, it says '1 objects found' - I assume this means there was no error
[20:14:55] <bfig> but if I use mongorestore I get 16619 error FailedToParse Bad characters in value: offset:17
[21:15:07] <KostyaSha> GothAlice, can i help somehow with https://jira.mongodb.org/browse/SERVER-7285?focusedCommentId=1287022&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-1287022 ?
[21:15:42] <cheeser> looks fixed
[21:16:20] <KostyaSha> nofxx, i think the assumption about absent upgrades between sysv->systemd scripts is wrong
[21:16:35] <KostyaSha> sorry for wrong completion (xchat fails somehow on nick completion)
[21:17:12] <GothAlice> Yeah, sorry, I'm not generally involved in that ticket, nor use those host platforms. :/
[21:17:24] <KostyaSha> cheeser, rpm has hooks, and they have input parameters that let you tell whether it's an upgrade/delete/installation, so it should be possible to stop the old sysv script and run the new systemd script
[21:18:45] <cheeser> well, sure. but that ticket says it's fixed.
[21:20:42] <KostyaSha> cheeser, hm.. i see only that debian picked script for 3.2.8
[21:21:43] <cheeser> yeah, i don't know any details other than sam says he fixed it. :)
[21:23:03] <KostyaSha> cheeser, "fixed" = not done :)
[21:23:10] <KostyaSha> cheeser, see body of comment that i replied
[21:23:25] <KostyaSha> cheeser, is there anybody responsible for packaging?
[21:23:49] <cheeser> there is. i don't know who though ernie is a good guess
[21:29:05] <KostyaSha> btw topic could be updated, 3.2.7 :)
[22:00:28] <Forbidd3n> I am storing dates in MongoDB as ISO dates. How would I do a query on all records that have the date in an array of dates?
[22:00:53] <Forbidd3n> for example dateBegin >= 'begindate', dateEnd <= 'endDate'
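A sketch of both query shapes, with hypothetical collection and field names. Comparisons against BSON dates need Date/ISODate values, not strings; and matching a single date against an array-of-dates field works by plain equality, because MongoDB matches array fields by containment:

```
// Range over two scalar date fields:
db.events.find({
    dateBegin: { $gte: ISODate("2016-06-01T00:00:00Z") },
    dateEnd:   { $lte: ISODate("2016-06-30T00:00:00Z") }
});

// Documents whose "dates" array contains a given date:
db.events.find({ dates: ISODate("2016-06-13T00:00:00Z") });
```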
[22:11:36] <Forbidd3n> quick question on schema concept. I have port name, date, ship name and cruise line. Would it be best to make each one of these its own document for querying, or should I do date, port name and an array of ships
[23:02:11] <leptone> can I overwrite a collection with the contents on another collection? something like this: db.users = db.new_users
[23:02:11] <leptone> ?
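There is no direct assignment like db.users = db.new_users, but two shell idioms get close (hypothetical collection names). $out atomically replaces the target collection with the pipeline's output; renameCollection with dropTarget=true swaps the collection in, dropping the old one:

```
// Copy new_users over users via aggregation:
db.new_users.aggregate([{ $match: {} }, { $out: "users" }]);

// Or rename, dropping the existing target:
db.new_users.renameCollection("users", true);
```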
[23:04:44] <jayjo> Does anyone know of a good resource for establishing ssl connection on mysql db?
[23:04:50] <jayjo> oops, on mongodb I mean
[23:05:52] <diegoaguilar> jayjo, arent official docs enough?
[23:09:26] <jayjo> I'm struggling with it using only the docs. It appears like there are separate components... getting the CA cert to establish the identity of the database isn't enough to connect with SSL... the client certs need to be generated with this same CA. Is that right? Or maybe there's an additional resource to see it from a different angle
[23:13:16] <jayjo> Is it just enabling the mongod service with SSL, so it will have a CA file and PEM file to start, and then using that same CA to generate user certificates?
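That is roughly the shape of it: one CA signs both the server certificate and any client certificates, mongod presents its own PEM (certificate plus key) and validates clients against the CA file. A mongod.conf sketch with hypothetical paths:

```yaml
net:
  ssl:
    mode: requireSSL
    PEMKeyFile: /etc/ssl/mongodb.pem
    CAFile: /etc/ssl/ca.pem
```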
[23:36:05] <sector_0> is there a way to enforce uniqueness for a particular field in a document?
[23:36:23] <sector_0> ...well a particular field across multiple documents
[23:38:54] <GothAlice> sector_0: https://docs.mongodb.com/manual/core/index-unique/
[23:39:11] <sector_0> GothAlice, thanks
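From the linked page, the one-liner for sector_0's question, with a hypothetical field name. Index creation fails if duplicate values already exist in the collection:

```
// Enforce uniqueness of "email" across all documents in the collection:
db.users.createIndex({ email: 1 }, { unique: true });
```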
[23:39:35] <sector_0> wait a min, you again
[23:39:36] <sector_0> lol
[23:39:37] <sector_0> hey
[23:40:33] <GothAlice> Hey-o.
[23:40:52] <GothAlice> So many JIRA tickets to poke today. T_T
[23:41:20] <GothAlice> "So we have this special case…" "Yeah…" "But it's not special enough. We need it more special." "Oh… kay?"
[23:50:08] <Forbidd3n> if I have 'ports' => [ ['name'=>'Port Name','dates'=>['date1','date2','date3'] ] - I am trying to append a date to the port where name is 'Port Name'
[23:51:23] <sector_0> GothAlice, does using indexes mean that that field is "enforced"
[23:51:32] <sector_0> ...or required?
[23:52:38] <GothAlice> sector_0: Not quite sure what you mean, but https://docs.mongodb.com/manual/core/index-unique/#unique-partial-indexes may be helpful.
[23:53:08] <Forbidd3n> I am trying to append to dates with update
[23:53:26] <GothAlice> sector_0: Depending on how you configure it, either every value must be unique (i.e. null may only show up once) or you can control the condition under which the uniqueness is constrained.
[23:53:37] <Forbidd3n> but I get cannot use part (ports of ports.dates) to traverse the element
[23:54:05] <sector_0> GothAlice, right that's what I wanted
[23:54:17] <sector_0> saw it on the same page you linked thanks
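GothAlice's point about controlling when uniqueness applies: a partial filter expression (3.2+) scopes the constraint, so documents missing the field don't all collide on a single null entry. A sketch with hypothetical names:

```
db.users.createIndex(
    { email: 1 },
    { unique: true, partialFilterExpression: { email: { $exists: true } } }
);
```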
[23:54:39] <Forbidd3n> GothAlice: would it be possible to offer some help if you have time, please?
[23:55:33] <GothAlice> Forbidd3n: Unfortunately I can't really give detailed assistance at the current time (swamped with work-work), but https://docs.mongodb.com/manual/reference/operator/update/push/ would probably be what you're wanting.
[23:55:45] <GothAlice> Unfortunately, I don't have any idea what language or MongoDB driver you are using.
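The "cannot use the part (ports of ports.dates)" error comes from addressing the nested array with a bare path; matching the array element in the query first lets the positional $ resolve it. A mongo-shell sketch using the shape from Forbidd3n's example (collection name hypothetical):

```
db.cruises.update(
    { "ports.name": "Port Name" },
    { $push: { "ports.$.dates": "date4" } }
);
```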
[23:55:47] <rpad> Setup user authentication but remote clients are able to soft-connect and not do anything. is there a way to force mongo to not even allow these connections to succeed?
[23:56:04] <GothAlice> rpad: That's non-sensical. How can a client authenticate if it can't connect?
[23:56:30] <rpad> @GothAlice Not incredibly familiar with mongo. Could we force a disconnect if auth fails?
[23:56:52] <rpad> bah, darn hipchat habits
[23:56:53] <GothAlice> Then the original problem you mention, clients "connecting then doing nothing", is still unresolved.
[23:58:42] <rpad> GothAlice: I think you're right. Not a very sensical question
[23:59:16] <GothAlice> It's a bit of a chicken-egg problem; you can't verify the connection should stay alive unless you allow the connection in the first place.
[23:59:34] <GothAlice> And by then it's too late; you've allowed the connection. ;P
[23:59:58] <GothAlice> If possible, it's a good idea to use firewall rules to limit access control.