PMXBOT Log file Viewer


#mongodb logs for Tuesday the 14th of June, 2016

[00:00:14] <GothAlice> I.e. if your authorized users connect from consistent IPs. If they don't, VPNs might become a more usable solution.
[00:00:38] <GothAlice> These are practical approaches to limiting connections.
[00:01:51] <Forbidd3n> GothAlice: in a nutshell, I am trying to append to dates but ports.dates doesn't work, would I need {ports: {port: {name: 'Port Name', dates: [{date:'date1'},{date:'date2'}]}}}
[00:02:31] <GothAlice> rpad: (Similar problem to authentication over HTTP; you need to allow the connection to send the credentials, by which time it's too late to deny the _connection_ itself, even if the credentials are unsatisfactory; you just have to return an error message and hope the client understands.)
[00:03:08] <GothAlice> Forbidd3n: You're nesting several levels deep. "dates" there is "ports.port.dates".
[00:03:10] <Forbidd3n> I currently have it as this - {ports: [{name:'Port Name',dates: [{date:'date1'},{date:'date2'}]}] (not sure if my brackets are correct, but it is an example)
[00:03:31] <Forbidd3n> GothAlice: correct so I will need port inside of ports for each object, correct
[00:03:43] <Forbidd3n> to be able to update it
[00:08:29] <Forbidd3n> GothAlice: so my question is: I would need ports.port.dates; I can't update a date with just ports.dates
[00:09:51] <jayjo> Do I have to generate a .pem file for each user on my db and then distribute the key to them somehow if I need to use SSL?
[00:11:53] <GothAlice> Forbidd3n: You've given two completely different structures as examples. The most recent one has "ports" as a list of embedded documents, the first had "ports" as a single embedded document. There's a world of difference. In your most recent one, "ports.dates" is a thing, and would work to $push to. In the first, it's the deeper "ports.port.dates" reference.
[00:13:14] <GothAlice> I'm very sorry, however, that I don't have the brainpower needed to properly understand your question right now.
[00:13:36] <Forbidd3n> GothAlice: in my first example I had ports which is an array of objects {name: 'Port Name', dates: {}} and in the second I added 'port' and moved the object below port
[00:14:40] <Forbidd3n> I am trying to update dates and add a date if it doesn't exist, but the issue is ports.dates doesn't allow it since it doesn't know which port object to update
[00:15:21] <Forbidd3n> port.dates, so that is why I am thinking I need ports.port.date so when I find the ports.port.name that matches then I can update that ports.port.dates
[00:15:41] <Forbidd3n> cheeser: you around? maybe you can shed some light on my question if you have time
[00:42:21] <Forbidd3n> Let me give a better example. Say I have {authors: [{name: 'John', books: [{'b1'},{'b2'}]}] - how would I append to books where name is $eq to 'John' ?
[00:43:51] <Forbidd3n> I tried update({authors.name: 'John', {$addToSet { authors.books: {'b3'} }}});
[00:44:43] <Forbidd3n> but it's giving a traverse error since it doesn't know the record of John to update in authors
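(For reference, the append Forbidd3n is attempting is normally written with the positional $ operator, matching the array element in the query first. A minimal mongo-shell sketch, assuming books holds plain strings; the collection name is illustrative:)

    db.articles.update(
        { "authors.name": "John" },
        { $addToSet: { "authors.$.books": "b3" } }
    )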
[00:45:15] <Forbidd3n> ugh, that isn't a good example of my issue. NVM!
[01:44:24] <Forbidd3n> I am doing an $and on two different fields for update, but it isn't doing $addToSet - here is my code in PHP - maybe someone can help please. - http://pastebin.com/raw/yZT7uc7m
[01:44:59] <Forbidd3n> if the schedules.port exists and the schedules.date doesn't it still isn't doing $addToSet
[02:42:08] <jgornick> Hey folks, using the example of results with the answers array in subdocuments towards the bottom from https://docs.mongodb.com/manual/reference/operator/update/pull/, is there a way in which I can pull items from the answers array and not pull the results array item?
[03:19:32] <jayjo_> I'm getting an error still trying to set up SSL on my mongodb instance. I'm not able to get this to work... cannot read PEM key file: /home/ubuntu/certs.../ error:0906D06C:PEM routines:PEM_read_bio:no start line
[03:23:54] <Boomtime> @jayjo: the PEM file needs to be in PEM format, and actually exist - run this: openssl x509 -text -in <pemfile>
[03:28:28] <jayjo_> I still get unable to load certificate, even though I can cat it: 140111204554400:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:703:Expecting: TRUSTED CERTIFICATE
[03:31:16] <Boomtime> server start-up log to a gist/pastebin
[03:31:35] <Boomtime> also, the output of the openssl command i provided
[03:31:54] <Boomtime> (it only prints the public certificate, not the private key, but that needs to be in the file too)
[03:35:41] <jayjo_> the logs are saying the same thing... cannot read PEM key file:. The output from the command was unable to load certificate, even though it's there. I'm going to redo the certificate process... but to make sure this is correct logic. I create the CA Authority. I will then sign a cert from this CA so the daemon mongod will run with the CAFile and a PEMFile, and then create additional client certificates using
[03:35:47] <jayjo_> the same CA to be distributed to client services?
[03:36:19] <Boomtime> or you could, you know, try to actually debug what you have
[03:37:03] <Boomtime> but if you think that re-doing the same procedure which didn't work last time, has a shot of working if you do the exact same thing over again, then sure
[03:37:57] <Boomtime> to answer your question; it is preferable if clients use the same CA for their certificates - it is a requirement that all servers use the same CA
[03:38:54] <Boomtime> also, clients don't strictly need their own certs (allowConnectionsWithoutCertificates) but it can be helpful to increase security
[03:39:36] <jayjo_> but it's not a requirement that a client knows anything about the CA that signed the server cert?
[03:40:04] <Boomtime> that's up to the client, does the client want to verify the server cert?
[03:40:37] <Boomtime> if you want a client to actually be capable of detecting MITM attacks then yes, they need the CA cert
[03:40:59] <Boomtime> that's basic SSL/TLS btw, nothing to do with mongodb
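(A rough sketch of the CA workflow jayjo_ is describing, using stock openssl commands and the mongod SSL options from the 3.x docs; all file names are illustrative:)

    # create a CA, sign a server certificate, and build the combined PEM mongod expects
    openssl genrsa -out ca.key 4096
    openssl req -new -x509 -days 365 -key ca.key -out ca.crt
    openssl genrsa -out server.key 2048
    openssl req -new -key server.key -out server.csr
    openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt
    cat server.key server.crt > server.pem
    # run mongod with the combined key+cert and the CA; clients signed by the same CA can verify it
    mongod --sslMode requireSSL --sslPEMKeyFile /path/to/server.pem --sslCAFile /path/to/ca.crt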
[03:41:16] <sector_0> if I have 2 databases (dbA, and dbB) would an operation on dbA block an operation on dbB?
[03:42:27] <Boomtime> @sector_0: no - but the answer can actually be "it's complicated" - for example, dropping database B affects the global namespace so may briefly impact new operations on database A
[03:43:14] <Boomtime> in general, operations on one database have no direct bearing on operations on another database, but if you max out your cpu then everything is affected
[03:43:54] <sector_0> Boomtime, well I was more referring to lookups
[03:44:11] <Boomtime> queries, generally no
[03:44:16] <sector_0> ok cool
[03:45:48] <Boomtime> there is a lot of nuance to this though, in several places because the host machine only has a certain amount of capacity; cpu, disk, and memory are all fixed
[06:30:14] <Forbidd3n> anyone that can assist with this error - the operator must be the only field in a pipeline object (at '$lte'
[06:42:43] <Boomtime> @Forbidd3n: pastebin/gist your aggregation pipeline, or the stage that has that in it
[06:48:16] <Forbidd3n> Boomtime: does this look alright for my aggregate query criteria? - http://pastebin.com/raw/76fpLcv7
[06:48:50] <Forbidd3n> it is bringing back results, but it is bringing back all schedules, not just ones that match the $match criteria
[06:54:39] <Boomtime> @Forbidd3n: that agg pipe is not valid JSON
[06:55:06] <Forbidd3n> let me repaste it one second
[06:55:09] <Boomtime> -> [$$schedule.date,
[06:55:36] <Boomtime> unless this is node and you have a javascript variable named "$$schedule" it isn't valid
[06:55:41] <Forbidd3n> this should be - http://pastebin.com/raw/TPiJTAzc
[06:56:03] <Boomtime> better..
[06:56:03] <Forbidd3n> I am formulating this in PHP to create the query array
[06:57:20] <Boomtime> erg.. you use formatted strings for timestamps?
[06:57:46] <Forbidd3n> it is ISO format
[06:57:49] <Boomtime> is this a scalar value -> "schedules.date"? or an array?
[06:57:49] <Forbidd3n> date('c'
[06:58:07] <Boomtime> why not use the date datatype?
[06:58:12] <Boomtime> why convert to string at all?
[06:58:31] <Forbidd3n> it is a value inside of an object
[06:58:54] <Forbidd3n> what format should the date be in for storing in MongoDB?
[06:58:58] <Boomtime> anyway, nevermind the date sillyness, "schedules.date" is it scalar or an array?
[06:59:11] <Forbidd3n> I asked earlier and someone suggested ISO date
[06:59:27] <Boomtime> they would have said ISODate
[06:59:31] <Boomtime> which is a datatype
[06:59:55] <Forbidd3n> ISO date correct, but they said to use date('c', strtotime('2016-06-14'))
[07:00:13] <Boomtime> it doesn't matter how you format a string, it's still a string - it is certainly nice that it happens to be a recognized ISO format, but it's still a string
[07:00:25] <Forbidd3n> either way the ISO date string should work with comparisons
[07:00:34] <Boomtime> comparisons yes,
[07:00:42] <Forbidd3n> ok, how should it be stored
[07:00:49] <Forbidd3n> I don't mind changing it to the correct format
[07:00:53] <Boomtime> as a date datatype
[07:00:59] <Boomtime> just give it the date object
[07:01:19] <Forbidd3n> so store it like new DateTime('2016-06-14'); in PHP
[07:01:19] <Boomtime> in the shell this shows up as ISODate
[07:01:27] <Boomtime> that sounds right for PHP
[07:01:49] <Forbidd3n> that won't be ISO I don't believe, but back to the comparison
[07:02:18] <Boomtime> what? ISO is a printed string format, it has absolutely nothing to do with language types
[07:02:20] <Forbidd3n> what would be incorrect such that it returns all results and not just the condition results?
[07:02:41] <Boomtime> if you used a real language type then you could use these -> https://docs.mongodb.com/manual/reference/operator/aggregation-date/
[07:02:51] <Forbidd3n> ok I will modify it to PHP DateTime object
[07:03:11] <Boomtime> and calculations like $today - $yesterday would give sensible answers
[07:03:34] <Boomtime> or in aggregation: $subtract: ['today', 'yesterday']
[07:03:43] <Boomtime> anyway...
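(In mongo-shell terms, the point is to store a real BSON date rather than a formatted string, so range queries and the date operators work on actual dates. A minimal sketch with an illustrative collection name:)

    db.schedules.insert({ port: "P1", date: new Date("2016-06-14") })   // shown as ISODate(...) in the shell
    db.schedules.find({ date: { $gte: new Date("2016-06-01"), $lte: new Date("2016-06-30") } })
    // BSON dates also work with $subtract and the aggregation date operators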
[07:03:53] <Forbidd3n> gotcha, I will modify it, because I agree with you on that
[07:04:24] <Boomtime> -> "schedules.date"
[07:04:50] <Boomtime> is "schedules" an array? or is "date" an array? or are both of these single scalar values?
[07:05:25] <Forbidd3n> it is a value inside of {schedules: [{port:'P1','date'=>'D1'},{port:'P2',date:'D2'}]} and so on
[07:05:40] <Forbidd3n> that is a sample of what schedules looks like
[07:05:41] <Boomtime> heh
[07:05:53] <Boomtime> the $match probably doesn't do what you think it does
[07:06:09] <Boomtime> remember, you are matching a _document_ not an entry in the array
[07:06:40] <Boomtime> thus any _document_ that can satisfy the $gte and independently satisfy the $lte will pass
[07:06:44] <Forbidd3n> wouldn't schedules.date be all documents that have a date that matches
[07:07:08] <Forbidd3n> correct, it will return the entire document, if any of the dates pass
[07:07:19] <Boomtime> yep, and how many dates does each _document_ have that can be used to try for satisfying each _independent_ clause?
[07:07:21] <Forbidd3n> the issue is the condition in the filter is returning all results
[07:07:39] <Boomtime> yep, i'm not surprised
[07:07:52] <Boomtime> how many dates does each _document_ have that can be used to try for satisfying each _independent_ clause?
[07:08:01] <Forbidd3n> shouldn't condition in the filter filter out the ones that are gte and lte what is passed
[07:08:14] <Boomtime> yep, that is exactly what it is doing
[07:08:32] <Boomtime> how many dates does each _document_ have that can be used to try for satisfying each _independent_ clause?
[07:08:40] <Forbidd3n> but it is returning dates into 2016-07-01 when the end date is lte 2016-06-30
[07:08:48] <Boomtime> yep
[07:08:50] <Boomtime> how many dates does each _document_ have that can be used to try for satisfying each _independent_ clause?
[07:09:04] <Forbidd3n> Boomtime: I don't understand your question, sorry
[07:09:11] <Boomtime> ok.. i'll give an example
[07:09:44] <Boomtime> $match: { number: { $gte: 4, $lte: 2 } }
[07:09:55] <Boomtime> is it possible for any document to pass that?
[07:10:42] <Forbidd3n> no gte 4 and lte 2 - then it would be an and and nothing is between them
[07:10:48] <Boomtime> this matches -> { number: [ 0, 5 ] }
[07:10:49] <Forbidd3n> gte 2 and lte 4 would though
[07:11:02] <Forbidd3n> if it is an or search, then yes
[07:11:10] <Boomtime> as written
[07:11:23] <Boomtime> this matches -> { number: [ 0, 5 ] }
[07:11:28] <Forbidd3n> correct
[07:11:48] <Boomtime> do you see why your $match matches everything then?
[07:12:23] <Forbidd3n> yes, because I was mistaken by the gte and lte being an and comparison
[07:12:36] <Boomtime> they are an 'and' - but at the document level
[07:12:55] <Forbidd3n> so at the condition filter level they are not?
[07:12:59] <Boomtime> the question as posed for each clause is "does this _document_ match?"
[07:13:18] <Boomtime> you are finding documents, not individual bits
[07:13:21] <Forbidd3n> the answer would be yes, all dates would match
[07:13:39] <Boomtime> no, all _documents_ would match, in your specific case
[07:13:46] <Boomtime> you need this -> https://docs.mongodb.com/manual/reference/operator/query/elemMatch/
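(A minimal illustration of the distinction, using Boomtime's { number: [ 0, 5 ] } example:)

    // separate clauses can be satisfied by different array elements,
    // so { number: [ 0, 5 ] } matches even though no single value is >= 4 and <= 2
    db.test.find({ number: { $gte: 4, $lte: 2 } })
    // $elemMatch requires one element to satisfy every clause at once
    db.test.find({ number: { $elemMatch: { $gte: 2, $lte: 4 } } })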
[07:13:52] <Forbidd3n> I thought the filter condition search objects in schedules for date match
[07:14:02] <Boomtime> it DOES
[07:14:08] <Boomtime> that is exactly what it is doing
[07:14:14] <Forbidd3n> elemMatch only brings back one result I thought
[07:14:24] <Boomtime> huh?
[07:14:47] <Boomtime> one result of what? a document only needs to match once
[07:14:49] <Forbidd3n> so with elemMatch how would I do a gte and lte range?
[07:15:04] <Boomtime> there is an example of that right there, please read it
[07:16:18] <Forbidd3n> let me give it a try, so I don't need condition in this then
[07:22:35] <Forbidd3n> Boomtime: still pulling back results not within range
[07:23:18] <Forbidd3n> Boomtime: this is my json encoded string -
[07:23:18] <Forbidd3n> {"schedules":{"$elemMatch":{"date":{"$gte":"2016-06-01T00:00:00+00:00","$lte":"2016-06-30T00:00:00+00:00"}}}}
[07:24:30] <Forbidd3n> Boomtime: still around?
[07:25:23] <Boomtime> isn't "schedules" the array?
[07:25:43] <Boomtime> so it will elemMatch for the "date" field which is scalar anyway...
[07:26:43] <Boomtime> huh, it's an infix operator.. ok, whatever, can you paste a sample document?
[07:26:47] <Forbidd3n> it is an array of objects correct
[07:29:06] <Forbidd3n> Boomtime: here is a sample document from MongoDB - http://pastebin.com/raw/KbC2931y
[07:29:42] <Forbidd3n> if I do a query for date from 6-6/6-9 I only want those schedules returned on the result
[07:29:57] <Forbidd3n> not 6-10 or higer
[07:29:59] <Forbidd3n> higher
[07:31:18] <Boomtime> wait what?
[07:31:25] <Boomtime> this document is a match
[07:31:41] <Boomtime> and you seem to know it's a match.. i've said from the start that you match _documents_
[07:31:51] <Forbidd3n> no I exported a sample list from the mongo
[07:31:59] <Boomtime> if you want to re-form the document stream to break it up, you will need to run a $unwind
[07:32:05] <Forbidd3n> that is a sample set that I want to query and match
[07:32:30] <Boomtime> how many documents do you think are in the sample you provided?
[07:32:39] <Forbidd3n> Boomtime: sorry I'm getting confused
[07:32:55] <Forbidd3n> with that sample I want to return dates from 6/6 through 6/9
[07:33:02] <Boomtime> that last pastebin link, how many documents are in it? (by your count)
[07:33:31] <Forbidd3n> 1 document, schedules is an array of 9 objects
[07:33:37] <Boomtime> perfect
[07:33:49] <Boomtime> the $match that you run matches _documents_ - this one document is a match
[07:33:56] <Forbidd3n> I need it to return the one document with 4 objects in schedules
[07:34:02] <Boomtime> thus you get this document in the return value
[07:34:08] <Forbidd3n> yes, but it was returning all schedules
[07:34:23] <Forbidd3n> I just want schedules that matched the condition
[07:34:23] <Boomtime> of course, because that is all part of the _document_
[07:34:30] <Boomtime> right, that is a different question
[07:34:40] <Forbidd3n> that is the question I am trying to get answered :)
[07:34:54] <Forbidd3n> the result was returning the document before
[07:34:54] <Boomtime> https://docs.mongodb.com/manual/reference/operator/aggregation/unwind/
[07:35:18] <Boomtime> the result will ALWAYS return documents - mongodb is a document store
[07:35:30] <Boomtime> but you can affect the content of those documents
[07:35:36] <Boomtime> https://docs.mongodb.com/manual/reference/operator/aggregation/unwind/
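(One way to reshape the result so only the matching schedule entries survive is the $unwind route linked above — a sketch only, with field names from the chat and the string dates currently stored:)

    db.coll.aggregate([
        { $unwind: "$schedules" },
        { $match: { "schedules.date": { $gte: "2016-06-06T00:00:00+00:00", $lte: "2016-06-09T00:00:00+00:00" } } },
        { $group: { _id: "$_id", schedules: { $push: "$schedules" } } }
    ])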
[07:36:11] <Forbidd3n> not true. I use filter/condition on match that equals a date and it works great
[07:36:41] <Forbidd3n> Boomtime: like this example - http://stackoverflow.com/questions/3985214/retrieve-only-the-queried-element-in-an-object-array-in-mongodb-collection
[07:36:47] <Forbidd3n> MongoDB 3.2 Update
[07:36:48] <Boomtime> yes, true, it still returns a document - you just modified what the document contains
[07:36:57] <Boomtime> as a separate filter condition
[07:37:08] <Forbidd3n> that works if I have $eq as the condition
[07:37:08] <Boomtime> the filter didn't even need to have any relationship to the query
[07:37:47] <Boomtime> it's a separate operation, and that's the same as for aggregation - it can't magically read your mind
[07:37:49] <Forbidd3n> correct, I need to modify what the document returns on the $gte and $lte condition
[07:38:10] <Boomtime> that's the question you wanted answered all along
[07:38:15] <Forbidd3n> ye
[07:38:16] <Forbidd3n> yes
[07:38:18] <Boomtime> https://docs.mongodb.com/manual/reference/operator/aggregation/unwind/
[07:38:40] <Forbidd3n> why would I have to unwind it, which puts all elements into its own object
[07:38:40] <Boomtime> however, you reckon you can do it with a query - be my guest, please
[07:39:07] <Forbidd3n> Boomtime: I was asking why the gte and lte isn't working on condition
[07:39:15] <Boomtime> it's working perfectly
[07:39:40] <Forbidd3n> the condition in the filter is pulling results outside of the range
[07:40:28] <Boomtime> no it isn't
[07:40:31] <Boomtime> show me an example
[07:41:01] <Boomtime> provide an example of a document and a $match or a find that matches when it shouldn't
[07:41:42] <Forbidd3n> that sample on SO works if I have a condition of $eq
[07:42:14] <Forbidd3n> it returns the right documents with limited schedules based on filter conditoin
[07:42:16] <Forbidd3n> condition
[07:42:39] <Boomtime> yep
[07:43:25] <Forbidd3n> if $gte and $lte are and comparisons then it isn't working with them in the filter condition
[07:44:26] <Boomtime> the one example you provided so far is working correctly - provide an example that isn't please
[07:45:16] <Forbidd3n> Boomtime: this $filter example in the docs works with $gte but I can't get it to work with $gte and $lte - https://docs.mongodb.com/master/reference/operator/aggregation/filter/#exp._S_filter
[07:46:53] <Boomtime> hmm.. i missed $filter in the pastebin you provided earlier.. gimme a sec
[07:47:15] <Boomtime> of course, the $match probably needs to be fixed first
[07:48:10] <Forbidd3n> the match pulls back the document with all schedules because it matches, the filter doesn't seem to be working
[07:48:17] <Forbidd3n> when using gte and lte
[07:50:20] <Boomtime> right, gotcha, i thought the only issue was in the $match.. let me re-try this
[07:50:38] <Forbidd3n> sorry if I was confusing it
[07:52:46] <Forbidd3n> I think I have it working
[07:53:21] <Forbidd3n> yeah I have it working Boomtime
[07:53:51] <Boomtime> ok, i take it you modified the filter condition - it looks like an odd construction, but i haven't checked it fully yet
[07:54:32] <Forbidd3n> yeh, I wrapped them in an $and
[07:55:33] <Boomtime> that is not what i was expecting..
[07:55:46] <Boomtime> but whatever works
[07:55:59] <Forbidd3n> it works great actually - thanks and sorry for confusion
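(For the record, the working shape described here — the per-element range wrapped in $and inside $filter — looks roughly like this; collection name and dates are illustrative:)

    db.coll.aggregate([
        { $match: { schedules: { $elemMatch: { date: { $gte: "2016-06-01T00:00:00+00:00", $lte: "2016-06-30T00:00:00+00:00" } } } } },
        { $project: { schedules: { $filter: {
            input: "$schedules",
            as: "s",
            cond: { $and: [
                { $gte: [ "$$s.date", "2016-06-01T00:00:00+00:00" ] },
                { $lte: [ "$$s.date", "2016-06-30T00:00:00+00:00" ] }
            ] }
        } } } }
    ])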
[09:03:48] <chris|> does anybody know what the non-deprecated version of autoIndexId=false is?
[09:16:06] <lipiec> I would like to install newest mongodb (3.2.7) on Debian Wheezy, but current package of mongodb-org-server is not installable on this release.
[09:16:37] <lipiec> Package from official mongodb repository requires libc6 with version >= 2.14
[09:17:06] <lipiec> But the newest version for Debian Wheezy to install is:
[09:17:23] <lipiec> Version table:
[09:17:23] <lipiec> 2.13-38+deb7u11 0
[09:17:23] <lipiec> 500 http://security.debian.org/ wheezy/updates/main amd64 Packages
[09:19:05] <Calinou> then you'll need to compile from source
[09:19:24] <Calinou> (or upgrade to Debian 8 already, it's been out for a year)
[09:21:03] <lipiec> Thanks, but what's the point then in having a release for Debian Wheezy on the official mongodb site?
[09:21:24] <lipiec> So I cannot even install it on this release?
[09:22:15] <Calinou> ah
[09:24:10] <kurushiyama> lipiec You may throw stones at me, but what are you going to do when you need support? (SLAs and stuff comes to my mind). Afaik, Debian is not supported, but just provided.
[09:27:14] <be_mtk> hello
[09:27:56] <be_mtk> I wanted to ask you a question about replication. I have 3 hosts running replicaset name 'ReplicaA'
[09:28:23] <be_mtk> I've configured 4th one with the same replicaset name
[09:28:35] <lipiec> kurushiyama: I am just curious, because to me it just seems like someone built this package on a newer Debian release.
[09:29:11] <lipiec> kurushiyama: So mongodb does not even require libc6 in a newer version; the package was just built, for example, on Debian Jessie.
[09:29:30] <be_mtk> what can happen, if the 4th host will already have some collections different from what's on primary?
[09:29:40] <be_mtk> will all collections get merged and replicated?
[09:29:43] <kurushiyama> lipiec Well, tbh, when it comes to packaging, quite some things could be improved. I have a refactoring of the RPM packages on my todo for more than half a year now....
[09:29:59] <kurushiyama> be_mtk Dream of it.
[09:30:18] <kurushiyama> be_mtk How should MongoDB decide on conflicts?
[09:30:33] <be_mtk> or just primary host data overwrites all collections on 4th host
[09:30:37] <be_mtk> ?
[09:31:22] <kurushiyama> be_mtk Actually, I have never tried such a stunt.
[09:31:47] <be_mtk> well, if you take as an assumption that there are no conflicts neither in db names nor in collections
[09:32:56] <kurushiyama> be_mtk My guess is that the data in collections existing on the primary gets overwritten and the data in collections that do not exist on the other nodes... Well, who knows. But you are aware of the fact that a 4 node replset is a bad idea, in itself?
[09:33:21] <be_mtk> no, why?
[09:33:30] <be_mtk> I had plan to eventually have six nodes
[09:33:46] <kurushiyama> be_mtk If your data is that cheap to you, you might as well delete it and have a proper initial sync. If not, merge it before adding the node.
[09:33:47] <be_mtk> and remove 3 old nodes
[09:34:23] <be_mtk> my case is that I have 3 nodes running out of space
[09:34:36] <be_mtk> so I wanted to add 1 node with bigger storage
[09:34:39] <be_mtk> check if it works
[09:34:50] <be_mtk> add another two nodes with bigger storage
[09:35:01] <be_mtk> and remove old 3 nodes with small storage
[09:35:13] <be_mtk> so that I can grow disk capacity without downtime
[09:35:40] <kurushiyama> Sorry, but what is so damn hard on reading the docs : https://docs.mongodb.com/manual/core/replica-set-architectures/#deploy-an-odd-number-of-members
[09:36:22] <kurushiyama> be_mtk An even number of replica set members will actually give you less security than the same number -1
[09:36:52] <kurushiyama> be_mtk And increasing the number of members does _not_ increase the available disk space for the databases.
[09:37:08] <kurushiyama> be_mtk A replica set is roughly comparable to a RAID.
[09:37:20] <be_mtk> hold on
[09:37:22] <kurushiyama> be_mtk With Mirroring only.
[09:37:33] <be_mtk> yes that's true
[09:38:00] <be_mtk> but please consider 3 hosts having 1 TB disks running single replicaset
[09:38:18] <kurushiyama> be_mtk Yes. Payload? 1TB
[09:38:26] <be_mtk> more or less
[09:38:38] <be_mtk> and it's soon going to be full
[09:38:46] <kurushiyama> be_mtk Nope. That is the maximum amount the replica set will be able to hold.
[09:39:22] <kurushiyama> be_mtk So, in order to increase the disk space, you correctly found out that you can do a rolling expansion.
[09:39:44] <kurushiyama> be_mtk BUT: Using a node already bearing data for this is a gamble at best.
[09:41:00] <be_mtk> so I should have a node with only the 'local' database? or should I remove local as well before joining the replicaset?
[09:41:02] <Zelest> It's still not possible to run 6 replica nodes, right? That is, you need/should have an odd number of nodes?
[09:41:22] <kurushiyama> be_mtk And for a rolling migration, for each node added, I would remove one node right away, in order to always have an odd number of nodes. Reason: The initial sync of 1TB is going to take a while, and you do not want to risk getting into secondary state.
[09:42:30] <kurushiyama> Zelest You _can_ run a replica set with 6 nodes. However, when 3 nodes are down, your cluster relegates to secondary state, the same as within a 5 member replica set.
[09:42:53] <Zelest> Ahh
[09:43:08] <Zelest> Yeah, that makes sense :)
[09:43:26] <kurushiyama> Zelest BUT since one node more is involved, chances slightly increase for that situation. So in general, you are better off with an odd number, even when it is smaller than the even number.
[09:44:06] <Zelest> yeah
[09:44:10] <be_mtk> yes
[09:45:03] <kurushiyama> Bottom line: You pay more for a 6 member replica set, and the only "benefit" you have is that this setup increases the probability that your replica set becomes unusable (from the applications point of view)
[09:46:23] <kurushiyama> be_mtk Hence: Add a new member, and as soon as you see it added, remove one of the old secondaries .
[09:46:52] <kurushiyama> be_mtk Wait for the new member to finish the initial sync, rinse, repeat.
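(A sketch of that rolling swap from the mongo shell on the primary; host names are placeholders:)

    rs.add("newbig1.example.net:27017")       // add the new, larger member
    rs.remove("oldsmall1.example.net:27017")  // retire an old secondary right away to keep the member count odd
    rs.status()                               // wait for the new member to report SECONDARY before the next swap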
[09:47:00] <be_mtk> thanks
[09:49:52] <be_mtk> I have also one more question about the performance - when having host with 64GB of ram, is it a good strategy to increase tmpfs to sth like ~8/16gigs?
[09:50:29] <be_mtk> I mean mount /dev/shm with defaults,size=8G option
[09:52:47] <BurtyB> be_mtk, depends what you're doing with the box I guess, but I don't think mongodb uses it
[09:53:28] <be_mtk> I'd use that box for mongo only
[09:53:32] <kurushiyama> be_mtk I guess no, since both WT and MMAP use the FS-cache, iirc.
[09:54:21] <bassory99> hi there
[09:54:34] <be_mtk> I was inspired by this article http://edgystuff.tumblr.com/post/49304254688/how-to-use-mongodb-as-a-pure-in-memory-db-redis
[09:54:42] <bassory99> new to mongodb and have a question regarding sharding/replication and objectid
[09:54:43] <kurushiyama> be_mtk Dont
[09:54:45] <be_mtk> (I'll be using MMAPv1)
[09:54:55] <kurushiyama> be_mtk DONT
[09:55:43] <kurushiyama> be_mtk wt has document level locking. That alone is worth it with highly volatile data and/or high input.
[09:55:46] <bassory99> should i use ObjectId as reference key when my database deployment will use replication and sharding ?
[09:55:54] <kurushiyama> bassory99 No
[09:55:56] <be_mtk> why?
[09:56:24] <be_mtk> it's kind of a general rule to fully delegate everything to mongo process?
[09:56:28] <kurushiyama> bassory99 https://docs.mongodb.com/manual/tutorial/choose-a-shard-key/
[09:56:40] <kurushiyama> be_mtk ?!?
[09:57:22] <kurushiyama> be_mtk We are talking of memory mapped files. Yes, it makes good sense to let the application which accesses said mapping deal with it.
[09:57:44] <bassory99> @kurushiyama thanks! i guessed so because of the logic of creation of ObjectId. Wanted to insert some basic information instead of the objectId, but i’m still confronted to the UPDATEs that will occur on the referenced object
[09:58:04] <be_mtk> Then why OS can't get tuned like that?
[09:58:27] <kurushiyama> be_mtk Well, assume you have a Ferrari
[09:59:09] <kurushiyama> be_mtk Now, your garage wants to add a trailer hitch.
[09:59:43] <kurushiyama> be_mtk Will there be uses? Yeah, in edge cases. Does it make sense? Your call. I do not think so.
[10:00:36] <bassory99> @kurushiyama for the shard key, i wanted to use the objectId
[10:00:46] <kurushiyama> bassory99 See above
[10:01:41] <kurushiyama> bassory99 The thing about ObjectIds is not that they are endorsed or suggested as "_id". They are the _fallback_
[10:02:03] <kurushiyama> bassory99 9 out of 10 times there are better values one could use for _id
[10:02:34] <bassory99> @kurushiyama : you mean as shard key
[10:02:45] <kurushiyama> bassory99 No. as _id
[10:03:08] <kurushiyama> bassory99 With a proper _id, you might use the field as your shard key. But not with ObjectId.
[10:03:31] <bassory99> ok. but then what is your best strategy for referencing one object into another when you want to use sharding and replica set?
[10:03:58] <kurushiyama> bassory99 Depends on your use case.
[10:04:13] <kurushiyama> bassory99 There is no "one rule that fits them all"
[10:05:06] <bassory99> let’s assume two tables: Parameter (10 records), and Items (+10.000.000 records)
[10:05:35] <bassory99> Parameter is referenced in Items collection
[10:06:04] <bassory99> i want to use an id for this reference
[10:06:30] <bassory99> instead of including all the parameter object in the item object
[10:07:17] <bassory99> from what i understand from your link, i can use my shard_key as reference id
[10:14:59] <kurushiyama> bassory99 That is too much of a commonplace.
[10:15:11] <kurushiyama> bassory99 each of the items can have 10 params?
[10:15:53] <bassory99> each item can have many parameters stored in a list
[10:22:18] <kurushiyama> bassory99 1:many or 1:some
[10:22:48] <bassory99> not sure to get your point
[10:53:18] <kurushiyama> bassory99 Well, depending on how many params there can be per item, you might be better off with embedding.
[10:53:54] <kurushiyama> bassory99 If they are potentially infinite, you are not. But if there will only be a couple of dozens, you _might_.
[10:54:33] <bassory99> but the issue in the embedding way, is that in case the parameter name for instance is updated, i will have to search/update all the +10.000.000 records
[10:55:10] <kurushiyama> bassory99 You would update a param in ALL records?
[10:55:43] <bassory99> not necessarily all records, i mean only the items that use the updated parameter
[11:14:21] <beaver> ahah
[11:30:48] <beaver> hello, there is a PPA for Debian Jessie ?
[11:32:36] <beaver> i read it --> https://docs.mongodb.com/manual/tutorial/install-mongodb-on-debian/
[11:33:02] <beaver> and I find no information for Jessie
[11:36:25] <beaver> I do not want to compile on every update
[11:36:58] <beaver> sorry for my bad english, i'm french guys
[11:46:44] <Derick> you should be able to install the wheezy one, or, use the generic binaries
[11:49:52] <beaver> thank you Derick
[11:59:53] <Zelest> o/ php_
[12:02:46] <Derick> Zelest: ? :-)
[12:02:56] <Zelest> uhm?
[12:03:23] <Derick> oh
[12:03:26] <Zelest> but i see now he timed out.. so he's probably afk :)
[12:27:51] <chris|> has anyone seen this issue when restoring from a filesystem snapshot? WiredTiger (-31803) [1465904818:579618][253:0x7f3204294cc0], txn-recover: Recovery failed: WT_NOTFOUND: item not found
[12:30:30] <kurushiyama> chris| Did you stop the mongod for the FS snapshot?
[12:31:07] <chris|> no, should fsync not be sufficient?
[12:31:40] <kurushiyama> chris| fsyncLock you mean?
[12:31:44] <chris|> yes
[12:33:06] <kurushiyama> chris| Well, it used to be with mmapv1... Let me check. Personally, I do shutdown, snapshot, restart, mount snapshot, compress and send to destination, delete snapshot.
[12:33:53] <kurushiyama> chris| Did you just keep the snapshot, by any chance?
[12:34:54] <chris|> no, the snapshot gets deleted after the backup was taken
[12:38:02] <kurushiyama> chris| Ah, which version do you use?
[12:38:28] <chris|> backup was taken on 3.2.5 and restored against both 3.2.5 and 3.2.7 with the same results
[12:39:11] <kurushiyama> chris| Then I hope it is a recovery test only...
[12:39:58] <jokke> hey. i'm monitoring data from db.runCommand({ serverStatus: 1 }). I'm seeing vastly different values for opcounters and metrics.document without using any bulk insertion (i'm measuring inserts at this point)
[12:40:03] <chris|> it is, but it would still be good to know if this is recoverable or at least what might have gone wrong, because from my perspective, this backup is broken
[12:40:19] <kurushiyama> As fsyncLock is explicitly good for ensuring backup consistency. What really makes me wonder is that WT does not restore its internal last known good.
[12:40:32] <kurushiyama> chris| I tend to agree.
[12:41:48] <kurushiyama> jokke Can you pastebin and mark the according lines?
[12:43:31] <kurushiyama> chris| What I usually do is to take the tar file I create, create SHA-1 sums for the files in there and compare them to the files in the snapshot. That should not be too complicated.
[12:44:12] <kurushiyama> chris| Depending on the programming language, that should be easy to do with buffered IO and streams.
[12:46:06] <kurushiyama> chris| Or, if you have the space, untar the backup (keeping the tar file, ofc), create the checksums and compare them to the original. That is easy even with a simple shell script.
[12:48:24] <jokke> kurushiyama: here's the mongos output: https://p.jreinert.com/tmt/#n36 shard1: https://p.jreinert.com/FhQgf/#n107 shard2: https://p.jreinert.com/3vJSH/#n103
[12:48:47] <jokke> there are a few secs between the pastes
[12:48:56] <jokke> so increasing numbers is normal
[12:49:05] <jokke> (i'm currently inserting docs)
[12:50:25] <jokke> but for shard2 for example: opcounters.insert: 309933236, metrics.document.inserted: 619866052
[12:51:10] <jokke> its almost exactly doubled
[12:52:37] <kurushiyama> jokke Iirc, opcounters are from uptime, whereas documents.inserted is persistent
[12:56:44] <jokke> kurushiyama: mhm that shouldn't matter, since i'm calculating the delta
[12:56:58] <jokke> between two subsequent outputs
[12:57:17] <jokke> and i still see it doubled
[12:59:38] <jokke> https://p.jreinert.com/m-dqTyki/
[13:00:01] <kurushiyama> context?
[13:00:46] <jokke> left: opcounters.insert right: metrics.document.inserted
[13:01:16] <jokke> always the delta between two subsequent calls
[13:01:54] <jokke> and the graph shows x per second
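(For context, both counters come from serverStatus; a minimal shell check of the raw values jokke is graphing — the per-second deltas are computed outside the shell:)

    var s = db.serverStatus()
    printjson({ opInserts: s.opcounters.insert, docInserts: s.metrics.document.inserted })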
[13:17:33] <kurushiyama> jokke Sorry, I do not get a grasp on that.
[13:43:36] <shayla> Hi guys, can anyone help with this simple query : http://pastebin.com/rDkGNLmS
[13:43:40] <shayla> I get SyntaxError: Unexpected token {
[13:43:46] <shayla> But I can't understand where i'm wrong :(
[13:48:59] <cheeser> shayla: $and takes an array
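(Schematically, with illustrative field names, cheeser's point is:)

    // $and takes an array of clauses, not a bare document
    db.coll.find({ $and: [ { a: 1 }, { b: { $gt: 2 } } ] })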
[15:24:32] <Forbidd3n> Is it possible to group all documents by a property and merge all sub properties into an array?
[15:26:38] <Forbidd3n> [{'author':'John','books':[{'B1','B2'}]},{'author':'John','books':[{'B3','B4'}]}] and return a result of [{'author':'John','books':[{'B1','B2','B3','B4'}]}]
[15:31:10] <kurushiyama> Forbidd3n Sure. Unwind the original array, then unwind the books array. From there, it is pretty straightforward.
[15:31:23] <Forbidd3n> I think I found a solution with $group
[15:31:27] <Forbidd3n> and aggregate
[15:33:28] <kurushiyama> Forbidd3n [{'author':'John','books':[{'B1','B2'}]},{'author':'John','books':[{'B3','B4'}]}] these are the documents or an array inside a doc?
[15:33:43] <Forbidd3n> that was a sample data snippet
[15:33:49] <Forbidd3n> each are documents
[15:40:43] <kurushiyama> Forbidd3n db.books.aggregate({$unwind:"$books"},{$group:{"_id":"$author","books":{$addToSet:"$books"}}}), for example should do the trick.
[15:47:50] <kurushiyama> Forbidd3n You might want to add a $sort stage if the order of books matters.
[15:48:10] <Forbidd3n> Gotcha. I have it working with just the group, not the unqind
[15:48:12] <Forbidd3n> unwind
[15:50:17] <kurushiyama> Forbidd3n Huh?
[15:50:31] <Forbidd3n> I have it working with just the $group, no need for $unwind
[15:50:35] <kurushiyama> Forbidd3n How would that look like?
[15:55:14] <saml> aggregate({$group:{_id:'$author', books: {$push:{books: '$books'}}}})
[15:56:39] <kurushiyama> saml But that would result in {'author':'John','books':[{'B1','B2'},{'B3','B4'}]} instead of {'author':'John','books':[{'B1','B2','B3','B4'}]}, no?
[15:56:49] <saml> yes
[15:57:04] <kurushiyama> saml And the latter was requested.
[15:57:10] <saml> i know
[15:57:21] <saml> it worked for him for good
[15:58:30] <saml> {'B3','B4'} waht is this in javascript?
[15:59:30] <kurushiyama> saml Good question. was in the original question. Nice spot.
[16:06:49] <Forbidd3n> kurushiyama: sorry I didn't see your last message
[16:06:58] <Forbidd3n> saml showed you
[16:07:27] <Forbidd3n> saml: I was just giving a pseudo sample data
[16:07:45] <kurushiyama> Forbidd3n Np. But sample data matters.
[16:07:50] <saml> thank you for a pseudo sample data
[16:07:52] <saml> for free
[16:08:08] <saml> pseudo free pseudo sample data
[16:08:11] <Forbidd3n> kurushiyama: agree. I will make it coding correct next time
[16:08:13] <kurushiyama> saml made my day!
[16:08:44] <saml> {1, 2, 3} opens a new block, has a comma expression (or was it a statement), and ends the block
[16:09:02] <saml> {1, 2, 3} => (1, 2, 3) in temporary block => 3
[16:11:01] <kurushiyama> saml Neither my Greek nor my Chinese is well enough ;)
[16:12:35] <Forbidd3n> saml: here so you will leave me alone about the sample data - :P - [{"author":"John","books":["B1","B2"]},{"author":"John","books":["B3","B4"]}]
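(With that cleaned-up sample, the two pipelines discussed above produce different shapes — a sketch, collection name illustrative:)

    // $unwind + $addToSet merges the per-document arrays into one flat set (order not guaranteed):
    db.books.aggregate([
        { $unwind: "$books" },
        { $group: { _id: "$author", books: { $addToSet: "$books" } } }
    ])
    // => { "_id": "John", "books": [ "B1", "B2", "B3", "B4" ] }

    // $push without $unwind keeps the original arrays nested:
    db.books.aggregate([
        { $group: { _id: "$author", books: { $push: "$books" } } }
    ])
    // => { "_id": "John", "books": [ [ "B1", "B2" ], [ "B3", "B4" ] ] }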
[17:02:40] <n1colas> Hello
[17:11:59] <kurushiyama> Hello n1colas
[17:14:46] <jgornick> Hey folks, does anyone know if it's possible to $pull array items out of an array nested inside another array item subdocument?
[17:27:24] <StephenLynx> I am almost sure it is.
[17:27:42] <StephenLynx> however, I personally dislike making models that complex.
[17:28:14] <StephenLynx> because those queries usually are hard to comprehend and you are unable to use a few features when doing that.
[17:28:23] <StephenLynx> rather than having a separate collection.
[17:28:57] <kurushiyama> Aye
[17:29:47] <kurushiyama> StephenLynx Did you hear of that funky Mongoose bug, authenticating remote hosts for each _connection_ ? ;)
[17:30:18] <StephenLynx> kek
[17:30:21] <StephenLynx> no, do tell me.
[17:30:40] <jgornick> For example, here's my data and query with error: https://gist.github.com/jgornick/eace131a44a2b594d51f1b3d9f94e6ca
[17:30:58] <jgornick> I want to remove all answers whose q value is 2.
[17:31:40] <kurushiyama> StephenLynx Well, nothing more to describe. Each time a connection was added to the pool, the remote host was authenticated, again.
[17:32:04] <StephenLynx> jgornick, $ expects the query operator to be there.
[17:32:19] <StephenLynx> so it will give an index for $ to use.
[17:32:29] <StephenLynx> since you have no query, $ can't do anything.
[17:33:23] <StephenLynx> kurushiyama, what exactly does this authentication process consist of?
[17:33:26] <jgornick> StephenLynx: What do you mean no query? Isn't the first argument in the update the query?
[17:33:33] <kurushiyama> StephenLynx SSL authentication?
[17:33:35] <StephenLynx> {}
[17:33:41] <StephenLynx> no query here
[17:33:43] <StephenLynx> kurushiyama, ah.
[17:33:54] <StephenLynx> oh lawdy
[17:34:00] <kurushiyama> StephenLynx _Extremely_ costly ssl authentication.
[17:34:13] <StephenLynx> classic mongoose
[17:34:15] <kurushiyama> And I have seen pool sizes beyond 1k...
[17:34:15] <jgornick> StephenLynx: What would I supply there?
[17:34:25] <kurushiyama> jgornick Well, a query?
[17:34:28] <jgornick> :)
[17:34:37] <StephenLynx> jgornick, the query that will match an element on the array
[17:34:38] <jgornick> I want to update all documents.
[17:34:47] <kurushiyama> jgornick Wrong
[17:34:55] <kurushiyama> jgornick Think again
[17:35:07] <StephenLynx> eh
[17:35:17] <StephenLynx> so you just want to clear the results arrays?
[17:35:29] <StephenLynx> why not just $set:{results:[]} ?
[17:35:43] <kurushiyama> jgornick You want to update all documents for which results.$.answers.q == 2 is true
[17:35:58] <jgornick> I want to remove answers in every documents results array where the answer property of q is 2.
[17:36:04] <StephenLynx> then thats your query
[17:36:09] <jgornick> kurushiyama: That's correct.
[17:36:29] <StephenLynx> results.awnsers:2
[17:36:44] <StephenLynx> answer*
[17:36:56] <jgornick> StephenLynx: results.answers.q:2, right?
[17:37:04] <StephenLynx> ah
[17:37:05] <StephenLynx> true
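(The update being sketched out here, roughly — the collection name is hypothetical, the field names come from the gist discussion:)

    db.surveys.update(
        { "results.answers.q": 2 },
        { $pull: { "results.$.answers": { q: 2 } } },
        { multi: true }
    )
    // note: the positional $ resolves to the first matching "results" element per document,
    // which is the limitation that comes up below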
[17:37:20] <kurushiyama> StephenLynx Can you give me a hand? I am investigating an issue I have with the node driver.
[17:37:29] <StephenLynx> i dunno, honestly, your model is really complex
[17:37:36] <StephenLynx> kurushiyama, sure
[17:37:38] <StephenLynx> what is it
[17:38:39] <kurushiyama> StephenLynx Actually, it seems to open a gazillion connections and drops them pretty quickly. However, they seem to have a keepalive set, so as far as I can see, they are just around being dormant.
[17:38:54] <jgornick> kurushiyama and StephenLynx Thank you for the trips, definitely got me going in the right direction.
[17:39:01] <jgornick> s/trips/tips
[17:39:11] <StephenLynx> ill have to see your code.
[17:39:42] <kurushiyama> StephenLynx We are talking in orders of magnitude more connections on the client side than there are actually on the server side. Ah, not mine, customer of mine.
[17:39:49] <StephenLynx> welp
[17:39:57] <StephenLynx> RIP
[17:40:02] <kurushiyama> kek
[17:41:23] <jgornick> But I noticed that I had to run the query 2 times. I'm assuming it was because I was only querying for the first answers items that contained q: 2. However, I would like to remove _all_ answers where q is 2. Is my query wrong or the pull?
[17:42:01] <kurushiyama> My first impression was that actually multiple clients were created, which seems to fit the symptoms. But apparently that is not the case.
[17:42:28] <StephenLynx> go figure.
[17:42:31] <kurushiyama> jgornick That is not your query. That is your update.
[17:42:41] <StephenLynx> without looking at the code, I can't imagine what it is.
[17:42:50] <kurushiyama> StephenLynx Not easy to tell a client "Please do not lie to me".
[17:43:08] <StephenLynx> or "don't be dumb and actually know your shit"
[17:43:28] <StephenLynx> he might be sincere, but if he is incorrect, you still won't be able to actually know.
[17:43:31] <kurushiyama> StephenLynx Well, i would not go that far. I still have to earn some money here and there.
[17:44:17] <kurushiyama> StephenLynx True. Well, we increased the max FDs, so...
[17:44:57] <kurushiyama> jgornick db.foo.update(query,update,options)
[17:45:04] <kurushiyama> jgornick terminology matters.
[17:46:05] <kurushiyama> jgornick I can not help you more at the moment. My MongoDB installation is reinstalled, and I do not pollute prod instances. ;) Will see to it later.
[17:46:08] <jgornick> kurushiyama: Totally agree. I inferred the term "pull" as "updated", my bad.
[17:47:20] <kurushiyama> Jay, after just over 2h, `sudo port upgrade outdated` finished.
[18:04:06] <jgornick> StephenLynx: I don't know if you can help me to the finish line here, but when I ran the new update command, it only modified the first results.answers item and not all of them: https://gist.github.com/jgornick/eace131a44a2b594d51f1b3d9f94e6ca
[18:04:41] <jgornick> StephenLynx: Is there something I'm missing here to pull from all results.answers arrays?
[18:05:09] <StephenLynx> hm
[18:05:26] <StephenLynx> you saying it modified all docs, but only the first element on the array of each doc, is that it?
[18:06:14] <bfig> hello, I'm having an issue importing data. I have a bson file, if I bsondump it, It says '1 objects found' - I assume this means there was no error
[18:06:24] <bfig> but if I use mongorestore I get 16619 error FailedToParse Bad characters in value: offset:17
[18:06:46] <jgornick> StephenLynx: Yes, exactly.
[18:06:55] <StephenLynx> dunno.
[18:07:03] <StephenLynx> it might be a limitation like I told you before.
[18:07:04] <jgornick> StephenLynx: The find results below the update statement reflect what it changed.
[18:07:12] <StephenLynx> $ might not be able to modify all elements.
[18:07:30] <bfig> if I try to use python -m json.tool i get 'no JSON object could be decoded' -
[18:07:34] <StephenLynx> or you might be able to use it in a way it will do what you need.
[18:07:37] <StephenLynx> not sure here.
[18:09:10] <jgornick> StephenLynx: I'm reading this... "the positional $ operator acts as a placeholder for the first element that matches the query document, and"
[18:09:37] <jgornick> ... and also "The positional $ operator cannot be used for queries which traverse more than one array, such as queries that traverse arrays nested within other arrays, because the replacement for the $ placeholder is a single value"
[18:09:46] <jgornick> Meh.
[18:09:47] <StephenLynx> hm yeah
[18:09:56] <StephenLynx> it seems you will need to split into a separate collection
[18:22:40] <jgornick> StephenLynx: Thanks for the help.
[19:09:30] <hyades> Hi. I have a question on index intersection. If I have 3 indexes say {x:1}, {y:1}, and {z:1} and give a match on {x, y, z}. Will mongo use the intersection of these three indexes to return the documents?
[19:10:32] <hyades> I am using 3.2. The docs for 2.6 suggest that at max 2 can be used. But I am unable to figure out if there is any change for 3.2.
[20:13:34] <kurushiyama> hyades I do not think so. At least, I would not bet on that.
[20:15:06] <hyades> kurushiyama: if I have n features and any k of them could be used to form the query, how should I create indexes?
[20:15:36] <kurushiyama> hyades What?
[20:17:32] <hyades> kurushiyama: say I have total 10 fields x1,x2,x3..x10. And a query involves the use of matching on say any number of them. For example it could be on x1,x3,x5 or x3,x4,x7,x8. How do I index my db for such scenarios?
[20:18:09] <kurushiyama> hyades If the fields you query are arbitrary, most likely there is something wrong way earlier.
[20:19:00] <kurushiyama> hyades Could you describe a/your use case?
[20:21:49] <hyades> kurushiyama just a sec.
[20:27:08] <hyades> I have a doc structure like http://paste.ubuntu.com/17339352/ . Now the ones before the empty line are the x's. I need to aggregate on any of them depending on the query, and get the values of the counts (The keys below the empty line)
[20:27:11] <hyades> kurushiyama:
[20:28:47] <hyades> kurushiyama: so my queries could be getting the sum of the counts for say some combination of {name, website, ip}
[20:31:46] <kurushiyama> hyades use a compound index, then. queries for a subset can utilize it.
[20:32:40] <hyades> kurushiyama: I will have to create a lot of compound indexes in that case?
[20:34:21] <kurushiyama> hyades It depends. For your example, create one over {name:1,website:1,ip:1}. If you query for name only, it will be used. If you query for name, website, it will be used. Even when you query for name and ip, it will be used. But order matters. If you query for ip, website, it won't.
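(In shell terms, with an illustrative collection name:)

    db.coll.createIndex({ name: 1, website: 1, ip: 1 })
    db.coll.find({ name: "a" })                      // uses the index
    db.coll.find({ name: "a", website: "b" })        // uses the index
    db.coll.find({ name: "a", ip: "1.2.3.4" })       // uses the index (name prefix)
    db.coll.find({ website: "b", ip: "1.2.3.4" })    // cannot use this index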
[20:35:08] <hyades> kurushiyama: exactly. the queries won't exactly match that given pattern in this case
[20:35:39] <kurushiyama> hyades well, every technology has its restrictions.
[20:36:01] <kurushiyama> hyades You could dig into text search, if that is good for you.
[20:38:40] <hyades> kurushiyama: ah well. Thanks a lot!
[21:48:10] <idioglossia> I am designing an event system that logs interactions between some users on a platform
[21:48:19] <idioglossia> does it make the most sense to have each event be a separate document?
[21:48:34] <idioglossia> or have events be grouped into a single document by date
[22:08:19] <jgornick> Hey folks, is it possible to get an index in an embedded array for the first element found based on a query?
[22:09:31] <jgornick> Because I can't remove multiple items in an array, the only way I see I could do this is with $slice, but I need to get the index of an item in the array first. Each item in the array is an object with a date property. I'd like to find the first index for an item in the array that is older than 5 minutes.
[22:20:23] <sector_0> does an operation on one collection block an operation on a different collection, assuming both are in the same database?
[22:21:51] <jayjo_> I'm reading through the configure-ssl docs and am having some trouble. It says pretty verbosely at the beginning "A full description of SSL, PKI, & CAs is beyond the scope of this doc". Does anyone have a guide rec that would get me caught up in this regard, or is this intense study? I've gotten my own certs to work before, I just need a comprehensive guide/overview to tie it all together
[22:22:37] <sector_0> so for example...If I have a collection A and a collection B, does a query operation on B block, if there an ongoing insert operation on A?
[22:27:46] <sector_0> also I need help understanding when to use collections and when to use databases
[22:28:04] <sector_0> at what point should I break data into a separate database?
[23:06:14] <StephenLynx> sector_0, never
[23:06:29] <StephenLynx> unless under very special conditions
[23:06:43] <StephenLynx> or a different system that never interacts with the other system
[23:06:52] <StephenLynx> and it's usually good to have them on separate servers
[23:07:26] <StephenLynx> and when you do an action on one collection it won't lock another collection or anything
[23:07:34] <StephenLynx> by design mongo doesn't implement relations.