I absolutely agree with this point. Rationality in this sense is the truth-engine I named in the comment you replied to: it's built for a range of possible environments, but it can still fail in an unfortunate happenstance. That is very different from having an insane maintainer who is convinced the engine works when in fact it doesn't, not just on the actual test runs, but across the whole range of environments it's supposedly built for. When you are 90% sure that something will happen, you expect it NOT to happen 1 time in 10.
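The last sentence is just frequency calibration, which is easy to check numerically. A minimal Monte Carlo sketch (plain Python, illustrative only, not anything from the original discussion): simulate many predictions each held at 90% confidence and count how often the predicted event fails to occur.

```python
import random

random.seed(0)

# Simulate 100,000 predictions, each made at 90% confidence.
# An event "happens" when the draw lands below 0.9.
trials = 100_000
failures = sum(random.random() >= 0.9 for _ in range(trials))

# A well-calibrated 90% forecaster should be wrong about 1 time in 10.
print(f"failure rate: {failures / trials:.3f}")
```

The printed failure rate comes out near 0.10, which is exactly the point: a single miss is not evidence the engine is broken, but a 90% forecaster who misses far more or far less than 1 time in 10 is miscalibrated.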