I don’t know that the propositions being rejected are false any more than I know that the original proposition is true.
But I do know that in every case where I went through the long and laborious process of analyzing the proposition, it worked out the same as if I had just used the shortcut of assuming my original proposition was true. It’s not just some random belief; it’s field-tested. In point of fact, it’s been field-tested so much that I now know I would continue to act as if it were true even if evidence were presented that it was false. I would assume it’s more likely that the new evidence was flawed, until the preponderance of the evidence became overwhelming, or somebody supplied a new test that was nearly as good and provably correct.
That sounds pretty good, then. It’s not quite the Bayesian ideal: when you run across evidence that weakly contradicts your existing hypothesis, that should produce a weak reduction in confidence, rather than zero reduction. But overall, requiring a whole lot of contradictory evidence to overturn a belief that was originally formed from a lot of confirming evidence is right on the money.
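The Bayesian point above can be made concrete with a small sketch (the function name and the specific likelihood ratios here are my own illustrative choices, not anything from the discussion). Using Bayes’ rule in odds form, weakly contradictory evidence nudges a strongly held belief down only slightly, but repeated weak contradictions eventually overturn it:

```python
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Update a probability via Bayes' rule in odds form.

    likelihood_ratio = P(evidence | hypothesis) / P(evidence | not hypothesis).
    A ratio below 1 means the evidence weighs against the hypothesis.
    """
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# A strongly held, well-tested belief.
confidence = 0.99

# One piece of weakly contradictory evidence (ratio just under 1)
# should lower confidence a little, not leave it untouched.
confidence = bayes_update(confidence, 0.8)

# Only an accumulation of such evidence overturns the belief:
# after ~20 more weak contradictions, confidence falls below 0.5.
for _ in range(20):
    confidence = bayes_update(confidence, 0.8)
```

This matches the behavior described: a belief backed by lots of confirming evidence starts at high odds, so it rightly takes a preponderance of contradictory evidence to flip it, yet each contradiction still counts for something.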
Actually, though, I wanted to ask you another question: what specific analyses did you do to arrive at these conclusions?