It’s a truism around here that you can’t assign P=1 to anything, on pain of being unable to subsequently change your mind given new evidence. Here’s a post along those lines.
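Spelled out, the reason is a one-line Bayes’-rule derivation (standard notation, nothing assumed beyond P(E) > 0): once a hypothesis H is assigned probability exactly 1, no evidence E can ever move it.

```latex
% With P(H) = 1 we have P(\neg H) = 0, so for any evidence E with P(E) > 0:
P(H \mid E)
  = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
  = \frac{P(E \mid H)}{P(E \mid H) + 0}
  = 1.
% The posterior is pinned at 1 for every possible E, so no observation can
% ever revise the belief; the symmetric argument pins P = 0 in place too.
```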
Agreed that missing evidence doesn’t privilege any single alternative hypothesis, except in cases of strictly binary propositions. However, insofar as T2 and T1 are relevantly similar, events that lower my confidence in T1 will lower my confidence in T2 as well, so missing evidence can legitimately anti-privilege entire classes of explanation. That said, it’s important not to generalize over relevant dissimilarities between T2 and T1.
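A minimal sketch of that class-level effect (the shared commitment C is my own illustrative assumption, not anything from the original exchange): if T1 and T2 both entail some common claim C, then P(C) is an upper bound on each of them, so any evidence that drives P(C) down drags the ceiling on the whole class down at once.

```latex
% Assumption (illustrative): T1 and T2 each entail a shared commitment C.
% Entailment gives an upper bound: anything implying C can be no more
% probable than C itself, before or after conditioning on evidence E.
T_i \models C \;\Rightarrow\; P(T_i \mid E) \le P(C \mid E), \qquad i \in \{1, 2\}.
% So any E that lowers P(C \mid E) anti-privileges both theories together,
% without singling either one out.
```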
As far as “negligible” goes… well, enough “missing” evidence causes my confidence in a proposition to drop to the point where the expected value of behaving as though it were true is lower than the expected value of behaving as though it were false. For most propositions this is straightforward enough, but it is insufficient when infinite or near-infinite utility is being ascribed to such behavior (as advocates of various gods routinely do)… human brains are not well-calibrated enough to perform sensible expected-value calculations even on rare events with large utility shifts (which is one reason lotteries remain in business), let alone on vanishingly unlikely events with vast utility shifts. So when faced with propositions about vanishingly unlikely events with vast utility shifts, I’m justified in being skeptical about even performing an expected-value calculation on them, if the chances of my having undue confidence in my result are higher than the chances of my getting the right result.
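As a concrete sketch of that calibration problem, here is the lottery arithmetic with made-up numbers (every price, probability, and payoff below is a hypothetical chosen for illustration, not data about any real lottery):

```python
# Expected value of a hypothetical lottery ticket.
# All figures are illustrative assumptions, not real lottery data.
ticket_price = 2.00              # dollars
jackpot = 100_000_000            # dollars
p_win = 1 / 300_000_000          # assumed odds of hitting the jackpot

ev = p_win * jackpot - ticket_price
print(f"EV per ticket: ${ev:.2f}")   # ~ -$1.67: a clear loss, yet tickets sell

# Now the "vast utility" regime: a huge payoff multiplied by a probability
# we can only bracket within a few orders of magnitude. The spread in the
# answer comes entirely from our uncertainty about p, not from the world.
payoff = 1e15
for p in (1e-12, 1e-10, 1e-9):       # plausible-seeming range for p
    print(f"p = {p:.0e}  ->  EV = {p * payoff:,.0f}")
```

The three-orders-of-magnitude swing in the second calculation is exactly the “undue confidence in my result” worry: the output measures my error bars on p more than it measures anything about the proposition itself.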
All right, I think I concede your point. (Not to say I will stop thinking about this issue, of course; one has to remain in a constant state of “crisis of belief” &c.) I also think we agree fundamentally about many of the points you made in this comment to begin with, such as “behaving for all practical purposes as if a given T were true” and so on, and perhaps I simply did not verbalize them coherently. The majority of your last paragraph is new to me, however. Thanks.
For most propositions this is straightforward enough, but it is insufficient when infinite or near-infinite utility is being ascribed to such behavior (as advocates of various gods routinely do)… human brains are not well-calibrated enough to perform sensible expected-value calculations even on rare events with large utility shifts (which is one reason lotteries remain in business), let alone on vanishingly unlikely events with vast utility shifts. So when faced with propositions about vanishingly unlikely events with vast utility shifts, I’m justified in being skeptical about even performing an expected-value calculation on them, if the chances of my having undue confidence in my result are higher than the chances of my getting the right result.
The inverted Pascal’s Wager.
or
Did you know that the first Matrix was designed to be a perfect human world? Where none suffered, where everyone would be happy. It was a disaster. No one would accept the program.