I think we share the same perspective on this issue. My main point regarding the existence/non-existence of a god was that one cannot say with a P of 1 that there is certainly no god. In fact, such an assertion seems to me to be absurd. However, given other evidence, we can have very low confidence in the existence of a god and very high confidence in the non-existence of a god.
However, as a scientist and philosopher of science, I cannot accept that missing evidence supports any one alternative hypothesis. This was one of the many criticisms of Karl Popper’s falsificationism: evidence against research program T may imply that not-T is true, or it may imply that an auxiliary hypothesis within T needs adjustment while the theory’s other assumptions remain acceptable, and so on. Even granting that T is false and not-T is true, there is no immediately obvious standard by which to choose which of the theoretically infinite alternative theories is true.
This is getting more theoretical, though, and does not really apply to a binary problem like the existence/nonexistence of god, so I’m afraid I’ve gone off on a bit of a tangent here. At base, I agree with you that the non-existence of god seems to have a probability very close to 1. However, it is not 1, and I would be loath to say it is close enough to 1 for the difference to be “negligible.” If your probability is 1 (or 0), then it is 1 (or 0). If it is close to but not quite 1 (or 0), you are not justified in making an absolute statement.
As Lakatos wrote, it is not irrational to continue working within a degenerating research program, for such programs have historically staged comebacks when new evidence is discovered. Personally, however, I’d place my money on the non-existence of god (which seems, to me, to be the progressive research program).
It’s a truism around here that you can’t say anything with P=1, on pain of being unable to subsequently change your mind given new evidence. Here’s a post along those lines.
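The “can’t update away from P=1” point falls directly out of Bayes’ rule; a minimal sketch (the likelihood numbers are purely illustrative):

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a hypothesis after observing one piece of evidence."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# A prior of exactly 1 is immune to any evidence, however damning:
print(bayes_update(1.0, 0.001, 0.999))    # stays 1.0

# A prior merely close to 1 can still be moved by strong counter-evidence:
print(bayes_update(0.999, 0.001, 0.999))  # drops to 0.5
```

The zero term `(1 - prior)` is the whole story: once the prior hits 1 or 0, the alternative hypothesis contributes nothing to the denominator and no evidence can ever change your mind.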
Agreed that missing evidence doesn’t privilege any single alternative hypothesis, except in cases of strictly binary propositions. However, insofar as T2 and T1 are relevantly similar, events that lower my confidence in T1 will lower my confidence in T2 as well, so missing evidence can legitimately anti-privilege entire classes of explanation. That said, it’s important not to generalize over relevant dissimilarities between T2 and T1.
As far as “negligible”… well, enough “missing” evidence causes my confidence in a proposition to drop to a point where the expected value of behaving as though it were true is lower than the expected value of behaving as though it were false. For most propositions this is straightforward enough, but is insufficient when infinite or near-infinite utility is being ascribed to such behavior (as advocates of various gods routinely do)… human brains are not well-calibrated enough to perform sensible expected value calculations even on rare events with large utility shifts (which is one reason lotteries remain in business), let alone on vanishingly unlikely events with vast utility shifts. So when faced with propositions about vanishingly unlikely events with vast utility shifts, I’m justified in being skeptical about even performing an expected value calculation on them, if the chances of my having undue confidence in my result are higher than the chances of my getting the right result.
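The failure mode here can be made concrete with a toy calculation (all numbers are illustrative, not drawn from anywhere): when a vast utility is attached to a vanishingly unlikely event, the expected value becomes dominated by the probability estimate itself, so even a small calibration error swamps the answer.

```python
def expected_value(prob, utility_if_true, utility_if_false=0.0):
    """Expected value of acting as though a proposition were true."""
    return prob * utility_if_true + (1 - prob) * utility_if_false

# Ordinary proposition: a one-in-a-billion chance of a modest payoff
# is negligible, and mis-estimating the probability barely matters.
print(expected_value(1e-9, 100))    # ~1e-7: safe to ignore

# Pascal-style proposition: the same vanishing probability attached to
# a vast hypothetical utility dominates the calculation entirely...
print(expected_value(1e-9, 1e15))   # ~1e6

# ...so a mere factor-of-ten error in the probability estimate swings
# the "correct" decision by a factor of ten as well:
print(expected_value(1e-10, 1e15))  # ~1e5
```

This is the sense in which the calculation itself becomes suspect: the output is only as trustworthy as a probability estimate that human brains demonstrably cannot calibrate at these scales.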
All right, I think I concede your point. (Not to say I will stop thinking about this issue, of course—have to be in a constant state of “crisis of belief” &c.) I also think we agree fundamentally about a great many of these points you made in this comment to begin with and perhaps I did not verbalize them coherently—such as “behaving for all practical purposes as if a given T were true” and so on. The majority of your last paragraph is new to me, however. Thanks.
For most propositions this is straightforward enough, but is insufficient when infinite or near-infinite utility is being ascribed to such behavior (as advocates of various gods routinely do)… human brains are not well-calibrated enough to perform sensible expected value calculations even on rare events with large utility shifts (which is one reason lotteries remain in business), let alone on vanishingly unlikely events with vast utility shifts. So when faced with propositions about vanishingly unlikely events with vast utility shifts, I’m justified in being skeptical about even performing an expected value calculation on them, if the chances of my having undue confidence in my result are higher than the chances of my getting the right result.
The inverted Pascal’s Wager.
or
Did you know that the first Matrix was designed to be a perfect human world? Where none suffered, where everyone would be happy. It was a disaster. No one would accept the program.