If you are religious (in the theistic sense, which is really what we’re likely to encounter and what I’m talking about), you believe that there is a divine agent watching over us. This has obvious false implications concerning the singularity.
Suppose you tell a theist that there’s a serious risk that smarter-than-human AI could wipe out the whole human race. They’ll be thinking “this couldn’t happen, God would prevent it” or “oh, it’s ok, I’ll go to heaven if this happens”. Wherever the argument goes next, you are talking to someone whose background assumptions are so radically different from yours that you won’t get anything useful out of them.
The reason this differs from most other subjects is that the religious conception of divine intervention is tailored to be consistent with our everyday observations. Thus any religious person who is vaguely sane will have some argument as to why God doesn’t prevent earthquakes from killing random people. So God allows small injustices and crimes, but the main point is that everything will be OK in the end, i.e. the ultimate fate of our world is not in question.
The debate concerning the Singularity is directly about this question.
I don’t believe this is a valid thought in this form, or maybe you failed to formalize your intuition enough to communicate it. You list a few specific failure modes, which I don’t believe cover enough of the theistic population to reduce the probability of a theist producing valid singularity thinking down to nothing. Also, some of these failure modes overlap with related failure modes of non-theistic people, and thus don’t figure into the likelihood ratio as much as they otherwise would.
There are other failure modes which theists will have disproportionately over atheists, of course. To me it seems that an unwavering and (essentially) non-evidence-based belief that everything will turn out OK is indictment enough.
Amongst the other failure modes: belief in the existence of souls and in the divinely privileged place of human intelligence is likely to produce skewed beliefs about the possibility of synthetic intelligence. There are also various results of dark-side epistemology, such as disbelief in evolution, belief in “free will”, belief in original sin, and belief in moral realism (“God-given morality”), which would prevent something like CEV. I’ve heard the following fallacious argument against the transhumanist project from a lot of theists: humans are imperfect, so the only way to improve ourselves is to take advice from a perfect being; imperfection cannot lead to less imperfection.
Also, I didn’t claim that the average atheist has sensible opinions about the subject. Just that “theist” is a useful filter.
Your conception of “theism”—a tremendously broad concept—is laughably caricatured and narrow, and it pollutes whatever argument you’re trying to make: absolutely none of the logic in the above post follows in the way you think it does.