The argument is that people who talk about the singularity in general or AI risk (the hard-takeoff FOOM scenario) are privileging some low-probability hypotheses based on intuitions that come either directly from religion or from some underlying psychological mechanisms that also generate religious beliefs.
Most beliefs of this kind are wrong: they tend to be unparsimonious. Hence, when presented with such a claim, before looking at the evidence or the specific arguments, we should start from the presumption that the claim is likely wrong. Strong evidence or strong arguments would “screen off” this low prior, while lack of evidence or weak arguments based on subjective estimates would not.
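One way to read the “screening off” claim is as a Bayesian update: a low prior (because most beliefs of this class are wrong) is only overcome by evidence with a high likelihood ratio. The sketch below illustrates this with made-up numbers; the prior and the likelihood ratios are assumptions for illustration, not estimates drawn from the argument itself.

```python
def posterior(prior: float, likelihood_ratio: float) -> float:
    """P(H|E) via Bayes' rule in odds form, given P(H) and
    the likelihood ratio P(E|H) / P(E|not H)."""
    odds = prior / (1 - prior)
    post_odds = odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Illustrative low prior: "most beliefs of this kind are wrong".
low_prior = 0.01

# A weak, subjective argument (small likelihood ratio) barely
# moves the posterior away from the prior...
weak = posterior(low_prior, 2)      # still close to 0.01

# ...while strong evidence (large likelihood ratio) "screens off"
# the low prior and dominates the conclusion.
strong = posterior(low_prior, 500)  # well above 0.5

print(f"weak argument:    {weak:.3f}")
print(f"strong evidence:  {strong:.3f}")
```

The point of the toy model is that the prior matters most precisely when the arguments are weak: with a likelihood ratio near 1, the posterior stays near the (low) prior, which is the inference pattern the argument recommends.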