To be blunt, I’d argue that selection effects, plus vested interests in AGI happening, distressingly explain a large part of the answer to this question.
(A weaker version of this applies to the opposite question, “Why do AI Safety people have high probability-of-doom estimates?” There, selection bias would account for at least a non-trivial portion of why that is the case.)