Part of the reason is that Yudkowsky radicalized his position to stay outside the Overton window. Fifteen years ago, his position was “we need to do research into AI safety, because AI will pose a threat to humanity some time this century”. Now that that view is becoming mainstream-adjacent, he has shifted to “it’s too late to do research; we need to stop all capability work or else we all die in 10-15 years”. And further: “even if we stop all capability work as thoroughly as an international treaty can conceivably accomplish, we must augment human intelligence in adults in order to solve the problem in time.”