As I said offline to Aryeh, in my mind this is another example of people agreeing on most of the object-level questions. For example, Etzioni’s AI timelines overlap with those of most of the “alarmists,” but (I assume) he’s predicting the mean, not the worst case or 95% confidence interval for AI arrival. And yes, he disagrees with Eliezer on timelines, but so do most others in the alarmist camp—and he’s not far from what surveys suggest is the consensus view.
He disagrees about planning the path forward, mostly due to value differences. For example, he doesn’t buy the argument, which much of the Effective Altruism / LessWrong community has made, that existential risk is a higher priority than almost anything near-term. He also clearly worries much more about over-regulation cutting off AI benefits.