I think that’s true of people like Steven Pinker and Neil deGrasse Tyson. They’re intelligent, but they clearly haven’t engaged with the core arguments, because they’re saying things like “just unplug it” and “why would it be evil?”
But there are also people like...
Robin Hanson. I don’t really agree with him, but he engages with the AI risk arguments, has thought about them a lot, and is a clever guy.
Will MacAskill. One of the most thoughtful thinkers I know of, and someone I’m pretty confident has engaged seriously with the AI risk arguments. His p(doom) is far lower than Eliezer’s; I think he gives 3% in What We Owe The Future.
Other AI alignment experts who are optimistic about our chances of solving alignment and put p(doom) lower (I don’t know enough about the field to name people).
And I guess I am reserving some small amount of probability for “most of the world’s most intelligent computer scientists, physicists, and mathematicians aren’t worried about AI risk; could I be missing something?” My intuition from playing around on prediction markets is that you have to adjust your bets slightly for that kind of consideration.
Robin Hanson is weird. He paints a picture of a grim future in which all the nice human values are eroded away, replaced by endless frontier replicators optimized for, and optimizing only for, more replication. And then he just accepts it, as if that were fine.
Will MacAskill seems to think AI risk is real; he just thinks alignment is easy. He seems keen on a specific proposal involving building anthropomorphic AI and raising it like a human child.