My (probably very naive) hope is that it is possible to reach a common understanding that building an uncontrollable AI is just incredibly stupid, along with an understanding of what exactly “uncontrollable” means (see https://www.lesswrong.com/posts/gEchYntjSXk9KXorK/uncontrollable-ai-as-an-existential-risk). We know that going into the woods, picking the first unknown mushrooms we find, and eating them for dinner is a bad idea, as is letting your children play on the highway or taking horse medicine against Covid. There may still be people stupid enough to do these things anyway, but hopefully none of them are running a leading AI lab.
The difficulty lies in reaching this common understanding of what exactly we shouldn’t do, and why. If we had that, I think the problem would be solvable in principle, because it is relatively easy to coordinate people around agreeing not to unilaterally destroy the world. But as long as people think they can get away with building an AGI and get insanely rich and famous in the process, they’ll do the stupid thing. I doubt this post will help much in that case, but maybe it’s worth a try.