I think that would be a good course of action as well.
But it is difficult to do this. We need to convince at least the following players:
- current market-based companies
- future market-based companies
- any individual with a vision and as much computing power / money as a market-based company
- various states around the world with an interest in building new weapons
Now, we might pull this off. But the last group is extremely difficult to convince or change. China, for example, really needs to be assured that there aren’t any secret projects in the West creating a WeaponsBot before it will limit its own research. And vice versa, for all the other countries out there.
But, more importantly, you can do two things at once. And doing one of them, as part of a broader movement to reduce existential risk in general, can probably help the first.
Now, how to convince maybe 1.6 billion individuals along with their states not to produce an AGI, at least for the next 50-50,000 years?
My (probably very naive) hope is that it is possible to gain a common understanding that building an uncontrollable AI is just incredibly stupid, and also an understanding of what “uncontrollable” means exactly (see https://www.lesswrong.com/posts/gEchYntjSXk9KXorK/uncontrollable-ai-as-an-existential-risk). We know that going into the woods, picking up the first unknown mushrooms we find, and eating them for dinner is a bad idea, as is letting your children play on the highway or taking horse medicine for Covid. There may still be people stupid enough to do these things anyway, but hopefully those people are not the ones running leading AI labs.
The difficulty lies in gaining this common understanding of what exactly we shouldn’t do, and why. If we had that, I think the problem would be solvable in principle, because it is relatively easy to coordinate people into “agreeing to not unilaterally destroy the world”. But as long as people think they can get away with building an AGI and get insanely rich and famous in the process, they’ll do the stupid thing. I doubt that this post will help much in that case, but maybe it’s worth a try.