I think people can in theory collectively decide not to build AGI or ASI.
Certainly you as an individual can choose this! Where things get tricky is in asking whether that outcome is probable, or in coming up with a plan to bring it about. Similarly, as a child I wondered, “Why can’t people just choose not to have wars, just decide not to kill each other?”
People have selfish desires, and group loyalty instincts, and limited communication and coordination capacity, and the world is arranged in such a way that sometimes this leads to escalating cycles of group conflict that are net bad for everyone involved.
That’s the scenario I think we are in with AI development also. Everyone would be safer if we didn’t build it, but getting everyone to agree not to, and to hold to that agreement even in private, seems intractably hard.
[Edit: Here’s a link to Steven Pinker’s writing on the Evolution of War. I don’t think, as he does, that the world is trending strongly towards global peace, but I do think he has some valid insights into the sad lose-lose nature of war.]
In the war example, wars are usually negative-sum for all involved, even in the near term. And so while they do happen, wars are pretty rare, all things considered.
Meanwhile, the problem with AI development is that there are enormous financial incentives for building increasingly powerful AI, right up to the point of extinction. This also means that you need not just some but all people to refrain from developing more powerful AI. This is a devilishly difficult coordination problem. What you get by default, absent coordination, is that everyone races towards being the first to develop AGI.
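To make that race dynamic concrete, here is a toy sketch (my own illustration, not from the original comment) of a two-lab “race vs. refrain” game with made-up payoff numbers. It shows why racing can be each lab’s best response no matter what the other does, even though mutual restraint would leave both better off:

```python
# Toy illustration with purely made-up payoffs (higher is better for that lab).
# Mutual restraint (3, 3) beats mutual racing (1, 1), but whichever lab races
# alone captures the upside, so racing dominates without an enforceable agreement.
PAYOFFS = {
    ("refrain", "refrain"): (3, 3),
    ("refrain", "race"):    (0, 4),
    ("race",    "refrain"): (4, 0),
    ("race",    "race"):    (1, 1),
}

def best_response(my_options, their_choice, player_index):
    """Pick the option with the highest payoff given the other lab's choice."""
    return max(
        my_options,
        key=lambda mine: PAYOFFS[(mine, their_choice) if player_index == 0
                                 else (their_choice, mine)][player_index],
    )

options = ["refrain", "race"]
for their_choice in options:
    print(f"If the other lab chooses {their_choice!r}, "
          f"Lab A's best response is {best_response(options, their_choice, 0)!r}")
# Racing is the best response either way, so absent coordination both labs race
# and end up at (1, 1) instead of the safer (3, 3).
```

The numbers are arbitrary; the point is only the structure of the incentives, which is why agreements that change the payoffs (verification, enforcement) matter so much here.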
Another problem is that many people don’t even agree that developing unaligned AGI likely results in extinction. So from their perspective, they might well think they’re racing towards a utopian post-scarcity society, while those who oppose them are anti-progress Luddites.