In the war example, wars are usually negative-sum for all involved, even in the near term. So while they do happen, wars are fairly rare, all things considered.
Meanwhile, the problem with AI development is that there are enormous financial incentives to build increasingly powerful AI, right up to the point of extinction. This also means that you need not just some but all people to refrain from developing more powerful AI, which is a devilishly difficult coordination problem. Absent coordination, the default outcome is that everyone races to be the first to develop AGI.
Another problem is that many people don’t even agree that developing unaligned AGI would likely result in extinction. From their perspective, they may well be racing toward a utopian post-scarcity society, while those who oppose them are anti-progress Luddites.