The tricky thing with human politics is that governments will still fund research into very dangerous technology if it has the potential to grant them a decisive advantage on the world stage.
No one wants nuclear war, but everyone wants nukes, even (or especially) after their destructive potential has been demonstrated. No one wants AGI to destroy the world, but everyone will want an AGI that can outthink their enemies, even (or especially) after its power has been demonstrated.
The goal, of course, is to figure out alignment before the first metaphorical (or literal) bomb goes off.
On that note, the main way I could envision AI being really destructive would be by gaining access to a government’s nuclear arsenal. Otherwise, it’s extremely resourceful but still trapped in an electronic medium; the most it could do if it really wanted to cause damage is take down the power grid (which would destroy it too).
You’re underestimating biology.