We could in theory try to kill all AI researchers (or just go whole hog and try to kill all software engineers, better safe than sorry /s).
I think this is a good way of putting it. Many people in the debate refer to “regulation”. But in practice, regulation is not very effective for weaponry. If you look at how the international community handles dangerous weapons like nuclear weapons, there are many cases of assassinations, bombings, and wars carried out to prevent their spread. This is what it would look like if the world were convinced that AI research was an existential threat—a world where work on AI happens in secret, in private military programs, with governments making the decisions and participants risking their lives. Probably the US and China would race to be the first to achieve AGI dominance, gambling that they would be able to control the software they produced.