The key point that I think you’re missing here is that evaluating whether such a policy “should” be implemented necessarily depends on how it would be implemented.
We could in theory try to kill all AI researchers (or just go whole hog and try to kill all software engineers, better safe than sorry /s). But then of course we need to think about the side effects of such a program: researchers fleeing to other countries and dedicating their lives to fighting back against the countries hunting them, and so on.
That’s just one example, and I use it because it might be the only tractable way to stop this form of tech progress: literally wiping out the knowledge base.
I do not endorse this idea, by the way.
I’m just trying to show that your reaction to “should we” depends hugely on “how.”
We could in theory try to kill all AI researchers (or just go whole hog and try to kill all software engineers, better safe than sorry /s).
I think this is a good way of putting it. Many people in this debate talk about “regulation”. But in practice, regulation is not very effective for weaponry. Look at how the international community actually handles dangerous weapons like nuclear weapons: there have been assassinations, bombings, and wars carried out to prevent their spread. That is what it would look like if the world were convinced that AI research was an existential threat: a world where work on AI happens in secret, in private military programs, with governments making the decisions and participants risking their lives. The US and China would probably race to be the first to achieve AGI dominance, each gambling that it could control the software it produced.