Let’s zoom in on the NSA++ case, since that continues to be a point of disagreement. Suppose the US government banned GPUs above a certain performance level within US borders (outside a few high-security government projects), and most relevant people agreed that this ban was a good idea. Do you think it would not be possible for NSA++ to enforce this ban? The number of GPU manufacturers in the US is pretty small.
Banning high-end GPUs so that only the government can have AI? They could do it, they might feel compelled to do something like it, but there would be serious resistance and moments of sheer pandemonium. They can say it’s to protect humanity, but to millions of people it will look like the final step in the enslavement of humanity.
Leaving aside the question of shutting down all rival AI projects: if NSA++ can indeed regulate high-end GPUs in the US, then international regulation of GPUs, such that only a handful of projects can run large experiments, seems doable through international agreements, soft power, and covert warfare. This seems similar to the international regulation of nuclear weapons or CFCs. (I am not claiming this is easy, just that it is possible and not a fantasy.)
At that point, similar to what you suggested, the better solution would be for the people working at all of these projects to know which things are likely to be dangerous and avoid them (of course, this means there have to be few enough projects that it’s unlikely for a single bad actor to destroy the world). The question of shutting down all other projects is then moot: it’s unnecessary, and it’s not clear where the will to do it would come from. And if the projects coordinate successfully, that’s similar to there being only one project. (I do think it is possible to shut down the other projects by force given a sufficient technical advantage, but doing so would carry a substantial chance of triggering World War 3; realistically, the same is true of applications of task-based tipping-point AI, for exactly the same reasons.)