OK, I might have initially misinterpreted your wording as saying that the only group capable of forcibly halting rival AI projects is an AI project capable of producing aligned AGI, whereas you were only claiming that the only AI project capable of forcibly halting rival AI projects is an AI project capable of producing aligned AGI.
Still, it is definitely possible to imagine arrangements, such as an AI project working closely with one or more governments, or a general project that develops narrow AI alongside other technologies such as cognitive enhancement, that would have a shot at pulling this off. One of the main relevant questions is whether doing this is easier or harder than solving AGI alignment. In any case, we probably have some disagreement about the difficulty of a group of fewer than 100,000 people (i.e. no larger than current big tech companies) developing a large technological advantage over the rest of the world without developing AGI.
“Organization working on AI” vs “any other kind of organization” is not the important point. The important point is ALL. We are talking about a hypothetical organization capable of shutting down ALL artificial intelligence projects that it does not like, no matter where on earth they are. Alicorn kindly gives us an example of what she’s talking about: “destroy all the GPUs on the planet and prevent the manufacture of new ones”.
Just consider China, Russia, and America. China and America lead everyone else in machine learning; Russia has plenty of human capital and has carefully preserved its ability not to be pushed around by America. What do you envisage: that the three of them agree to establish a single research entity, which shall be the only one in the world working on AI near a singularity threshold; that they agree not to have any domestic projects independent of this joint research group; and that they agree to work to suppress rival groups throughout the world?
Despite your remarks about how the NSA could easily become the hub of a surveillance state tailored to this purpose, I greatly doubt the ability of NSA++ to successfully suppress all rival AI work even within America, let alone throughout the American sphere of influence. They could try, they could have limited success, or they could run up against the limits of their power. Tech companies, rival agencies, coalitions of university researchers, other governments: all of them could join forces to interfere.
In my opinion, the most constructive response to the fact that there are necessarily multiple contenders in the race towards superhuman intelligence is to seek intellectual consensus on important points. The technicians who maintain the world’s nuclear arsenals agree on the basics of nuclear physics. The programmers who maintain the world’s search engines agree on numerous aspects of the theory of algorithms. My objective here would be for the people working in proximity to the creation of superhuman intelligence to develop some shared technical understanding of the potential consequences of what they are doing, and of the initial conditions likely to produce a desirable rather than an undesirable outcome.
Let’s zoom in on the NSA++ case, since that continues to be a point of disagreement. Do you think that, if the US government banned GPUs above a certain level of performance within US borders (outside a few high-security government projects), and most relevant people agreed that this ban was a good idea, it would not be possible for NSA++ to enforce it? The number of GPU manufacturers in the US is pretty low.
Banning high-end GPUs so that only the government can have AI? They could do it, they might feel compelled to do something like it, but there would be serious resistance and moments of sheer pandemonium. They can say it’s to protect humanity, but to millions of people it will look like the final step in the enslavement of humanity.
Leaving aside the question of shutting down all rival AI projects: if NSA++ can indeed regulate high-end GPUs in the US, then international regulation of GPUs, such that only a handful of projects can run large experiments, seems doable through international agreements, soft power, and covert warfare. This seems similar to the international regulation of nuclear weapons or CFCs. (I am not claiming this is easy, just that it is possible and not a fantasy.)
At this point, similar to what you suggested, the better solution would be for the people working at all of these projects to know which things are likely to be dangerous, and to avoid those things (of course, this means there have to be few enough projects that it’s unlikely for a single bad actor to destroy the world). The question of shutting down all other projects is moot at this point, given that it’s unnecessary and it’s not clear where the will to do so would come from. And if the projects coordinate successfully, that’s similar to there being only one project. (I do think it is possible to shut down the other projects by force, given a sufficient technical advantage, but it would carry a substantial chance of triggering World War 3; realistically, this is also the case for applications of task-based tipping-point AI, for exactly the same reasons.)