The following changes, implemented in the US, Europe, and East Asia, would probably buy us many decades:
1. Close all the AI labs and return their assets to their shareholders;
2. Require all “experts” (e.g., researchers, instructors) in AI to leave their jobs, and give them money to compensate them for the temporary loss of their earning power;
3. Make it illegal to communicate technical knowledge about machine learning or AI; this includes publishing papers, engaging in informal conversations, tutoring, and teaching it in classrooms; even distributing already-published titles on the subject would be banned.
Of course it is impractical to completely stop these activities (especially the distribution of already-published titles), but we do not have to completely stop them; we need only sufficiently reduce the rate at which the AI community worldwide produces algorithmic improvements. Here we are helped by the fact that figuring out how to create an AI capable of killing us all is probably still a very hard research problem.
What is most dangerous about the current situation is the tens of thousands of researchers worldwide, with tens of billions in funding, who feel perfectly free to communicate and collaborate with each other, and who expect to be praised and rewarded for increasing our society’s ability to create powerful AIs. If instead they come to expect more criticism than praise and more punishment than reward, most of them will stop, and, more importantly, almost no young person will put in the years of hard work needed to become an AI researcher.
I know how awful this sounds to many of the people reading this, including the person I am replying to, but you did ask, “Is there some other policy target which would somehow buy a lot more time?”
>I know how awful this sounds to many of the people reading this, including the person I am replying to...
I actually find this kind of thinking quite useful. I mean, the particular policies proposed are probably Pareto-suboptimal, but there’s a sound method here: first ask “what policies would buy a lot more time?”, allowing for pretty bad policies as a first pass, and then think through how to achieve the same subgoals in more palatable ways.
>I actually find this kind of thinking quite useful
I’m glad.