There are probably a lot of reasons, but a big one is that it probably wouldn’t help much to slow down progress toward AGI (no one individual is that important), and it might hurt efforts for safety/alignment quite a lot. The PR repercussions would be enormous; it would paint the whole safety movement as dangerous zealots. It would also make others in the AI industry tend to hate and fear the safety movement, and some of that intense emotion would spread to their attitude toward any individuals with those opinions. Those creating AI would thus become less likely to listen to safety arguments. That would hurt our odds of alignment and survival quite a lot.
To add to this: Just Stop Oil can afford to make people angry (to an extent) and to cause gridlock on the streets of London because the debate about climate change has been going on long enough that most people who might have an influence on fossil-fuel policy have already formed a solid opinion—and even Just Stop Oil probably cannot afford the reputational consequences of killing, e.g., Exxon executives.
As long as most of the possible decision makers have yet to form a solid opinion about whether AI research needs to be banned, we cannot afford the reputational effects of violent actions or even criminal actions. Note that the typical decision maker in, e.g., Washington, D.C., is significantly more disapproving of criminal behavior (particularly violent criminal behavior) than most of the people you know.
Our best strategy is to do the hard work that enables us to start using state power to neutralize the AI accelerationists. We do not want to do anything hasty that would cause the power of the state (specifically the criminal justice system) to be used to neutralize us.
Making it less cool to work in AI is an effective intervention, but crime is not a good way to effect that.
I got the distinct impression (from reading comments written by programmers) that Microsoft had a hard time hiring programmers starting in the late 1990s and persisting for a couple of decades because it was perceived by most young people as a destructive force in society. Of course, Microsoft remains a very successful enterprise, but there are a couple of factors that might make the constricting effect of making it uncool to work for an AI lab stronger than the constricting effect of making it uncool to work for Microsoft. First, it takes a lot more work (i.e., acquiring skills and knowledge) to start to be able to make a technical contribution to the effort to advance AI than it takes to be able to contribute at Microsoft. (Programming was easy to learn for many of the people who turned out to be good hires at Microsoft.) Second, the work required to get good enough at programming to contribute at Microsoft is readily transferable to other jobs that were not (and are not) considered destructive to society. In contrast, if you’ve spent the last 5 to 7 years in a full-time effort to become able to contribute to the AI acceleration effort, there’s really nowhere else you can use those skills: you basically have to start over career-wise. The hope is that if enough people notice this before putting in those 5 to 7 years of effort, a significant fraction of the most talented ones will decide to do something else (because they don’t want to invest that effort only to be unable to reap the career rewards, whether because of their own moral qualms or because AI progress has been banned by governments).
Some people will disagree with my assertion that people who invested a lot of time getting the technical knowledge needed to contribute to the AI acceleration effort will not be able to use that knowledge anywhere else: they think that they can switch from being capability researchers to being alignment researchers. Or they think they can switch from employment at OpenAI to employment at some other, “good” lab. I think that is probably an illusion. I think about 98% of the people calling themselves alignment researchers are (contrary to their hopes and beliefs) actually contributing to AI acceleration. And I think that all the AI labs are harmful—at least all of them that are spending $billions on training models. I feel bad saying so in what is essentially a footnote to a comment on some other subject, but I’m too tired today to do any better.
But my point is that an effective intervention is to explain all of that (namely, the destructiveness of AI acceleration and the non-transferability of the technical skills it requires) to young people before they put years of work into preparing themselves to do AI engineering or research.