We should shut it all down.
We can’t shut it all down.
The consequences of trying to shut it all down and failing, as we very likely would, could actually raise the odds of human extinction.
Therefore we don’t know what to publicly advocate for.
These are the beliefs I hear expressed by most serious AI safety people. They are consistent and honest.
For instance, see https://forum.effectivealtruism.org/posts/JYEAL8g7ArqGoTaX6/ai-pause-will-likely-backfire.
That post makes two good points:
A pause would:
2) Increase the chance of a “fast takeoff” in which one or a handful of AIs rapidly and discontinuously become more capable, concentrating immense power in their hands.
3) Push capabilities research underground, and to countries with looser regulations and safety requirements.
Obviously these don’t apply to a permanent, complete shutdown. And they’re not entirely convincing even for a pause.
My point is that the issue is complicated.
A complete shutdown seems impossible to maintain for all of humanity. Someone is going to build AGI. The question is who and how.
The call for more honesty is appreciated. We should be honest, and that includes saying “obviously we should just not do it”. But you don’t get many words when speaking publicly, so making that your primary point is a questionable strategy.
Why do you personally think this is correct? Is it that humanity doesn’t know how to shut it down? Or is incapable of doing so? Or unwilling?
This is a good question. It’s worth examining the assumption if it’s the basis of our whole plan.
When I say “we”, I mean “the currently listening audience”, roughly the AI safety community. We don’t have the power to convince humanity to shut down AI research.
There are a few reasons I think this. The primary one is that humanity isn’t a single individual. People have different perspectives, and some are unlikely to ever change their minds. There are even some individuals for whom building an AGI would actually be a good idea: people who care more about personal gain than about the safety or future of humanity. Sociopaths of one sort or another are thought to make up perhaps 10% of the population (the 1% diagnosed are the ones who get caught). For a sociopath, it’s a good bet to risk the future of humanity against a chance of becoming the most powerful person alive. There are thought to be a lot of sociopaths in government, even in democratic countries.
So, sooner or later, you’re going to see a government or rich individual working on AGI with the vastly improved compute and algorithmic resources that continued advances in hardware and software will bring. The only way to enforce a permanent ban would be to ban computers, or to have a global panopticon that monitors what every computer is doing. That might well lead to a repressive global regime that stays in power permanently. That is an S-risk: a scenario in which humanity suffers forever. That’s arguably worse than dying in an attempt to achieve AGI.
Those are weak and loose arguments, but I think that describes the core of my and probably many others’ thinking on the topic.
I am under the impression that, when counting words in public for strategic political reasons, it’s better to be a crazy mogul who shouts extreme takes with confidence, to make your positions clear, even if people already know they can’t take your word to the letter. But I’m not sure who the strategic target is here.