People have been writing stories about the dangers of artificial intelligences arguably since Ancient Greek times (Hephaistos built artificial people, including Pandora), and certainly since Frankenstein. There are dozens of SF movies on the theme (and in the Hollywood ones, the hero always wins, of course). Artificial intelligence trying to take over the world isn't a new idea; by scriptwriters' standards it's a tired trope. Getting AI as tightly controlled as nuclear power or genetic engineering would not, politically, be that hard—it might take a decade or two of concerted action, but it's not impossible. Especially if not-yet-general AI is also taking people's jobs. The thing is, humans (and especially politicians) mostly worry about problems that could kill them in the next O(5) years. Relatively few people in AI/universities/boardrooms/government/on the streets think we're O(5) years from GAI, and after more of them have talked to ChatGPT/etc. for a while, they're going to notice the distinctly sub-human-level mistakes it makes, and eventually internalize that a lot of its human-level-appearing abilities are just pattern-extrapolated wisdom of crowds learned from most of the Internet.
So I think the questions are:
1. Is slowing down progress on GAI actually likely to be helpful, beyond the obvious billions of person-years per year gained from delaying doom? (Personally, I'm having difficulty thinking of a hard technical problem where having more time to solve it doesn't help.)
2. If so, when should we slow down progress towards GAI? Too late is disastrous; too soon risks people deciding you're crying wolf, either when you first try it, so that you fail (and make it harder to slow down later), or else a decade or two after you succeed, when progress gets sped up again (as I think is starting to happen with genetic engineering). This depends a lot on how soon you think GAI might happen, and on what level of below-general AI would most enhance alignment research. (FWIW, my personal feeling is that until recently we didn't have any AI complex enough for alignment research on it to be interesting/informative, and that the likely answer is "just before any treacherous turn is going to happen"—which is a nasty gambling dilemma. I also personally think GAI is still some number of decades away, and that the most useful time to go slowly is somewhere around the "smart as a mouse/chimp/just sub-human" level—close enough to human that you're not having to extrapolate what you learn from alignment research on it very far up to mildly-superhuman levels.)
3. Whatever you think the answer to 2. is, you need to start the political process a decade or two earlier: social change takes time.
I’m guessing a lot of the reluctance in the AI community is coming from “I’m not the right sort of person to run a political movement”. In which case, go find someone who is, and explain to them that this is an extremely hard technical problem, humanity is doomed if we get it wrong, and we only get one try.
(From a personal point of view, I'm actually more worried about poorly-aligned AI than non-aligned AI. Everyone being dead and having the solar system converted into paperclips would suck, but at least it's probably fairly quick. Partially aligned AI that keeps us around but doesn't understand how to treat us could make Orwell's old quote about a boot stamping on a human face forever look mild – and yes, I'm on the edge of Godwin's Law.)