I am not convinced MIRI has given enough evidence to support the idea that unregulated AI will kill everyone and their children.
The way you’re expressing this feels like an unnecessarily strong bar.
Advocacy for an AI pause already seems pretty sensible to me if we accept the following premises:
The current AI research paradigm mostly makes progress in capabilities before progress in understanding. (This puts AI progress in a different reference class from most other technological progress, so base-rate arguments of the form “technological progress normally doesn’t kill everyone” seem misguided.)
AI could very well kill most of humanity, in the sense that it seems defensible to put this at anywhere from 20-80%. We can disagree on the specifics of that range, but that’s where I’d put it, looking at the landscape of experts who seem informed and to be reasoning carefully (so not LeCun).
If we can’t find a way to ensure that transformative AI (TAI) is developed by researchers and leaders who act with a degree of responsibility proportional to the risks and stakes, it seems better to pause.
Edited to add the following: There’s also a sense in which whether to pause is quite independent of the default risk level. Even if the default risk were only 5%, if there were a solid and robust argument that pausing for five years would reduce it to 4%, that would clearly be very good! (It would be unfortunate for the people who will die preventable deaths in the next five years, but pausing still helps more people overall under these assumptions.)
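To make that trade-off concrete, here is a rough back-of-the-envelope sketch (the 1% reduction is the illustrative figure from above; the ~8 billion world population and the single number $D$ for preventable deaths over the delay are assumptions made just for this sketch). With $\Delta p$ the reduction in risk and $N$ the number of people alive,

$$\text{pausing is net positive in lives whenever } \Delta p \cdot N > D, \qquad \text{and here } \Delta p \cdot N \approx 0.01 \times 8\times 10^{9} = 8\times 10^{7}.$$

So the conclusion holds as long as the preventable deaths that earlier TAI would have averted over those five years stay below roughly 80 million.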
The bar is very low for me: if MIRI wants to demand that the whole world shut down an entire industry, they need to be an active research institution that is actually producing papers people can agree with.
AI is not particularly special here, even compared to most other technologies. Our work on chemistry from the 1600s to the 1900s far outpaced our true understanding of it, to the point that we only had a good model of the atom in the 20th century. And I don’t think anyone will deny the potential dangers of chemistry. Other technologies followed a similar trajectory.
We don’t have to agree that the range is 20-80% at all, never mind the specifics of it. Most surveys show researchers putting the chance of total extinction at around 5-10% on the high end, and MIRI’s own survey finds a similar result! 80% would be insanely extreme. Your landscape of experts is, I’m guessing, your own personal follower list rather than a statistically valid sample.