I am not convinced MIRI has given enough evidence to support the idea that unregulated AI will kill everyone and their children. Most of their work is either secret or consists of old papers. The only papers they have produced after 2019 are random, irrelevant math papers; most of the rest are not even technical in nature and contain many unverified claims. They have not produced a single paper since the breakthrough in LLM technology in 2022. Even among the papers that do indicate risk, there is no consensus among scientific peers that the risk is real or necessarily extinction-level. Note: I am not asking for “peer review” as a specific process, just some actual consensus among established researchers to sift mathematical facts from conjecture.
Policymakers should not take seriously the idea of shutting down normal economic activity until this is formally addressed.
> just some actual consensus among established researchers to sift mathematical facts from conjecture.
“Scientific consensus” is a much, much higher bar than peer review. Almost no topic of relevance has a scientific consensus (for example, there is basically no trustworthy scientific consensus on urban planning decisions, the effects of minimum wage laws, pandemic prevention strategies, cybersecurity risks, or intelligence enhancement). Many scientific peers think there is an extinction risk.
I think demanding scientific consensus is an unreasonably high bar that would approximately never be met in almost any policy discussion.
Obviously I meant some kind of approximation of consensus, or acceptability derived from much greater substantiation. There is no equivalent in the field of AI to Climate Change or ZFC in terms of acceptance and standardisation. Matthew Barnett made my point better in the above comments.
Yes, most policy has no consensus behind it. But most policy also does not ask to shut down the entire world’s major industries, so there must be a high bar here. And since a lot of policy incidentally ends up malformed and hurting people, it sounds like you’re just making the case for more “consensus,” not less.
> I am not convinced MIRI has given enough evidence to support the idea that unregulated AI will kill everyone and their children.
The way you’re expressing this feels like an unnecessarily strong bar.
Advocacy for an AI pause already seems pretty sensible to me if we accept the following premises:
The current AI research paradigm mostly makes progress in capabilities before progress in understanding. (This puts AI progress in a different reference class from most other technological progress, so any argument that leans on base rates of the form “technological progress normally doesn’t kill everyone” seems misguided.)
AI could very well kill most of humanity: it seems defensible to put this risk anywhere from 20% to 80%. We can disagree on the specifics of that range, but that is where I would put it, looking at the landscape of experts who seem informed and to be reasoning carefully (so, not LeCun).
If we can’t find a way to ensure that TAI is developed by researchers and leaders who act with a degree of responsibility proportional to the risks/stakes, it seems better to pause.
Edited to add the following: There’s also a sense in which whether to pause is quite independent of the default risk level. Even if the default risk were only 5%, if there were a solid and robust argument that pausing for five years would reduce it to 4%, that would clearly be very good! (It would be unfortunate for the people who will die preventable deaths in the next five years, but under these assumptions pausing still helps more people overall.)
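A minimal sketch of that expected-value arithmetic (every figure below is an illustrative assumption I am supplying, not a number from this thread):

```python
# Illustrative expected-value comparison for the pause argument above.
# Every figure is an assumption chosen for the arithmetic, not data.

population = 8e9           # assumed number of people alive today
risk_default = 0.05        # assumed default extinction risk
risk_after_pause = 0.04    # assumed risk after a five-year pause
preventable_deaths = 50e6  # assumed deaths a non-paused timeline might have prevented

# Expected lives saved by the one-percentage-point reduction in extinction risk.
expected_lives_saved = (risk_default - risk_after_pause) * population

# Net expected effect of pausing under these assumptions.
net_effect = expected_lives_saved - preventable_deaths

print(f"expected lives saved by pausing: {expected_lives_saved:,.0f}")  # 80,000,000
print(f"net expected effect of pausing:  {net_effect:,.0f}")            # 30,000,000
```

Whether the conclusion survives obviously depends on the assumed figures; the point is only that even a one-percentage-point shift in extinction probability, spread over everyone alive, can dominate the comparison.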
The bar is very low for me: if MIRI wants to demand that the entire world shut down an entire industry, they must be an active research institution producing papers that other researchers can actually evaluate and agree with.
AI is not particularly unique even relative to most technologies. Our work on chemistry from the 1600s to the 1900s far outpaced our true understanding of it, to the point where we only arrived at a good model of the atom in the 20th century. And I don’t think anyone will deny the potential dangers of chemistry. Other technologies followed a similar trajectory.
We don’t have to agree that the range is 20-80% at all, never mind its specifics. Most polls show researchers putting the chance of total extinction at around 5-10% on the high end. MIRI’s own survey finds a similar result! 80% would be insanely extreme. Your landscape of experts is, I’m guessing, your own personal follower list, not a statistically valid sample.