Until this week, all of this was [...] unknown to anyone who could plausibly claim to be a world leader.
I don’t think this is known to be true.
In fact they had no idea this debate existed.
That seems too strong. Some data points:
1. There’s been lots of AI risk press over the last decade. (E.g., Musk and Bostrom in 2014, Gates in 2015, Kissinger in 2018.)
2. Obama had a conversation with WIRED regarding Bostrom’s Superintelligence in 2016, and his administration cited papers by MIRI and FHI in a report on AI the same year. Quoting that report:
General AI (sometimes called Artificial General Intelligence, or AGI) refers to a notional future AI system that exhibits apparently intelligent behavior at least as advanced as a person across the full range of cognitive tasks. A broad chasm seems to separate today’s Narrow AI from the much more difficult challenge of General AI. Attempts to reach General AI by expanding Narrow AI solutions have made little headway over many decades of research. The current consensus of the private-sector expert community, with which the NSTC Committee on Technology concurs, is that General AI will not be achieved for at least decades.[14]
People have long speculated on the implications of computers becoming more intelligent than humans. Some predict that a sufficiently intelligent AI could be tasked with developing even better, more intelligent systems, and that these in turn could be used to create systems with yet greater intelligence, and so on, leading in principle to an “intelligence explosion” or “singularity” in which machines quickly race far ahead of humans in intelligence.[15]
In a dystopian vision of this process, these super-intelligent machines would exceed the ability of humanity to understand or control. If computers could exert control over many critical systems, the result could be havoc, with humans no longer in control of their destiny at best and extinct at worst. This scenario has long been the subject of science fiction stories, and recent pronouncements from some influential industry leaders have highlighted these fears.
A more positive view of the future held by many researchers sees instead the development of intelligent systems that work well as helpers, assistants, trainers, and teammates of humans, and are designed to operate safely and ethically.
The NSTC Committee on Technology’s assessment is that long-term concerns about super-intelligent General AI should have little impact on current policy. The policies the Federal Government should adopt in the near-to-medium term if these fears are justified are almost exactly the same policies the Federal Government should adopt if they are not justified. The best way to build capacity for addressing the longer-term speculative risks is to attack the less extreme risks already seen today, such as current security, privacy, and safety risks, while investing in research on longer-term capabilities and how their challenges might be managed. Additionally, as research and applications in the field continue to mature, practitioners of AI in government and business should approach advances with appropriate consideration of the long-term societal and ethical questions – in addition to just the technical questions – that such advances portend. Although prudence dictates some attention to the possibility that harmful superintelligence might someday become possible, these concerns should not be the main driver of public policy for AI.
3. Hillary Clinton wrote in her memoir:
Technologists like Elon Musk, Sam Altman, and Bill Gates, and physicists like Stephen Hawking have warned that artificial intelligence could one day pose an existential security threat. Musk has called it “the greatest risk we face as a civilization.” Think about it: Have you ever seen a movie where the machines start thinking for themselves that ends well? Every time I went out to Silicon Valley during the campaign, I came home more alarmed about this. My staff lived in fear that I’d start talking about “the rise of the robots” in some Iowa town hall. Maybe I should have. In any case, policy makers need to keep up with technology as it races ahead, instead of always playing catch-up.
4. A 2017 JASON report called “Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD” said,
To most computer scientists, the claimed “existential threats” posed by AI seem at best uninformed. They do not align with the most rapidly advancing current research directions of AI as a field, but rather spring from dire predictions about one small area of research within AI, Artificial General Intelligence (AGI). AGI seeks to develop machines with “generalized” human intelligence, capable of sustaining long-term goals and intent, or, more generally “perform any intellectual task that a human being can.”[2] Where AI is oriented around specific tasks, AGI seeks general cognitive abilities. On account of this ambitious goal, AGI has high visibility, disproportionate to its size or present level of success. Further, as this report elaborates in subsequent sections, the breakout technologies that have put us in a “golden age” of AI, may impact AGI only modestly. In the midst of an AI revolution, there are no present signs of any corresponding revolution in AGI. On this issue, the AI100 Study Panel, a consensus effort by a broad set of prominent AI researchers, recently concluded,
“Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind. No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future. Instead, increasingly useful applications of AI, with potentially profound positive impacts on our society and economy are likely to emerge between now and 2030, the period this report considers.”[3] (emphasis added)
(This is some evidence that there was awareness of the debate, albeit also relatively direct evidence that important coalitions dismissed AGI risk at the time.)
5. Elon Musk warned a meeting of US governors in 2017 about AI x-risk.
6. This stuff shows up in lots of other places, like the World Economic Forum’s 2017 Global Risks Report: “given the possibility of an AGI working out how to improve itself into a superintelligence, it may be prudent – or even morally obligatory – to consider potentially feasible scenarios, and how serious or even existential threats may be avoided”.
7. Matt Yglesias talks a decent amount about AI risk and the rationalists, and is widely followed by people in and around the Biden administration. Ditto Ezra Klein. Among 150 Biden transition officials who had Twitter accounts in 2021, apparently “The most commonly followed political writers and reporters are [Nate] Silver (44.4% follow him), [Ezra] Klein (39.6%), Maggie Haberman (36.8%), Matthew Yglesias (25.7%), and David Frum (25%).”
8. Dominic Cummings has been super plugged into LW ideas like AGI risk for many years, and a 2021 Boris Johnson speech discusses existential risk and quotes Toby Ord.
9. A 2021 United Nations report mentions “existential risk” and “long-termism” by name, and recommends the “regulation of artificial intelligence to ensure that this is aligned with shared global values”.
10. Our community has had important representatives in the Biden administration: Jason Matheny (previously at FHI) left his role running IARPA to spend a year in various senior White House roles, before leaving to run RAND.
Note that in a fair number of cases I think I know the specific individuals who helped bring these things about (e.g., Stuart Russell), so I treat this more as an update about the ability of people in our network to make stuff like this happen, rather than as an update that there’s necessarily a big pool of people driving this issue who aren’t on our radar.
I do not think lies were told, exactly, but I think the world was deceived. I think the phrasing of the FLI open letter was phrased so as to continue that deception, and that the phrasing was the output of a political calculation.
That seems true to me. There’s definitely been a conscious effort over the years by many EAs and rationalists (including MIRI) to try to not make this a front-and-center political issue.
(Though the “political calculation” FLI made might not be about that, or might not be about that directly; it might be about avoiding alienating ML researchers, and/or other factors.)
Yes, lots of people will disagree with what it says, in various places. People who think their alignment technique will work. People who think AI is further in the future. And a dozen dumber disagreements that I will not mention. But people can’t evaluate those disagreements without having the base model, can’t evaluate the sides of a debate they don’t know is happening.
I very much agree with this.