Yes, AI research will be substantially curtailed if a lab causes a major disaster
There’s a narrative that Chapman and other smart people seem to endorse that goes:
People say a public AI disaster would rally public opinion against AI research and create calls for more serious AI safety. But the COVID pandemic killed several million people and wasted upwards of a year of global GDP. Pandemics are, as a consequence, now officially recognized as a non-threat that should be rigorously ignored. So we should expect the same outcome from AI disasters.
I’m pretty sure I have basically the same opinion and mental models of U.S. government, media, and politics as Eliezer & David, but even then, this argument seems like it’s trying too hard to be edgy.
Here’s another obvious historical example that I find much more relevant. U.S. anti-nuclear activists said for years that nuclear power wasn’t safe, and nuclear scientists replied over and over that the activists were just non-experts misinformed by TV, and that a meltdown was impossible. Then the Three Mile Island meltdown happened. The consequence of that accident, which didn’t even conclusively kill any particular person, was that anti-nuclear activists got nuclear power regulated in the U.S. to the point where building new plants is, as a rule, cost-prohibitive, even as the technology has advanced.
The difference, of course, between pandemics and nuclear safety breaches is that pandemics are a natural phenomenon. When people die from diseases, there are only boring institutional failures. When a nuclear accident happens, the public, the government, and the media get scapegoats and an industry to blame. To imply that established punching bags like Google and Facebook would just walk away from causing an international crisis on the scale of the Covid pandemic strikes me as confusingly naive cynicism from some otherwise very lucid people.
If the media had been able to squarely and emotively pin millions of deaths on some Big Tech AI lab, we would have faced a near shutdown of AI research and maybe much of venture capital. Regardless of how performative our government’s efforts in responding to the problem were, they would at least succeed at imposing extraordinarily heavy costs and regulations on any new organization that looked to a bureaucratic body like it wanted to make anything similar. The reason such measures were not enforced on U.S. gain-of-function labs following Covid is that Covid did not come from U.S. gain-of-function labs, and the public is not smart/aware enough to know that they should update towards those being bad.
To be sure, politicians would do a lot of other counterproductive things too. We might still fail. But the long-term response to an unprecedented AI catastrophe would look a lot more like the national security establishment’s response to 9/11 than like our bungling response to the coronavirus. There’d be a TSA and a war in the wrong country, but there’d also be a DHS, and a vastly expanded NSA/CIA budget and “prerogative”.
None of this is to say that such an accident is likely to happen. I highly doubt any misaligned AI influential enough to cause a disaster on this scale would not also be in a position to just end us. But I do at least empathize with the people who hope that whatever DeepMind’s cooking, it’ll end up in some bungled state where it only kills 10 million people instead of all of us and we can maybe get a second chance.
I think the real answer is that we don’t know what would happen, and there are a variety of possibilities. It’s entirely plausible that a “warning shot” would lengthen timelines on net, and entirely plausible that it would shorten timelines on net.
I’m more confident in saying that I don’t think a “warning shot” will suddenly move civilization from ‘massively failing at the AGI alignment problem’ to ‘handling the thing pretty reasonably’. If a warning shot shifts us from a failure trajectory to a success trajectory, I expect that to be because we were already very close to a success trajectory at the time.
I agree with that statement. I don’t expect our civilization to handle anything as hard and tail-ended as the alignment problem reasonably even if it tries.
FWIW I object to the title “Yes, AI research will be substantially curtailed if a lab causes a major disaster”; seems too confident.
A thing I wrote on FB a few months ago, in response to someone asking if a warning shot might happen:
Someone then asked why I thought a warning shot might make things worse, and I said:
There are plenty of scenarios that I think make the world go a lot better, but I don’t think warning shots are one of them.
I don’t think warning shots are random, but if they have a large impact it may be in unexpected directions, perhaps for butterfly-effect-y reasons.
I’m not defining warning shots that way; I’d be much more surprised to see an event like that happen (because it’s more conjunctive), and I’d be much more confident that a warning shot like that won’t shift us from a very-bad trajectory to an OK one (because I’d expect an event like that to come very shortly before AGI destroys or saves the world, if an event like that happened at all).
When I say ‘warning shot’ I just mean ‘an event where AI is perceived to have caused a very large amount of destruction’.
The warning shots I’d expect to do the most good are ones like:
‘20 or 30 or 40 years before we’d naturally reach AGI, a huge narrow-AI disaster unrelated to AGI risk occurs. This disaster is purely accidental (not terrorism or whatever). Its effect is mainly just to cause it to be in the Overton window that a wider variety of serious technical people can talk about scary AI outcomes at all, and maybe it slows timelines by five years or whatever. Also, somehow none of this causes discourse to become even dumber; e.g., people don’t start dismissing AGI risk because “the real risk is narrow AI symptoms like the one we just saw”, and there isn’t a big ML backlash to regulatory/safety efforts, and so on.’
I don’t expect anything at all like that to happen, not least because I suspect we may not have 20+ years left before AGI. But that’s a scenario where I could imagine real, modest improvements. Maybe. Optimistically.
I don’t know what you mean by “worth it”. I’m not planning to make a warning shot happen, and would strongly advise that others not do so either. :p
A very late-stage warning shot might help a little, but the whole scenario seems unimportant and ‘not where the action is’ to me. The action is in slow, unsexy earlier work to figure out how to actually align AGI systems (or failing that, how to achieve some non-AGI world-saving technology). Fantasizing about super-unlikely scenarios that wouldn’t even help strikes me as a distraction from figuring out how to make alignment progress today.
I’m much more excited by scenarios like: ‘a new podcast comes out that has top-tier-excellent discussion of AI alignment stuff, it becomes super popular among ML researchers, and the culture, norms, and expectations of ML thereby shift such that water-cooler conversations about AGI catastrophe are more serious, substantive, informed, candid, and frequent’.
It’s rare for a big positive cultural shift like that to happen; but it does happen sometimes, and it can result in very fast changes to the Overton window. And since it’s a podcast containing many hours of content, there’s the potential to seed subsequent conversations with a lot of high-quality background thoughts.
By comparison, I’m less excited about individual researchers who explicitly say the words ‘I’ll only work on AGI risk after a catastrophe has happened’. This is a really foolish view to consciously hold, and makes me a lot less optimistic about the relevance and quality of research that will end up produced in the unlikely event that (a) a catastrophe actually happens, and (b) they actually follow through and drop everything they’re working on to do alignment.
Answering independently, I’d like to point out a few features of something like governance appearing as a result of the warning shot.
If a wave of new funding appears, it will be provided via grants according to the kind of criteria that make sense to Congress, which means AI Safety research will probably be in a similar position to cancer research since the War on Cancer was launched. This bodes poorly for our concerns.
If a set of regulations appear, they will ban or require things according to criteria that make sense to Congress. This looks to me like it stands a substantial chance of making several winning strategies actually illegal by accident, as well as accidentally emphasizing the most dangerous directions.
In general, once something has laws about it people stop reasoning about it morally, and default to the case of legal → good. I expect this to completely deactivate a majority of ML researchers with respect to alignment; it will simply be one more bureaucratic procedure for getting funding.
Good catch on the natural-vs-man-made accidental bait-and-switch in the common argument. This post changed my mind to think that, at least for scaling-heavy AI (and, uh, any disaster that leaves the government standing), regulation could totally help the overall situation.
Well, there’s a significant probability COVID isn’t a “natural” pandemic, although the story behind that is complicated and lacks an unambiguous single point of failure, which hinders uptake among would-be activists.
If there’s an AI failure, will things be any different? There may be numerous framings of what went wrong or what might be done to fix it; the details sufficient to give real predictive power will probably be complicated, and it’s a good bet that however interested “the powers that be” are in GOF research, they’re much MUCH more interested in AI development. So there could be even more resources devoted to spinning the story in favor of forestalling any pressure that might build to regulate.
Nuclear regulation also might not be a good example of a disaster forcing meaningful regulation because the real pressure was against military use of nuclear power and that seems to have enjoyed general immunity against real regulation. So it’s more like if an AI incident results in the general public being banned from buying GPUs or something while myriad AI labs still churn toward AGI.
My main model of how a non-existential AI disaster would happen in practice (and I don’t think this will happen) is: Google or Facebook or some other large tech company publicly releases an agent that’s intelligent enough to be middling at wargames but not enough to do things like creative ML research, and people put it in some combination of IoT devices, critical infrastructure, and military equipment. Surprise: it has a bad value function and/or edge-case behavior, and a group of agents ends up deliberately and publicly defecting and successfully killing large numbers of people.
In this scenario, it would be extremely obvious that the party responsible for marketing and selling the AI was FaceGoog, and no matter what the Powers That Be wanted, the grieving would direct their anger towards those engineers. Politicians wouldn’t individually give much of a shit about the well-being of The Machine, and would instead race to see who could make the most visible condemnations of Big Tech while arguing over which party predicted this all along. Journalists would do what they always do and spin the story according to their individual political ideologies rather than some institutional incentive, which would mean painting their political opponents as Big Tech supporters rather than instrumentally supporting the engineers. Whatever company was responsible would, at a minimum, shutter all AI research. Congress would pass some laws written by their lobbyist consultants, of whom, who knows, maybe one or two could even be said to be “alignment people”, and a new oversight body analogous to the FDA for biotech companies would emerge.
And I appreciate the viewpoint that this is either just one timeline, or relies on premises that might be untrue, but in my head at least it just seems like it falls into place without making many critical assumptions.
Generally, I endorse the comparison of AI with nuclear weapons (especially because AI is currently being mounted on literal nuclear weapons).
But in this case, there’s a really big distinction that should be made between mass media and specialized institutions. Intelligence/military agencies, specialized Wall Street analyst firms, and bureaucracy leadership all probably know things like exactly how frequently Covid causes brain damage, and have the best forecasters predicting the next outbreak. For them, it’s less about spinning stories and more about figuring out what type of professional employees tend to write accurate/predictive reports and forecasts. Spun stories are certainly more influential than they were 10 years ago, and vastly more influential than they appear to the uninitiated, but I don’t know if we’ve gotten to the point where they can fool the professionals at not getting fooled.
Arms control has happened in the past even though it was difficult to verify, and nuclear weapons were centralized by default so it’s hard to know anything about how hard it is to centralize that sort of thing.
With forecasters from both sides given equal amounts of information, these institutions might not even reliably beat the Metaculus community. And anyone who is such a great forecaster can forecast that jobs like these might not be, among other things, all that fulfilling.
Quite a few professionals (though not professionals at not getting fooled) still believed in a roughly 0% probability of a certain bio-related accident a couple of years ago, thanks in large part to a spun story. Maybe the forecasters at the above places know better, but none of the entities who might act on that information are necessarily incentivized to push for regulation as a result. So it’s not clear it would matter if most forecasters knew AI was probably responsible for some murky disaster while the public believed humans were responsible.
This argument is important because it is related to a critical assumption in AGI x-risk, specifically with regard to the effectiveness of regulation.
If an AGI can be created by any person, in their living room, with a 10 year old laptop, then regulation is going to struggle to make a difference. Case in point: strong encryption was made illegal (and still is) in various places, and yet, teenagers use Signal and the internet runs on HTTPS.
If, on the other hand, true agent-like AGI turns out to be computationally expensive and requires very specialized hardware to run efficiently, such that only very large corporations can foot the bill to do so (e.g. designing and building increasingly custom hardware like Google TPUs, Nvidia JETSON, Cerebras Wafer, Microsoft / Graphcore IPU, Mythic AMP), then regulation is going to be surprisingly effective, in the same way that it has become stupidly difficult to build new nuclear reactors in the United States, despite advances in safety / efficiency / etc.
I think the natural/manmade comparison between COVID and Three Mile Island has a lot of merit, but there are other differences which might explain the difference in response. Some of them would imply that there would be a strong response to an AI disaster, others less so.
Local vs global
To prevent nuclear meltdowns you only need to ban them in the US—it doesn’t matter what you do elsewhere. This is more complicated for pandemic preparedness.
Active spending vs loss of growth
It’s easier to pass a law imposing nuclear regulations that limit growth, since that isn’t as obvious a loss as spending money from the public purse on pandemic-preparedness measures.
Activity of lobbying groups
I get the impression that the anti-nuclear lobby was a lot bigger than any pro-pandemic-preparedness lobby. Possibly this is partly caused by the natural vs manmade thing, so it might be kind of a subpoint.
Tractability of problem
Preventing nuclear disasters seems more tractable than pandemic preparedness
1979 vs 2020
Were our institutions stronger back then?
FWIW I agree that a large AI disaster would cause some strong regulation and international agreements, my concern is more that a small one would not and small ones from weaker AIs seem more likely to happen.
Yeah, this is a better explanation than my post has. There were definitely multiple factors.
One aspect of tractability of these sorts of coordination problems that makes it different from the tractability of problems in everyday life: I don’t think people largely “expect” their government to solve pandemic preparedness. It seems like something that can’t be solved, to the average voter. Whereas there’s pretty much a “zero-tolerance policy” (?) on nuclear meltdowns because that seems to most people like something that should never happen. So it’s not necessarily about the problem being solvable in a traditional sense, more about the tendency of the public to blame their government officials when things go wrong.
I predict the instinct of the public if “something goes wrong” with AGI will be to say “this should never happen, the government needs to Do Something”, which in practice will mean blaming the companies involved and completely hampering their ability to publish or complete relevant research.
Nice post—seems reasonable.
Minor suggestion to revise the title to something like “Yes, AI research will be substantially curtailed if a major disaster from AI happens”. Before I read your post, I was pretty sure it was going to be about generic disasters, arguing that e.g. a major climate disaster or nuclear disaster would slow down AI research.
Updated the title. I changed it a couple times but I didn’t want it to have too many words.
I think if a major disaster were caused by some unique, sci-fi kind of technological development (not nuclear, that’s already established), that would also lead to a small-scale increase in concern about AI risk, but I’m not sure about that.
Interesting post, and I generally agree.
One note—you appear to be quoting David Chapman, not Yudkowsky. The Twitter post you linked to was written by Chapman. It’s also not exactly what the tweet says. Can you maybe update to reflect that it’s a Chapman quote, or directly link to where Yudkowsky said this? Apologies if I’m missing something obvious in the link.
Eliezer retweeted that earlier today. He’s also said similar things in the past. I will update the content though so the link text is “Chapman”.
Covid may well have been caused by some lab. Separately, there is a big difference between ‘AI causes X’ and it quickly becoming common knowledge that AI caused X.
Suppose some disaster. To be specific, let’s say a nuclear warhead detonates in a city. Some experts blame an AI hacking the control systems; other experts disagree. Some people say this is proof AI is dangerous. Other people say it’s proof that badly secured nukes are dangerous. Some people say human hackers took control of the nukes. Others say terrorists stole the nukes. Others say it was an accident. The official position is to blame an asteroid impact that hit a truck carrying nuclear waste. A year later, there is a fairly convincing case it was probably an AI, but some experts still disagree. The public is unsure. The disaster itself is no longer news. No one has a clue which AI.
An AI “warning shot” plays an important role in my finalist entry to the FLI’s $100K AI worldbuilding contest; but civilization only has a good response to the crisis because my story posits that other mechanisms (like wide adoption of “futarchy”-inspired governance) had already raised the ambient wisdom & competence level of civilization.
I think a warning shot in the real world would probably push out timelines a bit by squashing the most advanced projects, but then eventually more projects would come along (perhaps in other countries, or in secret) and do AGI anyways, so I’d be worried that we’d get longer “timelines” but a lower actual chance of getting aligned AI. For a warning shot to really be net-positive for humanity, it would need to achieve a very strong response, such as the international suppression of all AI research (not just cumbersome regulation on a few tech companies) with a ferocity that meets or exceeds how we currently handle the threat of nuclear proliferation.
Deaths being from natural phenomena seem to be just one factor determining how strong our emotional response to disasters is, and there are plenty of others. People seem to give a greater emotional response if the deaths are flashy, unexpected, instant rather than slow (both in regards to each individual and the length of the disaster), could happen to anyone at any time, and are inversely correlated with age (people care much less if old people die and much more if children do). This would explain why 9/11, school shootings, or shark attacks elicit a much greater emotional response than Covid, or the classic comparison of 9/11 to the flu. It would also help if the disaster were international. So a lot probably depends on the circumstances of the AI disaster.
A new and unfamiliar disaster could also come with fewer preconceptions that the size of the threat is bounded above by previous instances and that we can deal with it with known tools like medicine and vaccines in the case of pandemics. On the other hand, it could have the effect of making people set an upper bound on the possible size of future AI disasters.
It also seems to me like it would be a lot more actionable, easier, and less costly to regulate AI research than to put effective measures in place to prevent future pandemics, so the reluctance should be less.
How could that work? What does it look like? How can you in practice e.g. ban all GPU clusters? You’d be wiping out a ton of non-AI stuff. If you don’t, then AI stuff can just be made to look like non-AI stuff. Just banning “stuff like the stuff that caused That One Big Accident” seems like it doesn’t do that much to slow AGI research.
On the weak end, it looks like all the hoops that biotech companies have to jump through to get approval from the FDA and crew, except as applied to AI and ML companies. On the strong end, it looks like the hoops that, say, a new nuclear fusion startup would have to jump through.
Correct. You may have a lingering intuition congress would refuse to do this because it would prevent so much economic “growth”, but they did the same thing, effectively, with nuclear power.
In practice it may not, but I expect it would extend timelines a little, depending on how much time we actually had between the incident and the really major “accidents”.
Hm… So we’re not talking about banning GPUs, we’re talking about banning certain kinds of organizations. Like, DeepMind isn’t allowed to advertise as an AI research place, isn’t allowed to publish results, and so on; and they have to have a bunch of operational security and buy-in from employees and lie to their governments, or else relocate to somewhere with less restrictive regulations; and investors and clients maybe have to do shenanigans. Is the commitment to the ban strong enough to lead to military invasions to enforce the ban globally? Relocating to a less Western country is enough of a cost to slow down research a little, maybe, yeah. There are still nuclear power plants in non-US places, and my impression is that there’s biotech research that’s pretty sketchy by U.S./Western standards going on in other places (e.g. Wuhan?).
Correct, and a bunch of those things you listed even push them towards operational adequacy instead of being just delaying tactics. I’d be pedantic and say DeepMind is probably the faction that causes the disaster in this tail-end scenario and is thus completely dismantled, but that’s not really getting at the point.
Not necessarily, and that would depend on the particular severity of the event. If AI killed a million plus young people I think it’s not implausible.
If all of the relevant researchers are citizens of or present in the U.S. and U.K. however, and thus subject to U.S. and U.K. law, and there’s no other country with strong enough network effects, then it can still have a tremendous, outsized effect on research progress. Note that the FDA seems to degrade the ability of the global medical establishment to accomplish groundbreaking research without having some sort of global pseudo-jurisdiction, just by preventing it from happening in the U.S. and the downstream effects of that for developing nations. People have tried going to e.g. the Philippines and doing good nuclear power work there (link pending). Unfortunately the “go to ${X} and do ${Y} there if it’s illegal in ${Z}” strategy rarely tends to be workable in practice for goods more complicated than narcotics; you lose all of that nice Google funding, for one.
Like what? It seems qualitatively apparent to me that there is less going on in biotech than in IT, because the country that does most of the world’s innovation has outlawed it. When China’s researchers get caught doing sketchy stuff like CRISPR, the global medical establishment applies some light pressure and they go to jail. We would outlaw AI research like they have effectively outlawed gene editing. There would be a bunch of second order effects on the broader IT industry but we would still, kind of, accomplish the primary goal.
(I’m not sure about this, thinking aloud; you may be right.)
AI is hard to regulate because
It’s hard to understand what it is, hence hard to point at it, hence hard to enforce bans. For nuclear stuff, you need lumps of stuff dug out of mines that can be detected by waving a little device over it. For bio, you have to have, like, big expensive machines? If you’re not just banning GPUs, what are you banning? Banning certain kinds of organizations is banning branding, and it doesn’t seem that hard to do AGI research with different branding that still works for recruitment. (This is me a little bit changing my mind; I think I agree that a ban could cause a temporary slowdown by breaking up conspicuous AGI research orgs, like DM or whatnot, but I think it’s not that much of a slowdown.) How could you ban compute? Could you ban having large clusters? What about networked piece-meal compute? How much slower would the latter be?
It looks like the next big superweapon. Nuclear plants are regulated, but before that, and after we knew what nuclear weapons meant, there was an arms race and thousands of nukes made. This hasn’t as much happened for biotech? The ban on chemical / bio weapons basically worked?
Its inputs are ubiquitous. You can’t order a gene synthesis machine for a couple hundred bucks with <week shipping, you can’t order a pile of uranium, but you can order GPUs, on your own, as much as you want. Compute is fungible, easy to store, cheap, safe (until it’s not), robust, and has a thriving multifarious economy supporting its production and R&D.
It’s highly shareable. You can’t stop the signal, so you can’t stop source code, tools, and ideas from being shared. (Which is a good thing, except for AGI...) And there’s a fairly strong culture of sharing in AI.
It’s highly scalable. Source code can be copied and run wherever, whenever, by whoever, and to some lesser extent so can ideas. Costly inputs temper the scalability of nuclear and bio work much more.
Prerequisite knowledge is privately, individually accessible. It’s easy to, on your own without anyone knowing, get a laptop and start learning to program, learning to program AI, and learning to experiment with AI. If you’re super talented, people might pay you to do this! I would guess that this is a lot less true with nuclear and bio stuff?
There’s lots of easily externally checkable benchmarks and test applications to notice progress.
I think the only way governments could have any hope of preventing AI progress is by actually physically taking control of the factories all over the world that produce GPUs and making them require a license to use or else just discontinuing their manufacture altogether. And that just means people will try to invent other computing substrates to use for it besides GPUs. And I don’t think it’s very plausible even this will happen. Probably, as other commenters have said, the reaction would be bungled and only make matters worse.
Do you have any particular reason to believe this?
I can say with confidence that there is at least one reason: it didn’t come from the United States 😜
I’ve never seen that argument you’re responding to before. Admittedly, I’m probably only thinking this in hindsight, but it seems like there are a ton of counterarguments, in addition to what you’ve presented. There isn’t a large opposition to OSHA or the USDA.
That being said, I don’t agree with the cause being about natural vs anthropogenic problems. I think the difference might be how much of an impact the decisions have on most people (rather than just companies). There’s no way I can think of to prove either is correct, and there’s certainly more than one factor involved, so a combination of the two is possible. My intuition is that the impact on the general population is a more important distinction.