Yes, AI research will be substantially curtailed if a lab causes a major disaster

There’s a narrative that Chapman and other smart people seem to endorse that goes:
People say a public AI disaster would rally public opinion against AI research and create calls for more serious AI safety. But the COVID pandemic killed several million people and wasted upwards of a year of global GDP. Pandemics are, as a consequence, now officially recognized as a non-threat that should be rigorously ignored. So we should expect the same outcome from AI disasters.
I’m pretty sure I have basically the same opinion and mental models of U.S. government, media, and politics as Eliezer & David, but even then, this argument seems like it’s trying too hard to be edgy.
Here’s another obvious historical example that I find much more relevant. U.S. anti-nuclear activists said for years that nuclear power wasn’t safe, and nuclear scientists replied over and over that the activists were just non-experts misinformed by TV, and that a meltdown was impossible. Then the Three Mile Island meltdown happened. The consequence of that accident, which didn’t even conclusively kill any particular person, was that anti-nuclear activists got nuclear power regulated in the U.S. to the point where building new plants is, as a rule, cost-prohibitive, even as the technology has advanced.
The difference, of course, between pandemics and nuclear safety breaches is that pandemics are a natural phenomenon. When people die from diseases, there are only boring institutional failures. In the event of a nuclear accident, the public, the government, and the media get scapegoats and an industry to blame. To imply that established punching bags like Google and Facebook would just walk away from causing an international crisis on the scale of the Covid pandemic strikes me as confusingly naive cynicism from some otherwise very lucid people.
If the media had been able to squarely and emotively pin millions of deaths on some Big Tech AI lab, we would have faced a near shutdown of AI research and maybe much of venture capital. Regardless of how performative our government’s efforts in responding to the problem were, they would at least succeed at imposing extraordinary costs and regulations on any new organization that looked, to a bureaucratic body, like it wanted to make anything similar. The reason such measures were not enforced on U.S. gain-of-function labs following Covid is that Covid did not come from U.S. gain-of-function labs, and the public is not smart or aware enough to know that they should update towards those being bad.
To be sure, politicians would do a lot of other counterproductive things too. We might still fail. But the long-term response to an unprecedented AI catastrophe would look a lot more like the national security establishment’s response to 9/11 than like our bungling response to the coronavirus. There’d be a TSA and a war in the wrong country, but there’d also be a DHS, and a vastly expanded NSA/CIA budget and “prerogative”.
None of this is to say that such an accident is likely to happen. I highly doubt any misaligned AI influential enough to cause a disaster on this scale would not also be in a position to just end us. But I do at least empathize with the people who hope that whatever DeepMind’s cooking, it’ll end up in some bungled state where it only kills 10 million people instead of all of us and we can maybe get a second chance.