I can’t seem to figure out the right keywords to Google, but off the top of my head, some other candidates: banning CFCs (maybe easier? I don’t know enough), the taboo against chemical weapons (easier), and nuclear non-proliferation (probably easier?).
I think Anders Sandberg did research on this at one point, and I recall him summarizing his findings as “things are easy to ban as long as nobody really wants to have them”. IIRC, the things that went into that category were chemical weapons (they’re actually not very effective in modern warfare), CFCs (they were relatively straightforward to replace with equally effective alternatives), and human cloning.
This is my impression as well, but it’s very possible that we’re looking at the wrong reference class (i.e. it’s plausible that many “sane” things large governments have done are not salient). Maybe some of the big social welfare/early environmental protection programs?
On welfare: Bismarck is famous as a social welfare reformer, but those reforms were famously designed to undermine socialism and appease the working class, a result any newly formed, volatile state would enjoy. I expect social welfare pays off for most countries on the same basis.
On environmentalism today: we see significant European advances in green energy right now, but these are accompanied by large price hikes in natural energy resources, which provides quite an incentive. Early large-scale state-driven environmentalism (e.g. Danish wind energy R&D and deployment) was driven by the 1970s oil crises in the same fashion. And then there are of course the democratic incentives, i.e. if enough of the population demands environmentalism, then we’ll do it (though 3.5% population-wide active participation seems to work as well).
And that’s just describing state-level shifts. Even revolutions have been driven by non-ideological incentives. E.g. the American revolution started with a staged “throwing tea in the ocean” act by tea smugglers, because London had reduced the tax on tea for the East India Company, cutting into their profits (see myths and the article about smugglers’ incentives). Prolonging the revolution also became a large source of personal profit for Washington.
I’d guess the very slow rate of nuclear proliferation has been much harder to achieve than banning gain-of-function research would be, since, in the absence of intervention, incentives to get nukes would have been much bigger than incentives to do gain-of-function research.
I agree that nuclear non-proliferation is probably harder than a ban on gain-of-function research. But in this case, the US and USSR both had a strong incentive to discourage nuclear proliferation, and had enough leverage to coerce smaller states not to work on nuclear weapons development (e.g. one or the other was the security provider for the current government of said states).
Ditto with chemical weapons, which seem to have lost battlefield relevance in conflicts between major powers (i.e. they did not actually break the trench-warfare stalemate in WWI even when deployed on a massive scale, and are mainly useful as a weapon of terror against weaker opponents). At that point, the moral arguments plus the downside risk of chemical attacks against their own citizens shifted the calculus for major powers. The major powers were then able to enforce the ban somewhat successfully on smaller countries.
I do think that banning GoF research (especially on pathogens that already have caused or are likely to cause a human pandemic) should be about as hard as the chemical weapons case—there’s not much benefit to the research, and the downside risk is massive. My guess is that a generally sane response to COVID is harder, since it required getting many things right, though I think the median country’s response was much worse than the difficulty of the problem would lead you to expect.
Unfortunately, I think AGI-relevant research has way more utility than many of the military technologies we’ve failed to ban. Plus, it’s hugely financially profitable, instead of being expensive to maintain. So the problem for AGI is harder than the problems we’ve actually seen solved via international coordination?
I agree with a lot of that. Still, if

nuclear non-proliferation [to the extent that it has been achieved] is probably harder than a ban on gain-of-function

then that’s sufficient to prove Daniel’s original criticism of the OP—that governments can [probably] fail at something yet succeed at some harder thing.
(And on a tangent, I’d guess a salient warning shot—which the OP was conditioning on—would give the US + China strong incentives to discourage risky AI stuff.)
Also, on top of the taboo against chemical weapons, there was the verified destruction of most chemical weapons globally.