We have discussed two forms of misuse: individuals or small groups using AIs to cause a disaster, and governments or corporations using AIs to entrench their influence. To avoid either of these risks being realized, we will need to strike a balance in terms of the distribution of access to AIs and governments’ surveillance powers. We will now discuss some measures that could contribute to finding that balance.
None of the following suggestions seem to concern themselves with limiting governments' surveillance powers, so this sentence about "finding a balance" seems rather misleading. You have four classes of "suggestions," all of which concern individual misuse or corporate misuse. Perhaps you left some out by accident?
Furthermore, some of your suggestions would almost certainly increase corporate influence / government power. For instance:
One way to mitigate this risk is through structured access, where AI providers limit users’ access to dangerous system capabilities by only allowing controlled interactions with those systems through cloud services [27] and conducting know-your-customer screenings before providing access [28].… Lastly, AI developers should be required to show that their AIs pose minimal risk of serious harm prior to open sourcing them.
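For what it's worth, the "structured access" idea in the quoted passage is easy to picture concretely. Here is a minimal, purely hypothetical sketch (the names `Customer`, `handle_request`, and the capability tiers are my own illustration, not anything from the paper or any real provider's API) of what a provider-side gate might look like: the model is only reachable through a hosted service that checks a know-your-customer record and a per-customer capability tier before forwarding any request.

```python
# Hypothetical sketch of "structured access": the model is only reachable
# through a provider-controlled service that (a) checks a know-your-customer
# record and (b) refuses requests for capability tiers the caller is not
# cleared for. All names are illustrative.

from dataclasses import dataclass


@dataclass
class Customer:
    customer_id: str
    kyc_verified: bool        # passed identity / screening checks
    allowed_tiers: set[str]   # e.g. {"general"} or {"general", "bio", "cyber"}


class AccessDenied(Exception):
    pass


def run_hosted_model(prompt: str, tier: str) -> str:
    # Placeholder for the provider's internal inference call.
    return f"[{tier}] response to: {prompt}"


def handle_request(customer: Customer, capability_tier: str, prompt: str) -> str:
    """Gate every request before it ever reaches the model."""
    if not customer.kyc_verified:
        raise AccessDenied("customer has not completed know-your-customer screening")
    if capability_tier not in customer.allowed_tiers:
        raise AccessDenied(f"customer is not cleared for tier '{capability_tier}'")
    # Only now is the prompt forwarded to the hosted model; the weights never
    # leave the provider's infrastructure, so access can be revoked later.
    return run_hosted_model(prompt, tier=capability_tier)
```

The property that matters for this discussion is that the weights stay on the provider's servers, so access can be monitored and revoked, which is exactly why the approach concentrates control in whoever runs the service.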
If pursued fully, the last of these measures (requiring developers to show minimal risk of serious harm before open sourcing) would cripple or render impossible open-source AI of any capability at all, much as requiring a car, gun, or knife company to show that its products cannot be misused would (obviously) make those businesses impossible. Making open-source AI impossible in turn increases corporate power, because the big labs would be the only ones permitted to sell AI, which increases centralization. And because a handful of corporations are easier to keep a hand on and surveil than a diffuse open-source ecosystem, this in turn increases government power.
Maybe you think that’s worth it for other reasons! But that’s a trade-off that should be addressed explicitly.
Governments using AI to maximize their power is a bad consequence that can happen even without great breakthroughs in AI or speculative recursive self-improvement. It's worth thinking about and making plans to avoid. I'm really happy that you highlight it as a potential problem, but I think the measures you propose don't actually help avoid it in any way; they all tilt in the direction of greater centralization.