This is an interesting point. But I’m not immediately convinced that this isn’t largely a matter of AI governance.
There is a long list of governance strategies that aren’t specific to AI but can help us handle perpetual risk. There is also a long list of strategies that are AI-specific. I think all of the things I mentioned under strategy 2 have AI-specific examples:
establishing regulatory agencies, auditing companies, auditing models, creating painful bureaucracy around building risky AI systems, influencing hardware supply chains to slow things down, and avoiding arms races.
And I think that some of the things I mentioned for strategy 3 do too:
giving governments powers to rapidly detect and respond to firms doing risky things with TAI, hitting killswitches involving global finance or the internet, cybersecurity, and generally being more resilient to catastrophes as a global community.
So ultimately, I won’t make claims about whether avoiding perpetual risk is mostly an AI governance problem or mostly a more general governance problem, but there are certainly a bunch of AI-specific things in this domain. I also think they might be a bit neglected relative to some of the strategy 1 stuff.