Thanks for posting this!
There’s a lot here I agree with (which might not be a surprise). Since the example interventions are all/mostly technical research or outreach to technical researchers, I’d add that a bunch of more “governance-flavored” interventions would also potentially contribute.
One of the main things that might keep AI companies from coordinating on safety is that some forms of coordination—especially more ambitious coordination—could violate antitrust law.
One thing that could help would be updating antitrust law, or how it’s enforced, so that it doesn’t do a terrible job of balancing concerns about anticompetitive behavior against safety concerns.
Another thing that could help would be a standard-setting organization, since coordination on standards is often more accepted when it’s done in such a context.
[Added] Can standards be helpful for safety before we have reliable safety methods? I think so; until then, we could imagine standards on things like what types of training runs not to run, or when to stop a training run.
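To make that a bit more concrete, here’s a minimal Python sketch of how a lab’s training pipeline might operationalize such a standard. Everything here (the compute cap, the evaluation-score threshold, the function names) is invented for illustration, not drawn from any real standard; the point is just that “which runs not to run” and “when to stop a run” become mechanically checkable once a standard pins down thresholds.

```python
# Purely illustrative: one way a training pipeline might enforce a
# (hypothetical) standard on which runs to start and when to stop.
# All names and thresholds below are invented for illustration.

TRAINING_FLOP_CAP = 1e25    # hypothetical cap on total training compute
EVAL_STOP_THRESHOLD = 0.9   # hypothetical capability-eval score that triggers a stop


def may_start_run(estimated_flops: float, uses_prohibited_techniques: bool) -> bool:
    """Pre-run check: decline runs that exceed the compute cap or that
    use techniques the (hypothetical) standard prohibits."""
    return estimated_flops <= TRAINING_FLOP_CAP and not uses_prohibited_techniques


def should_stop_run(capability_eval_score: float) -> bool:
    """Mid-run check: halt if a periodic capability evaluation crosses
    the pre-committed threshold."""
    return capability_eval_score >= EVAL_STOP_THRESHOLD


if __name__ == "__main__":
    # A run under the cap with allowed techniques may proceed.
    assert may_start_run(estimated_flops=5e24, uses_prohibited_techniques=False)
    # A run over the cap may not.
    assert not may_start_run(estimated_flops=2e25, uses_prohibited_techniques=False)
    # An eval score above the threshold triggers a stop.
    assert should_stop_run(capability_eval_score=0.95)
```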
If some leading AI lab (say, Google Brain) shows itself to be unreceptive to safety outreach and coordination efforts, and if the lead that more safety-conscious labs have over this lab is insufficient, then government action might be necessary to make sure that safety-conscious efforts have the time they need.
To be more direct: I’m nervous that people will (continue to) overlook a promising class of time-buying interventions (government-related ones), largely because they’ve learned about government mostly from information sources (e.g., popular news) that are too coarse and unrepresentative to make promising government interventions salient. Some people respond that getting governments to do useful things is clearly intractable. But I don’t see how they can justifiably be so confident if they haven’t taken the time to form good models of government. At minimum, the US government seems clearly powerful enough (~$1.5 trillion discretionary annual budget, alliances with nearly all developed countries, thousands of nukes, the biggest military, an experienced global spy network, hosts ODA+, etc.) for its interventions to be worth serious consideration.