I’m interested in what people think are the strongest arguments against this view. Here are a few counterarguments that I’m aware of:
1. Empirically, the AI-focused scaling labs seem to care quite a lot about safety and to make credible safety commitments. If anything, they seem to be “ahead of the curve” compared to larger tech companies or governments.
2. Government/intergovernmental agencies, and to a lesser degree larger companies, are bureaucratic, sclerotic, and generally less competent.
3. The AGI safety issues that EAs worry about the most are abstract and speculative, so having a “normal” safety culture isn’t as helpful as buying into the more abstract arguments, which you might expect to be easier for newer companies.
4. Scaling labs share “my” values. So, AI doom aside and all else equal, you might still want scaling labs to “win” over democratically elected governments or populist control.