Reasons that scaling labs might be motivated to sign onto AI safety standards:
Companies wary of being sued for unsafe deployment that causes harm might want to be able to credibly demonstrate that they did their best to prevent that harm.
Big tech companies like Google might not want to risk premature deployment, but might feel forced to if smaller companies with less to lose undercut their “search” market. Standards that prevent unsafe deployment would remove this competitive pressure.
However, AI companies that don’t believe in AGI x-risk might tolerate more x-risk than safety standards this community would consider ideal. Also, I think insurance contracts are unlikely to account for x-risk appropriately, if the market is anything to go by.