It seems unlikely that AI labs are going to comply with this petition. Supposing that this is the case, does this petition help, hurt, or have no impact on AI safety, compared to the counterfactual where it doesn’t exist?
All possibilities seem plausible to me. Maybe it gets ignored and simply doesn’t matter. Maybe it burns political capital or establishes a norm of “everyone ignores those silly AI safety people and nothing bad happens”. Maybe it raises awareness and does important work for building the AI safety coalition.
Modeling social reality is always hard, but has there been much analysis of what messaging one ought to use here, separate from the question of what policies one ought to want?