It wasn’t meant as a reply to a particular thing—mainly I’m flagging this as an AI-risk analogy I like.
On that theme, one thing “we don’t know if the nukes will ignite the atmosphere” has in common with AI risk is that the danger comes from reaching new configurations (e.g. temperatures of the sort a nuclear bomb produces inside the Earth’s atmosphere) that we have no experience with. That’s an entirely different question from “what happens with the nukes after we don’t ignite the atmosphere in a test explosion”.
I like thinking about coordination from this viewpoint.