Suppose you’re in middle school, and one day you learn that your teachers are planning a mandatory field trip, during which the entire grade will jump off of a skyscraper without a parachute. You approach a school administrator to talk to them about how dangerous that would be, and they say, “Don’t worry! We’ll all be wearing hard hats the entire time.”
Hearing that probably does not reassure you even a little bit, because hard hats alone would not nudge the probability of death below ~100%. It might actually make you more worried, because the fact that they have a prepared response means school administrators were aware of potential issues and then decided the hard hat solution was adequate. It’s generally harder to argue someone out of an incorrect solution to a problem than to argue them into believing the problem exists in the first place.
This analogy overstates the obviousness of (and my personal confidence in) the risk, but to a lot of alignment researchers it’s an essentially accurate metaphor for how ineffective they think OpenAI’s current precautions will turn out in practice, even if making a doomsday AI feels like a more “understandable” mistake.
Thank you! I think I understand this position a good deal more now.