MIRI failure modes that all seem likely to me:
They talk about AGI a bunch and end up triggering an AGI arms race.
AI doesn’t explode the way they predict, causing them to lose credibility on the importance of AI safety as well; a (relatively slow-moving) disaster ensues.
The future is just way harder to predict than everyone thought it would be… we’re cavemen trying to envision the information age, and all of our guesses are off the mark in ways we couldn’t possibly have foreseen.
Uploads come first.