Hypothetically, suppose the following (throughout, assume “AI” stands for significantly superhuman artificial general intelligence):
1) if we fail to develop AI before 2100, various non-AI-related problems kill us all in 2100.
2) if we ever develop unFriendly AI (UFAI) before Friendly AI (FAI), UFAI kills us.
3) if we develop FAI before UFAI and before 2100, FAI saves us.
4) FAI isn’t particularly harder to build than UFAI is.
Given those premises, it’s true that UFAI isn’t a major existential risk, in that even if we do nothing about it, UFAI won’t kill us. But it’s also true that FAI is the best (indeed, the only) way to save us.
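Here's how I'm picturing the case split, just to make it explicit (a minimal sketch; the scenario labels, and the assumption that these three cases are exhaustive and mutually exclusive, are mine, not part of the premises):

```python
# Enumerate the three mutually exclusive cases the premises cover and
# the outcome each premise assigns to them.

SCENARIOS = {
    "no superhuman AI before 2100":     "extinct -- premise 1 (non-AI problems kill us in 2100)",
    "UFAI developed before FAI":        "extinct -- premise 2 (UFAI kills us)",
    "FAI developed first, before 2100": "saved   -- premise 3 (FAI saves us)",
}

for scenario, outcome in SCENARIOS.items():
    print(f"{scenario:35} -> {outcome}")

# Each case gets exactly one outcome, so the premises assign a consistent
# result everywhere; and survival shows up only in the FAI-first case,
# which is the sense in which FAI is the only thing that saves us.
```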
Are those premises internally contradictory in some way I’m not seeing?
No, you’re right. thomblake makes the same point. I just wasn’t thinking carefully enough. Thanks!