Don’t forget—even if unfriendly AI weren’t a major existential risk, Friendly AI would still potentially be the best way to combat other existential risks.
It’s probably the best long-term approach. But if you estimate it will take 50 years to get FAI, and that some existential risks have a significant probability of materializing within 10 or 20 years, then you had better try to address those without relying on FAI, or you’re likely never to reach the FAI stage.
Among 7 billion humans, it makes sense for some individuals to focus on FAI now, since it’s a hard problem and we have to start early; but it’s also reasonable that not all of us focus on FAI, and that we also work on other ways to mitigate the existential risks we estimate are likely to occur before FAI/uFAI.
How do you imagine a hypothetical world where uFAI is not dangerous enough to kill us, but FAI is powerful enough to save us?
Hypothetically suppose the following (throughout, assume “AI” stands for significantly superhuman artificial general intelligence):
1) if we fail to develop AI before 2100, various non-AI-related problems kill us all in 2100.
2) if we ever develop unFriendly AI before Friendly AI, UFAI kills us.
3) if we develop FAI before UFAI and before 2100, FAI saves us.
4) FAI isn’t particularly harder to build than UFAI is.
Given those premises, it’s true that UFAI isn’t a major existential risk, in that even if we do nothing about it, UFAI won’t kill us. But it’s also true that FAI is the best (indeed, the only) way to save us.
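Since it’s easy to lose track of the branches, here is a minimal sketch (in Python) of the case analysis those premises imply. The survives() helper, the scenario labels, and the example years are purely illustrative assumptions, not anything beyond premises 1–3 themselves; the point is just that the only branch that comes out alive is the one where FAI arrives before UFAI and before 2100.

```python
# A minimal sketch of the case analysis implied by premises 1-3.
# The survives() helper and the example years are hypothetical illustrations.

def survives(fai_year, ufai_year):
    """Return True iff humanity survives, given the year (or None)
    in which FAI and UFAI are first built, under premises 1-3."""
    DEADLINE = 2100  # premise 1: non-AI problems kill us in 2100 if no AI by then

    # Premise 2: if UFAI is ever built before FAI, UFAI kills us.
    if ufai_year is not None and (fai_year is None or ufai_year < fai_year):
        return False
    # Premise 3: FAI built before UFAI and before 2100 saves us.
    if fai_year is not None and fai_year < DEADLINE:
        return True
    # Premise 1: otherwise nothing stops the non-AI risks in 2100.
    return False

# The three qualitatively distinct cases:
print(survives(fai_year=2060, ufai_year=None))   # FAI first, in time -> True
print(survives(fai_year=2060, ufai_year=2050))   # UFAI first         -> False
print(survives(fai_year=None, ufai_year=None))   # no AI by 2100      -> False
```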
Are those premises internally contradictory in some way I’m not seeing?
No, you’re right. thomblake makes the same point. I just wasn’t thinking carefully enough. Thanks!
I don’t. Just imagine a hypothetical world where lots of other things are much more certain to kill us much sooner, if we don’t get FAI to solve them soon.