What I mean is... the difference between “FAI is possible but difficult” and “FAI is impossible and all AI are uFAI” is like the difference between “A narrow subset of people go to heaven instead of hell” and “every human goes to hell”. Those two beliefs are mostly identical.
Whereas “FOOM doesn’t happen and there is no reason to worry about AI so much” is analogous to “belief in an afterlife is unfounded in the first place”. That’s a massively different idea.
In one case, you’re committing a little heresy within a belief system. In the other, the entire theoretical paradigm was flawed to begin with. If it turns out that “all AI are uFAI” is true, then LessWrong/MIRI would still be a lot more correct about things than most other people interested in futurology/transhumanism, because they got the basic theoretical paradigm right. (Just as, if it turned out hell existed but not heaven, religionists of many stripes would still have reason to be fairly smug about the accuracy of their predictions, even if none of the actions they advocated made a difference.)
like the difference between “A narrow subset of people go to heaven instead of hell” and “every human goes to hell”. Those two beliefs are mostly identical.
Mostly identical as far as theology is concerned, but very different in terms of the optimal action. In the first case, you want (from a selfish-utilitarian standpoint) to ensure that you’re in the narrow subset. In the second, you want to overthrow the system.