That wasn’t what I claimed; I proposed that the current, most promising methods of producing an FAI are far too likely to produce a UFAI to be considered safe.
Why do you think the whole website is obsessed with provably-friendly AI?
The whole point of MIRI is that pretty much every superintelligence that is anything other than provably safe is going to be unfriendly! This site is littered with examples of how terribly almost-friendly AI would go wrong! We don’t consider current methods merely “too likely” to produce a UFAI; we think they’re almost certain to produce a UFAI! (Conditional on creating a superintelligence at all, of course.)
So as much as I hate asking this question because it’s alienating, have you read the sequences?