Arguments against AI risk, or arguments against the MIRI conception of AI risk?
I have heard a hint of a whisper of a rumour that I am considered a bit of a contrarian around here... but I am actually a little more convinced of AI threat in general than I used to be before I encountered Less Wrong. (In particular, at one time I would have said "just pull the plug out", but there is some mileage in the unknowability arguments.)
The short version of the argument against MIRI's version of AI threat is that it is highly conjunctive. The long version is long, a consequence of having a multi-stage argument with a fan-out of alternative possibilities at each stage.
For an argument against at least some of MIRI's technical agenda, see Paul Christiano's Medium post.