One question that keeps kicking around in my mind: if someone’s true but unstated objection to the problem of AI risk is that superintelligence will never happen, how do you change their mind?
Note that superintelligence doesn’t by itself pose much of a risk. The risk comes from extreme superintelligence, together with variants of the orthogonality thesis and an intelligence that can achieve its superintelligence rapidly. The first two of these seem to be much easier to convince people of than the third, which shouldn’t be that surprising, because the third is really the most questionable. (At the same time, there seems to be a hard core of people who absolutely won’t budge on orthogonality. I disagree with such people on fundamental intuitions and other issues so deeply that I’m not sure I can model well what they are thinking.)
The orthogonality thesis, in the form “you can’t get an ought from an is”, is widely accepted, or at least widely regarded as a respectable position, in public discourse.
It is true that slow superintelligence is less risky, but that argument isn’t explicitly made in this letter.