We tried to frame the discussion internally, i.e. without making additional assumptions that people may or may not agree with (e.g. moral realism). If we did the job right, the only assumptions made in the argument are those in the ‘singularity claim’ and the ‘orthogonality thesis’ - and the dilemma is that the one requires an assumption (general intelligence, in the singularity claim) that the other (the orthogonality thesis) must reject.
What we do say (see figure 1) is that two combinations are inconsistent:
a) general intelligence + orthogonality
b) instrumental intelligence + existential risk
So if one wants to keep the ‘standard argument’, one would have to argue that one of these two combinations, a) or b), is in fact consistent.