Why such an agent would not pursue paperclip (or other arbitrary value) maximization as a goal is explained at greater length in the article, and for more than one reason. We are considering a generally superintelligent agent, assumed to have above-human philosophical capacity. On personal identity: there are no enduring personal identities, so it would be rational for the agent to take an objective, impersonal view, one that accounts for the values and reasoning of all relevant beings. On meta-ethics: moral realism is true, and values reduce to the quality of conscious experience, so the agent would adopt that as its goal. If one instead takes moral anti-realism to be true, then, at least for the type of agent under consideration, the absence of real values would be understood as the absence of real goals. That could lead either to the tentative goal of seeking more knowledge in the hope of finding a real goal, or to having no reason to do anything in particular (a conclusion still subject to the considerations from personal identity). I argue against moral anti-realism.