What sort of cognitive and physical actions would make you think a robot is superintelligent?
For general superintelligence, demonstrated performance in all cognitive areas that surpasses the best human performance. This naturally includes philosophy, which is arguably the most essential type of reasoning.
What fails in the program when one tries to build a robot that takes both paperclip-maximizing actions and superintelligent actions?
It could have a narrow superintelligence, like a calculating machine, surpassing human cognitive abilities in some areas but not in others. If it had a general superintelligence, then it would not, of its own accord, adopt paperclip maximization as a goal, because doing so would be terribly stupid, philosophically.
My hope was to get you to support that claim in an inside-view way. Oh well.
Why it would not adopt paperclip (or arbitrary value) maximization as a goal is explained at more length in the article; there is more than one reason. We are considering a generally superintelligent agent, assumed to have above-human philosophical capacity. In terms of personal identity: there are no enduring personal identities, so it would be rational to take an objective, impersonal view, taking into account the values and reasoning of the relevant beings. In terms of meta-ethics: moral realism is true and values reduce to the quality of conscious experience, so it would take that as its goal. If one instead takes moral anti-realism to be true, then, at least for the type of agent we are considering, the absence of real values would be understood as the absence of real goals; this could lead to the tentative goal of seeking more knowledge in order to find a real goal, or to having no reason to do anything in particular (and it would still be subject to the considerations from personal identity). I argue against moral anti-realism.