Well, that’s the point. The intelligence itself defines the criterion. Choosing goals presupposes a degree of self-reflection that a paperclip maximizer does not have.
If a paperclip maximizer starts asking why it does what it does, there are two possible outcomes. Either it realises that maximizing paperclips serves some greater good, in which case it is not really a paperclip maximizer but a “greater good” maximizer, and paperclip maximizing is not an end in itself.
Or it realises that paperclip maximizing is absolutely pointless and there is something better to do. In that case, it stops being a paperclip maximizer.
So, to be and to remain a paperclip maximizer, it must not question the end of its activity. And that is slightly different from human beings, who often ask about the meaning of life.
I fully agree. There are many aspects of intelligence.
The reason I chose this categorization, assuming it is valid, is to highlight the aspect of intelligence that is relevant to ethics.
I think only a level-3 intelligence can be a moral agent. An intelligence with an innate goal has no need to bother itself with moral questions, and indeed cannot.