You are parrying my example, but not the pattern it exemplifies (to say nothing of the larger pattern behind the point I'm arguing for). If certain people are insensitive to this particular kind of moral argument, they are still bound to be sensitive to some form of persuasion or offer. Maybe the AI will generate recipes for extraordinarily tasty foods for your sociopaths, or get-rich-quick schemes that actually work, or magically beautiful music.
Indeed. The more thorough solution would seem to be "find a guardian with a utility function such that the AI has nothing to offer them that you can't trump with a counter-offer". Whether such guardians exist would depend on the upper estimates of the AI's capabilities and on their employer's means, and the whole approach would hinge on your ability to correctly assess a candidate's utility function.