Why do you think there are many such operators? Do you believe the concept of “utility function of an agent” is ill-defined (assuming the “agent” is actually an intelligent agent rather than e.g. a rock)? Do you think it is possible to interpret a paperclip maximizer as having a utility function other than maximizing paperclips?
Deducing the correct utility function of a utility maximiser is one thing (that carries little uncertainty, more if the agent is hiding things). Assigning a utility function to an agent that doesn't have one is quite another.
See http://lesswrong.com/lw/6ha/the_blueminimizing_robot/ Key quote:
Replied in the other thread.