Deducing the correct utility function of a utility maximiser is one thing (a task with low uncertainty, higher if the agent is hiding things). Assigning a utility function to an agent that doesn't have one is quite another.
See http://lesswrong.com/lw/6ha/the_blueminimizing_robot/
Key quote:
The robot is a behavior-executor, not a utility-maximizer.
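To make the distinction concrete, here is a minimal sketch (the names and framing are mine, not from the linked post): both agents shoot blue things, but only the second has an explicit utility function an observer could in principle deduce from its choices. "Assigning" one to the first is a modelling choice, not a discovery.

```python
from typing import Callable, List

Action = str

def behavior_executor(percept: str) -> Action:
    """Fires a hard-coded rule; there is no utility function anywhere to deduce."""
    if percept == "blue":
        return "shoot"
    return "wait"

def utility_maximizer(percept: str,
                      actions: List[Action],
                      utility: Callable[[str, Action], float]) -> Action:
    """Picks the argmax over an explicit utility; an observer can hope to recover it."""
    return max(actions, key=lambda a: utility(percept, a))

# A hypothetical utility an observer might try to deduce from the maximiser's choices:
def fewer_blue_things(percept: str, action: Action) -> float:
    return 1.0 if (percept == "blue" and action == "shoot") else 0.0

print(behavior_executor("blue"))                                        # shoot
print(utility_maximizer("blue", ["shoot", "wait"], fewer_blue_things))  # shoot
```

Both agents produce identical behaviour on this input, which is the post's point: behaviour alone doesn't tell you whether there is a utility function underneath.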
Replied in the other thread.