This was an interesting post.
A potential extension of this problem is to consider what happens when agents can lie about their utility functions. In business this usually isn't a problem, since everyone is trying to maximize profit, but in social interactions it often is.
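To make the worry concrete, here's a toy sketch of my own (not from the post): in the Nash bargaining solution you maximize the product of the agents' reported gains over the disagreement point, so an agent who misreports a risk-loving (convex) utility function can pull the split toward itself. The nash_split helper and the example utilities below are hypothetical illustrations.

    import numpy as np

    def nash_split(u1, u2, grid=10001):
        """Agent 2's share of a dollar that maximizes the Nash
        product u1(1 - s) * u2(s), with disagreement point 0."""
        shares = np.linspace(0.0, 1.0, grid)
        products = u1(1.0 - shares) * u2(shares)
        return shares[np.argmax(products)]

    linear = lambda x: x        # truthful risk-neutral utility
    convex = lambda x: x ** 2   # misreported risk-loving utility

    print(nash_split(linear, linear))  # ~0.5: honest reports give an even split
    print(nash_split(linear, convex))  # ~0.667: the lie wins agent 2 two thirds

The numbers fall out of the first-order conditions: (1 - s) * s peaks at s = 1/2, while (1 - s) * s^2 peaks at s = 2/3, so the misreport shifts the outcome from a 50/50 to a 1/3-2/3 split.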
See this: http://lesswrong.com/lw/i20/even_with_default_points_systems_remain/
It’s about as bad as it can be :-(