This approach ignores choice. Having a utility function is not enough to make a choice, and what I say is an act of making a choice.
For example, suppose my hidden value function is (apples = 0.5, oranges = 0.5). I ask my home robot to bring me an apple. In that moment I made a choice between equally preferable options.
But my home robot would ignore my choice and bring me half an apple and half an orange, because that was my value function before I made the choice.
In that case I will not be satisfied, because I will feel that the robot ignores my moral effort of making a choice, and I value my choices. Also, after making the choice my preferences will be updated, so the robot has to decide which of my utility functions to use: the one before the choice or the one after.
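A minimal sketch of the indifference I mean (Python, hypothetical names, assuming a linear utility over fruit fractions): every mix of apple and orange scores the same 0.5, so a robot that only consults the pre-choice utility function has no reason to honor the spoken request, and tie-breaking falls to something arbitrary like candidate order.

```python
from typing import Dict

# Hidden value function before the choice (toy example, not a real system).
UTILITY = {"apple": 0.5, "orange": 0.5}

def bundle_value(bundle: Dict[str, float]) -> float:
    """Linear utility of a bundle like {'apple': 0.5, 'orange': 0.5}."""
    return sum(UTILITY[item] * amount for item, amount in bundle.items())

def utility_only_robot(request: str) -> Dict[str, float]:
    """Maximizes the fixed utility function; the spoken request plays no role."""
    candidates = [
        {"apple": 0.5, "orange": 0.5},  # half of each
        {"apple": 1.0},                 # what was actually asked for
        {"orange": 1.0},
    ]
    # Every candidate scores 0.5, so max() keeps the first of the tied
    # candidates, i.e. the mix -- the tie is broken by list order, not by me.
    return max(candidates, key=bundle_value)

def choice_respecting_robot(request: str) -> Dict[str, float]:
    """Treats the spoken request itself as the tie-breaking act of choice."""
    return {request: 1.0}

print(utility_only_robot("apple"))       # {'apple': 0.5, 'orange': 0.5}
print(choice_respecting_robot("apple"))  # {'apple': 1.0}
```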
(I don’t think humans have consistent utility functions; we’re broken that way. If we did...)
The robot should know your utility function(s) well enough to know that you'd choose an apple this time, and an orange at some future time.