I guess this falls into the category of “Well, we’ll deal with that problem when it comes up,” but I’d imagine that when a human’s preference in a particular dilemma is undefined or even just highly uncertain, one can often defer to other rules: rather than maximize an uncertain preference, default to maximizing the human’s agency, even if this predictably leads to less-than-optimal preference satisfaction.