I’d be willing to bet money that any formalization of “preference” that you invent, short of encoding the whole world into it, will still describe a property that some humans do modify within themselves. So we aren’t locked in, but your AIs will be.
Do humans actually modify that property, or merely find it desirable to modify? The distinction between the factual and the normative is very important here, since we are talking about preference, which is purely normative. If humans prefer a preference different from the one they currently hold, they do so in some lawful way, according to some preference criterion (one they hold in their minds). All such meta-steps should be included. (Of course, it might prove impossible to formalize in practice.)
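One minimal way to picture the "all such meta-steps" point, in notation of my own choosing (the symbols $X$, $P_0$, $P_{n+1}$, $\mathrm{Pref}$ are illustrative, not something the original commits to): first-order preference orders outcomes, each meta-level orders the possible preferences one level down, and a faithful formalization has to encode the whole tower rather than just the bottom level.

\[
P_0 \in \mathrm{Pref}(X), \qquad P_{n+1} \in \mathrm{Pref}(\mathrm{Pref}_n),
\]

where $\mathrm{Pref}(X)$ is the set of preference orderings over outcomes $X$, $\mathrm{Pref}_n$ is the set of possible level-$n$ preferences, and the object to be captured is the full sequence $(P_0, P_1, P_2, \dots)$, including whatever criterion a human uses when they come to prefer a different $P_0$.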
As for the “encoding the whole world” part, that is the ontology problem, and I’m pretty sure it’s enough to encode preference about the strategy (external behavior, given all possible observations) of a given concrete agent in order to preserve all of human preference. Preference about the external world, or about how the agent works on the inside, is not required.
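To make “preference about strategy” concrete, a sketch in my own notation (the symbols $O$, $A$, $s$, $\preceq$ are illustrative assumptions): a strategy maps every finite observation history to an action, and preference is an ordering over such maps, so neither the ontology of the external world nor the agent’s internals has to appear in the formalization.

\[
s : O^{*} \to A, \qquad \preceq\ \subseteq\ (O^{*} \to A)\times(O^{*} \to A),
\]

where $O$ is the set of possible observations, $O^{*}$ the set of finite observation histories, and $A$ the set of actions available to the given concrete agent.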