Values should not just be a variable in the utility function
All else being equal, I'd rather other people have their values satisfied, so their values contribute to my utility function. If we model this as their utility contributing to my utility function, we get mutual recursion. But we can instead model each utility function as having a direct and an indirect component, where the indirect component is an aggregation of the direct components of other people's utility functions, which avoids the recursion.
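To make the decomposition concrete, here is a minimal sketch in equations; the symbols $U_i$, $D_i$, and the weights $w_{ij}$ are my own illustrative notation, not from the original. The recursive model would be

$$U_i = D_i + \sum_{j \neq i} w_{ij}\, U_j,$$

where $D_i$ is person $i$'s direct utility and $w_{ij} \geq 0$ measures how much $i$ cares about $j$'s values being satisfied. Since each $U_j$ on the right-hand side itself contains every other $U$, this is mutually recursive. The non-recursive alternative replaces the full utilities on the right-hand side with direct components only:

$$U_i = D_i + \sum_{j \neq i} w_{ij}\, D_j.$$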
If values are relegated to a mere variable in the utility function, that seems to go against the original stated goal of wanting moral progress.
To be more specific, people can value society's values coming more closely into line with their own, their own values coming more closely into line with what they would value if they thought about it more, society's values moving in the direction they would move naturally without the intervention of an AI, and so on. Situations in which someone wants their own values to change in a certain way can be modeled as an indirect component of the utility function, as above.
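As one hedged illustration of that last point, again with invented notation: let $D_i^{*}$ denote the direct utility function person $i$ would endorse after further reflection. A preference for one's own values to move toward that reflective version can then be written as an extra indirect term,

$$U_i = D_i + \sum_{j \neq i} w_{ij}\, D_j + v_i\, D_i^{*},$$

where $v_i \geq 0$ measures how much $i$ values that kind of moral progress. The other cases fit the same template by swapping in a different target for the extra term, e.g. the direct utilities society would drift toward without an AI's intervention.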