A utility function isn’t even real: it’s a point in an equivalence class, and you only see the equivalence class. The choice of a particular point should affect your decisions no more than the epiphenomenal consciousness of Searle should affect how the meathead Searle writes his consciousness papers, or than a hidden absolute time should affect the timeless dynamic. The state of the world is a different matter entirely. Only if for some reason your preferences include a term about the specific form of the utility function engraved on your mind should this arbitrary factor matter (but then it won’t be exactly about the utility function).
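A minimal sketch of why the particular representative can’t matter, assuming the agent chooses among lotteries by maximizing expected utility with some representative $U$ of its class: for any positive affine transform
\[
U'(x) = a\,U(x) + b, \qquad a > 0,
\]
linearity of expectation gives $\mathbb{E}_L[U'] = a\,\mathbb{E}_L[U] + b$ for every lottery $L$, so for any two lotteries $L_1$ and $L_2$,
\[
\mathbb{E}_{L_1}[U'] > \mathbb{E}_{L_2}[U'] \iff \mathbb{E}_{L_1}[U] > \mathbb{E}_{L_2}[U].
\]
The arbitrary constants $a$ and $b$ cancel out of every comparison, so the action selected is the same whichever point of the equivalence class happens to be engraved on the mind.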
I’m not sure I understand your final sentence, but I suspect we may just be using different senses of the term “utility function”. Insofar as I do understand you, I agree with you about utility-functions-defined-as-representations-of-preferences. It’s just that I would take utility-functions-defined-in-terms-of-well-being as the relevant informational base for any discussion of fairness. Preferences are not my primitives in this respect.
Let’s consider another agent with which you cooperate purely instrumentally: it is not valued in itself, but only as a means of achieving your goals, which lie elsewhere. In such an agent, you are only interested in behavior. A preference is a specification of behavior, saying what the agent does in each given state of knowledge (under the simplifying assumption that the optimal action is always selected). How this preference is represented in the agent’s mind is irrelevant, since it doesn’t influence its behavior, and so it can’t matter for how you select a cooperative play with that agent.
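A minimal sketch of that behavioral view, with a made-up state space and two hypothetical agents (nothing here is from the discussion): their internal representations of preference differ completely, yet they specify the same behavior, so anything you decide about cooperating with them depends only on that shared specification.

# Hypothetical illustration: preference as a map from knowledge states to actions.
# Two internally different representations that induce identical behavior.

ACTIONS = ["cooperate", "defect", "wait"]
STATES = ["partner_trustworthy", "partner_unknown", "partner_hostile"]

# Agent 1 stores an explicit utility table and picks the argmax action.
utility_table = {
    ("partner_trustworthy", "cooperate"): 10, ("partner_trustworthy", "defect"): 2, ("partner_trustworthy", "wait"): 1,
    ("partner_unknown", "cooperate"): 3, ("partner_unknown", "defect"): 2, ("partner_unknown", "wait"): 5,
    ("partner_hostile", "cooperate"): 0, ("partner_hostile", "defect"): 6, ("partner_hostile", "wait"): 4,
}

def agent1(state):
    return max(ACTIONS, key=lambda a: utility_table[(state, a)])

# Agent 2 hard-codes a policy, with no utility representation at all.
def agent2(state):
    return {"partner_trustworthy": "cooperate",
            "partner_unknown": "wait",
            "partner_hostile": "defect"}[state]

# From the outside, the two agents specify the same behavior, so a cooperative
# play selected against one works identically against the other.
assert all(agent1(s) == agent2(s) for s in STATES)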
Agreed. Which I think brings us back to it not really being about fairness.
In other words, conchis is taking a welfarist perspective on fairness rather than a game-theoretic one. (I’d like to once again recommend Hervé Moulin’s Fair Division and Collective Welfare, which covers both approaches.)
In this case, the agents are self-modifying AIs. How do we measure and compare the well-being of such creatures? Do you have ideas or suggestions?
None, I’m afraid. I’m not even sure whether I’d care about their well-being even if I could conceive of what that would mean. (Maybe I would; I just don’t know.)
The equivalence class of the utility function should be the set of monotone increasing functions of a canonical element.
However, what the von Neumann-Morgenstern theorem shows, under mild assumptions, is that within each such class there is a subset of utility functions, generated by the positive affine transforms of a single canonical element, for which you can make decisions by computing expected utility. For decision-making purposes, looking at the set of all positive affine transforms of such a utility function is therefore the same as looking at the whole class. Still, this doesn’t make utility commensurable across agents.
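A minimal sketch of that distinction in Python, with invented numbers and under the assumption that outcomes are already scored by some canonical utility: every positive affine transform preserves every expected-utility comparison, while a merely monotone transform from the larger ordinal class can reverse one.

# Illustration with invented numbers: expected-utility rankings are invariant
# under positive affine transforms of the utility function, but not under
# arbitrary monotone transforms.

def expected_utility(lottery, u):
    # A lottery is a list of (probability, outcome) pairs; u maps outcomes to reals.
    return sum(p * u(x) for p, x in lottery)

canonical = lambda x: x                    # canonical element (outcomes already in "utils")
affine = lambda x: 2 * canonical(x) + 3    # positive affine transform: same vNM subclass
monotone = lambda x: canonical(x) ** 2     # monotone but non-affine (outcomes here are >= 0)

risky = [(0.5, 0.0), (0.5, 10.0)]   # E[canonical] = 5
safe = [(1.0, 6.0)]                 # E[canonical] = 6

for name, u in [("canonical", canonical), ("affine", affine), ("monotone", monotone)]:
    er, es = expected_utility(risky, u), expected_utility(safe, u)
    print(f"{name:9s}: prefers {'risky' if er > es else 'safe'} (E[risky]={er:.1f}, E[safe]={es:.1f})")

# The canonical and affine representatives both prefer "safe"; the monotone
# transform flips the choice to "risky", so only the affine subfamily supports
# decision-making by expected utility.

And of course nothing in this says how to put one agent’s scale next to another’s, which is the commensurability problem above.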