I tend to agree with Eliezer that this is not really about fairness, but insofar as we’re playing the “what’s fair?” game...
> Utilities of different players are classically treated as incomparable … thus we’d like the “fair point” to be invariant under affine recalibrations of utility scales.
Proclaiming incomparability has always struck me as elevating a practical problem (it’s difficult to compare utilities) to the level of a conceptual problem (it’s impossible to compare utilities). At a practical level, we compare utilities all the time. To take a somewhat extreme example, it seems pretty obvious that a speck of dust in Adam’s eye is less bad than Eve being tortured.
The implication of this is that I actively do not want the fair point to be invariant to affine transformations of the utility scales. If one person is getting much more utility than someone else, that is relevant information to me and I do not want it thrown away.
NB: In the event that I did think that utility was incomparable in the way “classically” assumed, then wouldn’t the solution need to be invariant to monotone transformations of the utility function? Why should affine invariance suffice?
Non-affine transformations break expected utility of lotteries over outcomes.
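To make the lottery point concrete, here is a minimal numeric sketch (a toy example of mine, not from the thread): the same two lotteries ranked by expected utility under the original scale, under a positive affine rescaling, and under a monotone but non-affine transform. The affine rescaling preserves the ranking; the square root reverses it.

```python
import math

# Lottery A: utility 1.0 for certain.
# Lottery B: 50% chance of utility 0.0, 50% chance of utility 2.1.
lottery_a = [(1.0, 1.0)]                # (probability, utility) pairs
lottery_b = [(0.5, 0.0), (0.5, 2.1)]

def expected(lottery, transform=lambda u: u):
    """Expected (possibly transformed) utility of a lottery."""
    return sum(p * transform(u) for p, u in lottery)

def affine(u):
    return 3 * u + 7                    # positive affine transform

def concave(u):
    return math.sqrt(u)                 # monotone but non-affine

print(expected(lottery_a), expected(lottery_b))                    # 1.0, 1.05 -> B preferred
print(expected(lottery_a, affine), expected(lottery_b, affine))    # 10.0, 10.15 -> still B
print(expected(lottery_a, concave), expected(lottery_b, concave))  # 1.0, ~0.72 -> now A
```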
Ah. I was thinking of utility-as-a-thing-in-the-world (e.g. a pleasurable mental state) rather than utility-as-representation-of-preferences-over-gambles. (The latter would not be my preferred informational base for determining a fair outcome.)
The point was that applying positive affine transformation to utility doesn’t change preference, and so shouldn’t change the fair decision. Fairness is the way of comparing utility.
> The point was that applying positive affine transformation to utility doesn’t change preference, and so shouldn’t change the fair decision.
I get that (although my NB assumed that we were talking about preferences over certain outcomes rather than lotteries). My point is that this doesn’t follow, because fairness can depend on things that may not affect preferences—like the fact that one player is already incredibly well off.
A utility function isn’t even real; it’s a point in an equivalence class, and you only see the equivalence class. The choice of a particular point should affect the decisions no more than the epiphenomenal consciousness of Searle should affect how the meathead Searle writes his consciousness papers, or than the hidden absolute time should affect the timeless dynamics. State of the world is a different matter entirely. Only if, for some reason, your preferences include a term about the specific form of the utility function that is engraved on your mind should this arbitrary factor matter (but then it won’t be exactly about the utility function).
I’m not sure I understand your final sentence, but I suspect we may just be using different senses of the word utility function. Insofar as I do understand you, I agree with you for utility-functions-defined-as-representations-of-preferences. It’s just that I would take utility-functions-defined-in-terms-of-well-being as the relevant informational base for any discussion of fairness. Preferences are not my primitives in this respect.
Let’s consider another agent with which you contemplate cooperating purely instrumentally: not valued in itself, but only as a means of achieving your goals, which lie elsewhere. Of such an agent, you’re only interested in its behavior. Preference is a specification of behavior, saying what the agent does in each given state of knowledge (under the simplifying assumption that the optimal action is always selected). How this preference is represented in that agent’s mind is irrelevant, as it doesn’t influence its behavior, and so can’t matter for how you select a cooperative play with this agent.
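A minimal sketch of this point (my own toy states and payoffs, nothing from the thread): an agent that acts optimally in each state of knowledge behaves identically under any two utility functions related by a positive affine transformation, so observing its behavior can never distinguish them. (For choices among sure outcomes, as here, any monotone transform leaves behavior unchanged; the restriction to affine transforms only starts to matter for gambles.)

```python
states = ["rain", "sun"]
actions = ["umbrella", "sunglasses"]

def u(state, action):
    # An arbitrary base utility function over state-action pairs.
    return {("rain", "umbrella"): 2.0, ("rain", "sunglasses"): -1.0,
            ("sun", "umbrella"): 0.0, ("sun", "sunglasses"): 3.0}[(state, action)]

def rescaled(state, action):
    # A different representative of the same equivalence class.
    return 5.0 * u(state, action) - 11.0

def policy(utility):
    """What the agent does in each state, assuming it always acts optimally."""
    return {s: max(actions, key=lambda a: utility(s, a)) for s in states}

assert policy(u) == policy(rescaled)    # behavior cannot tell them apart
print(policy(u))                        # {'rain': 'umbrella', 'sun': 'sunglasses'}
```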
> How this preference is represented in that agent’s mind is irrelevant, as it doesn’t influence its behavior, and so can’t matter for how you select a cooperative play with this agent.
Agreed. Which I think brings us back to it not really being about fairness.
In other words, conchis is taking a welfarist perspective on fairness, instead of a game theoretic one. (I’d like to once again recommend Hervé Moulin’s Fair Division and Collective Welfare which covers both of these approaches.)
In this case, the agents are self-modifying AIs. How do we measure and compare the well-being of such creatures? Do you have ideas or suggestions?
> How do we measure and compare the well-being of such creatures? Do you have ideas or suggestions?
None, I’m afraid. I’m not even sure whether I’d care about their well-being even if I could conceive of what that would mean. (Maybe I would; I just don’t know.)
The equivalence class of the utility function should be the set of monotone functions of a canonical element.
However, what von Neumann-Morgenstern shows under mild assumptions is that within each class of utility functions there is a subset, generated by the positive affine transforms of a single canonical element, for which you can make decisions by computing expected utility. Therefore, looking at the set of all affine transforms of such a utility function really is the same as looking at the whole class. Still, it doesn’t make utility commensurable.
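Spelled out, the affine-invariance half of this is just linearity of expectation (standard material, not from the thread):

```latex
% For u'(x) = a\,u(x) + b with a > 0 and any lottery L:
\mathbb{E}_{L}[u'] = a\,\mathbb{E}_{L}[u] + b,
% so, since a > 0, for any two lotteries L_1 and L_2:
\mathbb{E}_{L_1}[u] \ge \mathbb{E}_{L_2}[u]
\iff
\mathbb{E}_{L_1}[u'] \ge \mathbb{E}_{L_2}[u'].
% For a monotone non-affine g, \mathbb{E}_{L}[g \circ u] is not determined
% by \mathbb{E}_{L}[u] (cf. Jensen's inequality), so rankings can reverse.
```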
A speck in Adam’s eye vs. Eve being tortured is not a utility comparison but a happiness comparison. Happiness is hard to compare, but it can be compared because it is a state; utility is an ordering function. There is no utility meter.