Having a high quality experience at all times other than Tuesdays seems to be a strange goal, but one that a person could coherently optimize for (given a suitable meaning of “high quality experience”). The problem with Future Tuesday Indifference is that at different times, the person places different values on the same experience on the same Tuesday.
Yeah, I see that Future Tuesday Indifference is a bad example. Not precisely for the reason you give, though, because that would also entail that any discounting of future goods is irrational, and that doesn’t seem right. But Future Tuesday Indifference would involve the sort of preference switching you see with hyperbolic discounting, which is more obviously irrational and might be confounding intuitions in this case.
So here’s a better example: a person only assigns value to the lives of people who were born within a five-mile radius of the Leaning Tower of Pisa. This is an ultimate value, not an instrumental one. There’s no obvious incoherence involved here. A person could coherently optimize for this goal. But my point is that this does not exhaust our avenues for rational criticism of goals. The fact that this person has an ultimate value that relies on such a highly specific and arbitrary distinction is grounds for criticism, just as it would be if the person adopted a scientific theory which (despite being empirically adequate) postulated such a distinction.
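A quick numeric sketch of the preference switching mentioned above (the reward amounts and discount parameters are invented for illustration): under hyperbolic discounting, the ranking of a smaller-sooner reward against a larger-later reward flips as the pair recedes into the future, while under exponential discounting it never does.

```python
# Illustration only: hyperbolic discounting produces preference reversals;
# exponential discounting does not. All numbers are made up.
import math

def hyperbolic(amount, delay, k=1.0):
    # Value today of `amount` received after `delay` days, hyperbolic curve.
    return amount / (1.0 + k * delay)

def exponential(amount, delay, r=0.5):
    # Value today of `amount` received after `delay` days, exponential curve.
    return amount * math.exp(-r * delay)

# Choice: $50 at day t, or $100 at day t+5, both evaluated from day 0.
for t in (1, 20):
    print(f"t={t}: hyperbolic prefers sooner: {hyperbolic(50, t) > hyperbolic(100, t + 5)}, "
          f"exponential prefers sooner: {exponential(50, t) > exponential(100, t + 5)}")
# The hyperbolic preference flips from sooner to later as t grows;
# the exponential preference is the same at every t.
```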
> Not precisely for the reason you give, though, because that would also entail that any discounting of future goods is irrational, and that doesn’t seem right.
Discounting of future goods does not involve assigning different values to the same goods at the same time.
> So here’s a better example: a person only assigns value to the lives of people who were born within a five-mile radius of the Leaning Tower of Pisa. This is an ultimate value, not an instrumental one. There’s no obvious incoherence involved here. A person could coherently optimize for this goal. But my point is that this does not exhaust our avenues for rational criticism of goals. The fact that this person has an ultimate value that relies on such a highly specific and arbitrary distinction is grounds for criticism, just as it would be if the person adopted a scientific theory which (despite being empirically adequate) postulated such a distinction.
I would not criticize this goal for being “irrational”, though I would oppose it because it conflicts with my own goals. My opposition is not because it is arbitrary; I am perfectly happy with arbitrariness in goal systems that align with my own goals.
> Discounting of future goods does not involve assigning different values to the same goods at the same time.
The qualifier “at the same time” is ambiguous here.
If you mean that different values are assigned at the same time, so that the agent has conflicting utilities for a goal at a single time, then you’re right that discounting does not involve this. But neither does Future Tuesday Indifference, so I don’t see the relevance.
If “at the same time” is meant to modify “the same goods”, so that what you’re saying is that discounting does not involve assigning different values to “good-g-at-time-t”, then this is false. Depending on the time at which the valuation is made, discounting entails that different values will be assigned to “good-g-at-time-t”.
> If “at the same time” is meant to modify “the same goods”, so that what you’re saying is that discounting does not involve assigning different values to “good-g-at-time-t”, then this is false. Depending on the time at which the valuation is made, discounting entails that different values will be assigned to “good-g-at-time-t”.
Suppose an agent with exponential time discounting assigns goods G at a time T a utility of U0(G)*exp(a*(T0-T)). Then that is the utility the agent at any time assigns those goods at that time. You may be thinking that the agent at time TA assigns the goods G at that same time T a utility of U0(G)*exp(a*(TA-T)), and thus that the agent at different times is assigning different utilities. But these utility functions differ only by the factor exp(a*(TA-T0)), which is constant over states of the universe; multiplying by a positive constant is an affine transformation, so it doesn’t matter. The discounting agent’s equivalence class of utility functions representing its values really is constant over the agent’s subjective time.
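The constant-factor point can be checked numerically. A minimal sketch (the discount rate, valuation times, and goods below are invented for illustration): valuations made at times T0 and TA differ by the same factor exp(a*(TA-T0)) for every good, so the preference ordering is identical.

```python
# Illustration only: an exponential discounter's valuations at two different
# times are positive rescalings of each other. All parameters are made up.
import math

a, T0, TA = 0.1, 0.0, 7.0  # discount rate and two valuation times

def u(base, T, now):
    # Utility assigned at time `now` to goods with base utility `base`
    # delivered at time T, per U0(G)*exp(a*(now - T)).
    return base * math.exp(a * (now - T))

# (base utility, delivery time) for three hypothetical goods.
goods = [(3.0, 10.0), (5.0, 12.0), (2.0, 4.0)]

# Every ratio equals exp(a*(TA - T0)), independent of the goods.
ratios = [u(b, T, TA) / u(b, T, T0) for b, T in goods]
print(ratios)

# Hence the preference ordering over the goods is unchanged.
order_T0 = sorted(goods, key=lambda g: u(g[0], g[1], T0))
order_TA = sorted(goods, key=lambda g: u(g[0], g[1], TA))
print(order_T0 == order_TA)
```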
Ah, I see. You’re right. Comment retracted.