Screwing over your future selves because of hyperbolic discounting, or other people because of scope insensitivity, isn’t obviously a failure of instrumental rationality, except insofar as one is defecting in a Prisoner’s Dilemma (which often isn’t the case) and rationality counts against that.
Those ‘biases’ look essential to the shapes of our utility functions, to the extent that we have them.
Screwing over other people because of scope insensitivity is a failure of instrumental rationality if (and not only if) you also believe that the importance of someone’s not being screwed over does not depend strongly on what happens to people unconnected to that person.
Steve, once people are made aware of larger scopes, they are less willing to pay the same amount of money to have effects with smaller scopes. See the references at this OB post.
How much less willing? Suppose A would give up only a million times more utility to save B and 10^100 other people than to save B alone. Would A, once informed of the existence of those 10^100 people, really choose not to save B alone at the price of a cent? It seems to me that A would have to, if scope insensitivity were rational. (This isn’t my true objection, which I’m not sure how to verbalize at the moment.)
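To spell the arithmetic out (the $10,000 willingness-to-pay below is an illustrative assumption of mine, not a figure from this thread):

```latex
% Illustrative only; the \$10{,}000 willingness-to-pay is assumed.
U(\text{save } B \text{ and } 10^{100} \text{ others}) = 10^{6}\, U(\text{save } B \text{ alone})
\quad\Rightarrow\quad
\text{price}(\text{save } B \text{ alone}) \approx \frac{\$10{,}000}{10^{6}} = \$0.01
```

So if A would pay at most $10,000 for the large-scope rescue, consistency prices saving B alone at exactly the cent in question.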
This issue deserves a main post. Cf. also Michael Wilson on “Normative reasoning: a Siren Song?”
Thanks for the link, although it’s addressing related but different issues. A hyperbolic discounter can assent to ‘locking in’ a fixed mapping of times to discount factors in place of the indexical one. Then all the future selves will agree about the relative value of things happening at different times, placing the highest value on the period right after the lock-in (see the sketch below).
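A minimal sketch of that lock-in idea, in Python; the reward sizes, times, and rate k are my own illustrative choices, and discounting is modeled with the standard hyperbolic form 1/(1 + k·t):

```python
# Illustrative sketch: indexical hyperbolic discounting produces preference
# reversals between successive selves; a schedule locked in at t = 0 does not.
# All reward sizes, times, and the rate k are assumptions for illustration.

def hyperbolic(delay, k=1.0):
    """Standard hyperbolic discount factor for a reward `delay` units away."""
    return 1.0 / (1.0 + k * delay)

def indexical_value(reward, t_reward, t_now, k=1.0):
    """Value as judged by the self at t_now (discounts re-indexed to 'now')."""
    return reward * hyperbolic(t_reward - t_now, k)

def locked_in_value(reward, t_reward, t_lock=0.0, k=1.0):
    """Value under a mapping fixed at t_lock; every later self reuses it."""
    return reward * hyperbolic(t_reward - t_lock, k)

small, t_small = 10.0, 10.0   # smaller, sooner reward
large, t_large = 30.0, 14.0   # larger, later reward

for t_now in (0.0, 9.5):
    prefers_large = (indexical_value(large, t_large, t_now)
                     > indexical_value(small, t_small, t_now))
    print(f"indexical self at t={t_now}: "
          f"prefers {'larger-later' if prefers_large else 'smaller-sooner'}")
# The self at t=0.0 prefers larger-later; the self at t=9.5 reverses
# and grabs the smaller-sooner reward. That is the classic inconsistency.

prefers_large = locked_in_value(large, t_large) > locked_in_value(small, t_small)
print(f"any self after lock-in: "
      f"prefers {'larger-later' if prefers_large else 'smaller-sooner'}")
# With the fixed mapping, every self ranks the options the same way, and the
# schedule weights the period right after lock-in most heavily.
```

The indexical rule makes the self near the sooner reward reverse the earlier preference; once the mapping is fixed at t = 0, every later self ranks the two options identically.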