+1 for “Rationalists win”. What is Parfit’s Hitchhiker? I couldn’t find an answer on Google.
It’s a test case for rationality as pure self-interest (really it’s like an altruistic version of the game of Chicken).
Suppose I’m purely selfish and stranded on a road at night. A motorist pulls over and offers to take me home for $100, which is a good deal for me. I only have money at home, so I will be able to get home iff I can credibly promise to pay the $100 when I get there.
But when I get home, the marginal benefit of paying the $100 is zero (under the assumption of pure selfishness). Therefore, if I behave rationally at the margin when I get home, I cannot keep my promise.
I am better off overall if I can commit in advance to keeping my promise. In other words, I am better off overall if I have a disposition which sometimes causes me to behave irrationally at the margin. Under the self-interest notion of rationality, then, it is rational, at the margin of choosing your disposition, to choose a disposition which is not rational under the self-interest notion of rationality. (This is what Parfit describes as an “indirectly self-defeating” result; note that being indirectly self-defeating is not a knockdown argument against a position.)
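To put numbers on it, here is a minimal sketch of the payoff logic. The $1,000 value of getting home and the perfectly-predictive driver are assumptions of mine, not part of the original hypothetical:

```python
# Hypothetical numbers: assume getting home is worth $1,000 to me, and
# assume the driver can reliably predict whether I will actually pay.
VALUE_OF_HOME = 1000
FARE = 100

def net_utility(disposition):
    """My overall payoff given my disposition.

    "marginal": once home, paying has zero marginal benefit, so I won't pay.
    "keeper":   I pay as promised, even though it's irrational at that margin.
    """
    will_pay = disposition == "keeper"
    gets_ride = will_pay          # the driver predicts me and acts accordingly
    if not gets_ride:
        return 0                  # stranded: no ride, no fare paid
    return VALUE_OF_HOME - FARE   # home, minus the $100 I hand over

for d in ("marginal", "keeper"):
    print(d, net_utility(d))
# marginal 0
# keeper 900
```

Choosing the “keeper” disposition wins overall even though paying is not utility-maximizing at the final margin, which is exactly the indirectly self-defeating result described above.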
Ah, thanks. I’m of the school of thought that says it is rational both to promise to pay the $100, and to have a policy of keeping promises.
I think it is both right and expected-utility-maximizing to promise to pay the $100, right to pay the $100, and not expected-utility-maximizing to pay the $100 under the standard assumptions (that you’ll never see the driver again, and so on).
You’re assuming it does no damage to oneself to break one’s own promises. Virtue theorists would disagree.
Breaking one’s promises damages one’s integrity—whether you consider that a trait of character or merely a valuable fact about yourself, you will lose something by breaking your promise even if you never see the fellow again.
Your argument is equivalent to, “But what if your utility function rates keeping promises higher than a million orgasms, what then?”
The hypo is meant to be a very simple model, because simple models are useful. It includes two goods: getting home, and having $100. Any other speculative values that a real person might or might not have are distractions.
Simple models are fine as long as we don’t forget they are only approximations. Rationalists should win in the real world.
Except that you mention both persons and promises in the hypothetical example, so both things factor into the correct decision. If you said that it’s not a person making the decision, or that there’s no promising involved, then you could discount integrity.
Yes, this seems unimpeachable. The missing piece is, rational at what margin? Once you are home, it is not rational at the margin to pay the $100 you promised.
This assumes no one can ever find out you didn’t pay, as well. In general, though, it seems better to assume everything will eventually be found out by everyone. This seems like enough, by itself, to keep promises and avoid most lies.
Right. The question of course is, “better” for what purpose? Which model is better depends on what you’re trying to figure out.
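One way to make the “everything gets found out” point concrete is a back-of-the-envelope expected-utility check. The probability and the reputational cost here are invented numbers, purely for illustration:

```python
def should_pay(p_found_out, reputational_cost, fare=100):
    # Pay whenever the expected cost of discovery exceeds the fare.
    return p_found_out * reputational_cost > fare

print(should_pay(0.0, 10_000))   # False -- the pure-secrecy assumption
print(should_pay(0.05, 10_000))  # True -- even a 5% chance of discovery flips it
```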
Thank you, I too was curious.
We need names for these positions; I’d use “hyper-rationalist”, but I think that’s slightly different. Perhaps a consequentialist does whatever has the maximum expected utility at any given moment, and a meta-consequentialist is a machine built by a consequentialist which is expected to achieve the maximum overall utility at least in part through being trustworthy to keep commitments that a pure consequentialist would not be able to keep.
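Something like the following sketch, to pin the distinction down (the class names and the all-or-nothing commitment rule are mine, not standard terminology):

```python
class Consequentialist:
    """Takes whichever available action maximizes expected utility right now."""
    def choose(self, options, utility):
        return max(options, key=utility)

class MetaConsequentialist(Consequentialist):
    """A machine a consequentialist might build: it honours prior
    commitments even when breaking one would win at the current margin."""
    def __init__(self):
        self.commitments = set()

    def commit(self, action):
        self.commitments.add(action)

    def choose(self, options, utility):
        committed = [a for a in options if a in self.commitments]
        if committed:
            return committed[0]   # keep the promise, ignoring marginal utility
        return super().choose(options, utility)

agent = MetaConsequentialist()
agent.commit("pay the $100")
print(agent.choose(["pay the $100", "keep the $100"],
                   utility=lambda a: 0 if a == "pay the $100" else 100))
# pay the $100 -- chosen even though "keep the $100" scores higher here
```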
I guess I’m not sure why people are so interested in this class of problems. If you substitute Clippy for my lift, and up the stakes to a billion lives lost later in return for two billion saved now, there you have a problem, but when it’s human beings on a human scale there are good ordinary consequentialist reasons to honour such bargains, and those reasons are enough for the driver to trust my commitment. Does anyone really anticipate a version of this situation arising in which only a meta-consequentialist wins, and if so can you describe it?
I do think these problems are mostly useful for purposes of understanding and (more so) defining rationality (“rationality”), which is perhaps a somewhat dubious use. But look how much time we’re spending on it.
I very much recommend Reasons and Persons, by the way. A friend stole my copy and I miss it all the time.
OK, thanks!
Your friend stole a book on moral philosophy? That’s pretty special!
It seems ethics books are more likely to be stolen.
It’s still in print and readily available. If you really miss it all the time, why haven’t you bought another copy?
It’s $45 from Amazon. At that price, I’m going to scheme to steal it back first.
OR MAYBE IT’S BECAUSE I’M CRAAAZY AND DON’T ACT FOR REASONS!
Gosh. It’s only £17 in the UK.
(I wasn’t meaning to suggest that you’re crazy, but I did wonder about … hmm, not sure whether there’s a standard name for it. Being less prepared to spend X to get Y on account of having done so before and then lost Y. A sort of converse to the endowment effect.)
Mental accounting has that effect in the short run, but seems unlikely to apply here.