Thank you, I too was curious.
We need names for these positions; I’d use “hyper-rationalist”, but I think that’s slightly different. Perhaps a consequentialist does whatever has the maximum expected utility at any given moment, and a meta-consequentialist is a machine built by a consequentialist that is expected to achieve the maximum overall utility at least in part by being trusted to keep commitments a pure consequentialist could not keep.
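(Parfit’s hitchhiker is the standard toy version of the “lift” scenario below. Here is a minimal Python sketch of the distinction; the agent labels and payoff numbers are purely my own assumptions, chosen only to show why the moment-to-moment maximizer loses exactly when the driver can predict the reneging.)

```python
# Toy model of Parfit's hitchhiker. The driver rescues you only if they
# predict you will pay on arrival. The payoffs below are illustrative
# assumptions, not anything from the thread.

PAY_COST = 100        # utility lost by paying the driver in town
RESCUE_VALUE = 1_000  # utility of being rescued from the desert

def pays_on_arrival(agent: str) -> bool:
    """Would this agent hand over the money once already safe?"""
    if agent == "consequentialist":
        # At that moment, paying only loses utility: the rescue is
        # already a sunk benefit, so the momentary maximizer reneges.
        return False
    if agent == "meta-consequentialist":
        # Keeps the commitment even when it is locally utility-negative.
        return True
    raise ValueError(agent)

def outcome(agent: str) -> int:
    """Overall utility, assuming the driver predicts the agent accurately."""
    if not pays_on_arrival(agent):
        return 0  # driver foresees the reneging and drives on: no rescue
    return RESCUE_VALUE - PAY_COST

for agent in ("consequentialist", "meta-consequentialist"):
    print(agent, outcome(agent))
# consequentialist 0
# meta-consequentialist 900
```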
I guess I’m not sure why people are so interested in this class of problems. If you substitute Clippy for my lift and raise the stakes to a billion lives lost later in return for two billion saved now, then you have a problem; but when it’s human beings on a human scale, there are good ordinary consequentialist reasons to honour such bargains, and those reasons are enough for the driver to trust my commitment. Does anyone really anticipate a version of this situation arising in which only a meta-consequentialist wins? If so, can you describe it?
I do think these problems are mostly useful for purposes of understanding and (more so) defining rationality (“rationality”), which is perhaps a somewhat dubious use. But look how much time we’re spending on it.
I very much recommend Reasons and Persons, by the way. A friend stole my copy and I miss it all the time.
OK, thanks!
Your friend stole a book on moral philosophy? That’s pretty special!
It seems ethics books are more likely to be stolen.
It’s still in print and readily available. If you really miss it all the time, why haven’t you bought another copy?
It’s $45 from Amazon. At that price, I’m going to scheme to steal it back first.
OR MAYBE IT’S BECAUSE I’M CRAAAZY AND DON’T ACT FOR REASONS!
Gosh. It’s only £17 in the UK.
(I wasn’t meaning to suggest that you’re crazy, but I did wonder about … hmm, I’m not sure whether there’s a standard name for it: being less willing to spend X to get Y because you have done so before and then lost Y. A sort of converse of the endowment effect.)
Mental accounting has that effect in the short run, but seems unlikely to apply here.