I’m thinking of a variant of Parfit’s Hitchhiker. Suppose the driver lets you in the car. When you get to the city, decision theory says not to pay.
To avoid that result, you can posit reputation-based justifications (protecting your own reputation, creating an incentive to rescue, etc.), or you can invoke third-party coercion (e.g., a lawsuit for breach of contract). But I think it’s very plausible to assert that these mechanisms wouldn’t be relevant: it’s a big, anonymous city, rescuing hitchhikers from peril is sufficiently uncommon, and how do you start a lawsuit against someone who just walks away and disappears from your life?
Yet I think most moral theories in current practice say to pay, even though you could get away with not paying.
OK, I think I understand what you’re saying a little better… thanks for clarifying.
It seems to me that decision theory simply tells me that if I estimate that paying the driver improves the state of the world (including the driver) by some amount that I value more than I value the loss to me, then I should pay the driver, and if not I shouldn’t. And in principle it gives me some tools for estimating the effect on the world of paying or not-paying the driver, which in practice often boil down to “answer hazy, try again later”.
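To make that comparison concrete, here’s a minimal sketch of the rule as I stated it: pay iff the improvement to the rest of the world, weighted by how much I value it relative to my own loss, exceeds what paying costs me. The numbers and the weight parameter are made up purely for illustration; the hard part in practice is estimating them at all.

```python
def should_pay(value_to_driver: float, cost_to_me: float,
               weight_on_others: float) -> bool:
    """Pay iff the (weighted) gain to the driver exceeds my loss from paying."""
    return weight_on_others * value_to_driver > cost_to_me

# If I weight the driver's gain at half my own, a $100 payment the driver
# values at $100 doesn't clear the bar...
print(should_pay(value_to_driver=100, cost_to_me=100, weight_on_others=0.5))  # False
# ...but it does if I weight the driver's gain slightly above my own loss.
print(should_pay(value_to_driver=100, cost_to_me=100, weight_on_others=1.1))  # True
```

The whole answer turns on two estimates (the driver’s gain, my loss) and one value choice (the weight), which is exactly where “answer hazy, try again later” comes in.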
Whereas most moral theories tell me whether I should pay the driver or not, and the most popularly articulated real-world moral theories tell me to pay the driver without bothering to estimate the effect of that action on the world in the first place. Which makes sense, if I can’t reliably estimate that effect anyway.
So I guess I’d say that detailed human morality can in principle be justified by decision theory plus a small number of value choices (e.g., how does value-to-me compare to value-to-the-world-other-than-me). In practice humans can’t do that, so instead we justify it by decision theory plus a large number of value choices (e.g., how does fulfilling-my-commitments compare to blowing-off-my-commitments). And there’s a big middle ground of cases where we probably could do the former but aren’t in the habit of doing so, so we end up making more value choices than we strictly speaking need to. (And our formal moral structures are therefore larger than they strictly speaking need to be, even given human limitations.)
And of course, the more distinct value choices I make, the greater the chance of finding some situation in which my values conflict.
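If it helps, here’s a minimal sketch of what that “large number of value choices” version might look like: each consideration (keeping commitments, money, helping strangers) gets its own directly chosen weight, rather than being derived from an estimate of effects on the world. The considerations and weights are all invented for illustration.

```python
# Directly chosen value weights -- each one is a separate value judgment.
value_weights = {
    "keep_commitments": 5.0,
    "money_kept": 0.01,       # per dollar I hold on to
    "help_strangers": 2.0,
}

def score(action_features: dict) -> float:
    """Weigh an action by summing its features under my chosen weights."""
    return sum(value_weights[k] * v for k, v in action_features.items())

pay       = {"keep_commitments": 1, "money_kept": -100, "help_strangers": 1}
walk_away = {"keep_commitments": -1, "money_kept": 0, "help_strangers": 0}

print(score(pay), score(walk_away))  # 6.0 vs -5.0 with these particular weights
```

The more of these weights I pick independently, the more ways there are for some situation to set two of them against each other.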