OK, I think I understand what you’re saying a little better… thanks for clarifying.
It seems to me that decision theory simply tells me that if I estimate that paying the driver improves the state of the world (including the driver) by some amount I value more than I value the loss to me, then I should pay the driver, and if not, I shouldn’t. In principle it also gives me some tools for estimating the effect on the world of paying or not paying the driver, though in practice those tools often boil down to “answer hazy, try again later”.
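To make that concrete (this is just my own sketch of the standard expected-utility form, not anything spelled out above), the rule is something like

\[ \text{pay} \iff \mathbb{E}\big[U(\text{world}) \mid \text{pay}\big] \;>\; \mathbb{E}\big[U(\text{world}) \mid \text{don't pay}\big] \]

where U is my utility function, which already bakes in the value choice of how much weight value-to-me gets relative to value-to-the-driver-and-everyone-else, and the expectation is where the “answer hazy, try again later” estimation problem lives.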
Whereas most moral theories tell me whether I should pay the driver or not, and the most popularly articulated real-world moral theories tell me to pay the driver without bothering to estimate the effect of that action on the world in the first place. Which makes sense, if I can’t reliably estimate that effect anyway.
So I guess I’d say that detailed human morality can in principle be justified by decision theory plus a small number of value choices (e.g., how value-to-me compares to value-to-the-world-other-than-me). In practice humans can’t actually carry out those estimates, so instead we justify it by decision theory plus a large number of value choices (e.g., how fulfilling-my-commitments compares to blowing-off-my-commitments). And there’s a big middle ground of cases where we probably could derive the answer from the smaller set of value choices but aren’t in the habit of doing so, so we end up making more value choices than we strictly need to. (And our formal moral structures are therefore larger than they strictly need to be, even given human limitations.)
And of course, the more distinct value choices I make, the greater the chance of finding some situation in which my values conflict.