It seems uncontroversial that a substantial amount of behavior that society labels as altruistic (i.e. self-sacrificing) can be justified by decision theoretic concepts like reputation and such. For example, the “altruistic” behavior of bonobos is strong evidence to me that decision theory can justify more altruism than I personally know how to derive from it. (Obviously, this assumes that bonobo behavior is as de Waal describes.)
Still, I have an intuition that human morality cannot be completely justified on the basis of decision theory. Yes, superrationality and such, but that’s not mathematically rigorous AFAIK and thus is susceptible to being used as a just-so story.
Does anyone else have this intuition? Can the sense that morality is more than game theory be justified by evidence or formal logic?
Morality is a goal, like making paperclips. That doesn’t follow from game-theoretic considerations.
Fair enough. But I still have the intuition that a common property of moral theories is a commitment to instrumental values that require decisions different from those recommended by game theory.
One response is to assert that game theory is about maximizing utility, so any apparent contradiction between game theory and your values arises solely out of your confusion about the correct calculation of your utility function (i.e. the value should be folded into the utility payoff so that game theory recommends the decision consistent with your values). I find this answer unsatisfying, but I’m not sure whether the dissatisfaction is rational.
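Here is a toy version of that response, where every number is a hypothetical placeholder. It shows how folding the value into the payoff flips the recommendation by construction, which may be part of why I find it unsatisfying:

```python
# Toy sketch (all numbers hypothetical): adjusting the utility payoff with a
# value term so that "maximize utility" recommends the value-consistent choice.

MATERIAL_GAIN_FROM_DEFECTING = 50   # what I gain by acting against my values
VALUE_OF_ACTING_ON_MY_VALUES = 80   # the adjustment this response posits

def utility(act_on_values: bool, include_value_term: bool) -> int:
    material = 0 if act_on_values else MATERIAL_GAIN_FROM_DEFECTING
    value_term = VALUE_OF_ACTING_ON_MY_VALUES if (act_on_values and include_value_term) else 0
    return material + value_term

print(utility(True, False), utility(False, False))  # 0 50  -> defecting "wins"
print(utility(True, True), utility(False, True))    # 80 50 -> acting on my values "wins", by construction
```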
Yes, lots of other people have the intuition that human morality requires more than decision theory to justify it. For example, it’s a common belief among several sorts of theists that one cannot have morality without some form of divine intervention.
I wasn’t clear. My question wasn’t about the justifications so much as the implications of morality.
In other words, is it a common property of moral theories that they call for different decisions than those called for by decision theory?
I suspect we’re still talking past each other. Perhaps it will help to be concrete.
Can you give me an illustrative example of a situation where decision theory calls for one decision, but your intuition is that moral theories should/might/can call for a different one?
I’m thinking of a variant of Parfit’s Hitchhiker. Suppose the driver lets you in the car. When you get to the city, a straightforward (causal) decision-theoretic calculation says not to pay.
To avoid that result, you can posit reputation-based justifications (protecting your own reputation, creating an incentive for future rescues, etc.). Or you can invoke third-party coercion (e.g., a lawsuit for breach of contract). But I think it’s very plausible to assert that these mechanisms wouldn’t apply: it’s a big, anonymous city; rescuing hitchhikers from peril is uncommon enough that no reputation is at stake; and how do you sue someone who just walks away and disappears from your life?
Yet I think most moral theories in current practice say to pay, even though you could get away with not paying.
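To make the bare payoff structure concrete, here’s a toy sketch (all numbers are made up, and reputation and enforcement are assumed away entirely): once you’re in the city, the forward-looking calculation favors not paying, even though only someone disposed to pay would have been rescued in the first place.

```python
# Toy Parfit's Hitchhiker (hypothetical numbers; reputation and enforcement
# assumed away). The driver rescues you only if she predicts you'll pay;
# once you're in the city, the only forward-looking consequence of paying
# is losing the fare.

RESCUE_VALUE = 1_000_000   # value of not being left stranded
FARE = 100                 # the promised payment

def payoff_once_in_city(pay: bool) -> int:
    return -FARE if pay else 0       # this is the calculation that says "don't pay"

def payoff_given_disposition(disposed_to_pay: bool) -> int:
    rescued = disposed_to_pay        # the driver predicts your disposition
    if not rescued:
        return -RESCUE_VALUE
    return payoff_once_in_city(pay=disposed_to_pay)

print(payoff_once_in_city(False), payoff_once_in_city(True))            # 0 -100
print(payoff_given_disposition(True), payoff_given_disposition(False))  # -100 -1000000
```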
OK, I think I understand what you’re saying a little better… thanks for clarifying.
It seems to me that decision theory simply tells me that if I estimate that paying the driver improves the state of the world (including the driver) by some amount that I value more than I value the loss to me, then I should pay the driver, and if not I shouldn’t. And in principle it gives me some tools for estimating the effect on the world of paying or not-paying the driver, which in practice often boil down to “answer hazy, try again later”.
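A minimal sketch of the calculation I mean, where every number is a placeholder for an estimate I can’t actually make reliably:

```python
# Sketch of the "pay iff I value the improvement to the world more than my loss"
# calculation; every number is a placeholder for an estimate I can't make reliably.

COST_TO_ME = 100           # what paying the driver costs me
BENEFIT_TO_DRIVER = 120    # my guess at how much the payment improves things for the driver
WORLD_WEIGHT = 0.9         # how I weigh value-to-others against value-to-me (a value choice)

def should_pay() -> bool:
    return WORLD_WEIGHT * BENEFIT_TO_DRIVER > COST_TO_ME

print(should_pay())  # True with these numbers; drop WORLD_WEIGHT to 0.5 and it flips to False
```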
Whereas most moral theories tell me whether I should pay the driver or not, and the most popularly articulated real-world moral theories tell me to pay the driver without bothering to estimate the effect of that action on the world in the first place. Which makes sense, if I can’t reliably estimate that effect anyway.
So I guess I’d say that detailed human morality in principle can be justified by decision theory plus a small number of value choices (e.g., how does value-to-me compare to value-to-the-world-other-than-me?). In practice humans can’t do that, so instead we justify it by decision theory plus a large number of value choices (e.g., how does fulfilling-my-commitments compare to blowing-off-my-commitments?). And there’s a big middle ground of cases where we probably could do the former but aren’t in the habit of doing so, so we end up making more value choices than we strictly need to. (And our formal moral structures are therefore larger than they strictly need to be, even given human limitations.)
And of course, the more distinct value choices I make, the greater the chance of finding some situation in which my values conflict.