There exists a set of maxims which all intelligent and social agents find it in their long-term interest
I can see how with that definition of morality it could be sensibly theorized as objective. I don’t think that sentence is true, as there are many people (e.g. suicide bombers) whose evaluations of their long-term interest are significant outliers relative to those of other agents.
I don’t think that sentence is true, as there are many people (e.g. suicide bombers) whose evaluations of their long-term interest are significant outliers relative to those of other agents.
That’s right, but this exception (people whose interests are served by violating the moral norm) itself has a large exception, which is that throughout most of the suicide bomber’s life, he (rightly) respects the moral norm. Bad people can’t be bad every second of their lives—they have to behave themselves the vast majority of the time if for no other reason than to survive until the next opportunity to be bad. The suicide bomber has no interest in surviving once he presses the button, but for every second of his life prior to that, he has an interest in surviving.
And the would-be eventual suicide bomber also, through most of his life, has no choice but to enforce moral behavior in others if he wants to make it to his self-chosen appointment with death.
If we try to imagine someone who never respects recognizable norms (it is hard to imagine), for one thing they would probably make most of the “criminally insane” look perfectly normal and safe to be around by contrast.
Upvoted. The events you describe make sense and your reasoning seems valid. Do you think, based upon any of our discussion, that we disagree on the substance of the issue in any way? If so, what part of my map differs from yours?
I’m withholding judgment for now because I’m not sure if or where we differ on any specifics.
That one agent’s preferences differ greatly from the norm does not automatically make cooperation impossible. In a non-zero-sum game of perfect information, there is always a gain to be made by cooperating. Furthermore, it is usually possible to restructure the game so that it is no longer zero-sum.
For example, a society confronting a would-be suicide bomber will (morally and practically) incarcerate him, if it has the information and the power to do so. And, once thwarted from his primary goal, the would-be bomber may find that he now has some common interests with his captors. The game is no longer zero-sum.
So I don’t think that divergent interests are a fatal objection to my scheme. What may be fatal is that real-world games are not typically games with perfect information. Sometimes, in the real world, it is advantageous to lie about your capabilities, values, and intentions. At least advantageous in the short term. Maybe not in the long term.
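To make the gain-from-cooperation idea concrete, here is a minimal Python sketch with made-up Prisoner’s-Dilemma-style payoffs (my own numbers, not anything from this thread): the game is not zero-sum, and the mutually cooperative outcome is better for both players than the mutually defecting one.

    # A 2x2 game with illustrative payoffs. Keys are (player 1's move, player 2's move);
    # values are (player 1's payoff, player 2's payoff). Moves: 0 = defect, 1 = cooperate.
    payoffs = {
        (0, 0): (1, 1),   # both defect
        (0, 1): (3, 0),   # player 1 defects, player 2 cooperates
        (1, 0): (0, 3),   # player 1 cooperates, player 2 defects
        (1, 1): (2, 2),   # both cooperate
    }

    def is_constant_sum(game):
        """True if the payoffs add up to the same total in every outcome
        (i.e. zero-sum up to an additive constant)."""
        return len({sum(p) for p in game.values()}) == 1

    both_defect = payoffs[(0, 0)]
    both_cooperate = payoffs[(1, 1)]

    print(is_constant_sum(payoffs))                                 # False: not zero-sum
    print(all(c > d for c, d in zip(both_cooperate, both_defect)))  # True: both gain by cooperating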
Can’t I construct trivial examples where this is false? E.g. the one-by-two payoff matrices (0,100) and (1,-1).
That is a zero-sum game. (Linear transformations of the payoff matrices don’t change the game.)
It is also a game with only one player. Not really a game at all.
ETA: If you want to allow ‘games’ where only one ‘agent’ can act, then you can probably construct a non-zero-sum example by offering the active player three choices (A, B, and C). If the active player prefers A to B and B to C, and the passive player prefers B to C and C to A, then the game is non-zero-sum since they both prefer B to C.
I suppose there are cases like this in which what I would call the ‘cooperative’ solution can be reached without any cooperation—it is simply the dominant strategy for each active player. (A in the example above). But excluding that possibility, I don’t believe there are counterexamples.
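For what it’s worth, here is a small Python sketch of both points, with numbers of my own choosing: an explicit positive rescaling that exhibits the (0,100) / (1,-1) example as zero-sum, and ordinal utilities matching the stated A/B/C preference orders.

    # 1. The (0, 100) / (1, -1) example is zero-sum after a positive affine rescaling of
    #    player 2's payoffs: u2 -> u2/101 - 100/101 sends 100 -> 0 and -1 -> -1, so the
    #    rescaled payoffs sum to zero in every outcome.
    outcomes = [(0, 100), (1, -1)]
    rescaled = [(u1, u2 / 101 - 100 / 101) for u1, u2 in outcomes]
    print(all(abs(u1 + u2) < 1e-9 for u1, u2 in rescaled))   # True

    # 2. The one-active-player A/B/C example, with ordinal utilities chosen only to
    #    match the stated preference orders (higher number = more preferred).
    active_utility  = {"A": 3, "B": 2, "C": 1}   # active player:  A > B > C
    passive_utility = {"A": 1, "B": 3, "C": 2}   # passive player: B > C > A

    # Both players strictly prefer B to C, so their interests are not exactly opposed
    # and no order-preserving rescaling can make the payoffs sum to a constant.
    print(active_utility["B"] > active_utility["C"] and
          passive_utility["B"] > passive_utility["C"])       # True

    # The active player's unilateral choice is A, which also happens to be Pareto
    # optimal here (no outcome is better for both players), matching the remark above
    # about the 'cooperative' solution being reachable without cooperation.
    print(max("ABC", key=active_utility.get))                # A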
Rather than telling me how my counterexample violates the spirit of what you meant, can you say what you mean more precisely? What you’re saying in 1. and 2. is literally false, even if I kind of (only kind of) see what you’re getting at.
When I make it precise, it is a tautology. Define a “strictly competitive game” as one in which all ‘pure outcomes’ (i.e. results of pure strategies by all players) are Pareto optimal. Then, in any game which is not ‘strictly competitive’, cooperation can result in an outcome that is Pareto optimal—i.e. better for both players than any outcome that can be achieved without cooperation.
The “counter-example” you supplied is ‘strictly competitive’. Some game theory authors take ‘strictly competitive’ to be synonymous with ‘zero sum’. Some, I now learn, do not.
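One way to encode that definition is sketched below (the encoding and example payoffs are mine, not from the thread): a game is treated as ‘strictly competitive’ exactly when no pure outcome Pareto-dominates another, and the (0,100) / (1,-1) example passes that test while a game with a mutually preferred outcome does not.

    # A game counts as 'strictly competitive' iff every pure outcome is Pareto optimal,
    # i.e. no other pure outcome makes some player better off without making anyone worse off.

    def pareto_dominates(p, q):
        """True if payoff vector p is at least as good as q for every player
        and strictly better for at least one."""
        return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

    def is_strictly_competitive(outcomes):
        """outcomes: payoff vectors, one per pure-strategy profile."""
        return not any(pareto_dominates(p, q) for p in outcomes for q in outcomes if p != q)

    # The (0, 100) / (1, -1) example: neither outcome Pareto-dominates the other, so
    # every pure outcome is Pareto optimal and the game is strictly competitive.
    print(is_strictly_competitive([(0, 100), (1, -1)]))                 # True

    # A game with a mutually preferred outcome is not strictly competitive:
    # (2, 2) Pareto-dominates (1, 1), so cooperation can yield a Pareto improvement.
    print(is_strictly_competitive([(1, 1), (3, 0), (0, 3), (2, 2)]))    # False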
That one agent’s preferences differ greatly from the norm does not automatically make cooperation impossible.
I wasn’t arguing that cooperation is impossible. From everything you said there it looks like your understanding of morality is similar to mine:
Agents each judging possible outcomes based upon subjective values and taking actions to try to maximize those values, where the ideal strategy can vary between cooperation, competition, etc.
This makes sense, I think, when you say:
For example, a society confronting a would-be suicide bomber will (morally and practically) incarcerate him
The members of that society do that because they prefer the outcome in which he does not carry out a suicide attack against them to the one in which he does.
once thwarted from his primary goal, the would-be bomber may find that he now has some common interests with his captors
This phrasing seems exactly right to me. The would-be bomber may elect to cooperate, but only if he feels that his long-term values are best fulfilled in that manner. It is also possible that the bomber will resent his captivity, and if released will try again to attack.
If his utility function assigns (carry out martyrdom operation against the great enemy) an astronomically higher value than his own survival or material comfort, it may be impossible for society to contrive circumstances in which he would agree to long-term cooperation.
This sort of morality, where agents negotiate their actions based upon their self-interest and the impact of others’ actions until they reach an equilibrium, makes perfect sense to me.