I think your focus on payoffs is diluting your point. In all of your scenarios, the thing enabling a defection is the inability to view another player’s strategy before committing to a strategy. Perhaps you can simplify your definition to the following:
“A defection is when someone (or some sub-coalition) benefits from violating their expected coalition strategy.”
You can define a function that assigns a strategy to every possible coalition. Given an expected coalition strategy C, if the payoff for any sub-coalition strategy SC is greater than their payoff in C, then the sub-coalition SC is incentivized to defect. (Whether that means SC joins a different coalition or forms their own is irrelevant.)
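To make that incentive condition concrete, here is a minimal sketch. Everything in it is illustrative and mine, not from the post: the payoff numbers, the coalition names, and the assumption that a coalition’s payoff is split evenly among its members.

```python
from itertools import chain, combinations

# Hypothetical payoffs: maps (coalition, strategy) -> total payoff for that
# coalition. All names and numbers are illustrative assumptions.
payoff = {
    (("A", "B", "C"), "cooperate"): 9,   # 3 each under the expected strategy
    (("A", "B"), "split_off"): 8,        # 4 each if A and B go it alone
    (("A",), "solo"): 2,
    (("B",), "solo"): 2,
    (("C",), "solo"): 1,
}

def proper_subcoalitions(coalition):
    """All nonempty proper sub-coalitions of the coalition."""
    members = list(coalition)
    return chain.from_iterable(
        combinations(members, r) for r in range(1, len(members))
    )

def defection_incentives(coalition, expected_strategy):
    """Sub-coalitions with a strategy whose per-member payoff beats their
    share under the expected coalition strategy (even split assumed)."""
    share = payoff[(coalition, expected_strategy)] / len(coalition)
    incentives = []
    for sc in proper_subcoalitions(coalition):
        for (c, s), p in payoff.items():
            if c == sc and p / len(sc) > share:
                incentives.append((sc, s))
    return incentives

print(defection_incentives(("A", "B", "C"), "cooperate"))
```

With these numbers, the pair {A, B} is incentivized to defect (4 each beats 3 each), while no singleton is. Note the check doesn’t care where {A, B} ends up afterward, matching the point that joining another coalition versus forming a new one is irrelevant.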
This makes a few things clear that are hidden in your formalization. Specifically:
The main difference between this framing and the framing for Nash Equilibrium is the notion of an expected coalition strategy. Where there is an expected coalition strategy, one should aim to follow a “defection-proof” strategy. Where there is no expected coalition strategy, one should aim to follow a Nash Equilibrium strategy.
Your Proposition 3 is false. You would need a variant that takes coalitions into account.
I believe all of your other theorems and propositions follow from the definition as well.
This has other benefits as well.
It factors the payoff table into two tables that are easier to understand: coalition selection and coalition strategy selection.
It’s better aligned with intuition. Defection in the colloquial sense is when someone deserts “their” group (i.e., joins a new coalition in violation of the expectation). Coalition selection encodes that notion cleanly. The payoff tables for coalitions cleanly encode the more generalized notion of “rational action” in scenarios where such defection is possible.
In all of your scenarios, the thing enabling a defection is the inability to view another player’s strategy before committing to a strategy.
Depends what you mean by “committing”. If I move after you in PD, and I can observe your action, and I see you cooperate, then the best response (putting aside Newcomblike issues) is to defect.
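That sequential point can be checked with a quick sketch, using the textbook PD payoffs T=5, R=3, P=1, S=0 (the specific numbers are an assumption; no payoff table is fixed in this thread):

```python
# One-shot Prisoner's Dilemma payoffs: (first mover's payoff, second mover's).
# Standard textbook values, assumed here for illustration.
PD = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(observed_first_move):
    """Second mover's payoff-maximizing reply once the first move is visible."""
    return max("CD", key=lambda mine: PD[(observed_first_move, mine)][1])

print(best_response("C"))  # "D": defect even against observed cooperation
```

Observability alone doesn’t remove the incentive: the second mover’s best response is D whatever was observed.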
“A defection is when someone (or some sub-coalition) benefits from violating their expected coalition strategy.”
This feels quite different from defection. Imagine we’re negotiating resource allocation. The “expected coalition strategy” is, let’s say, “no one gets any”. By this definition, is it a defection to then propose an even allocation of resources (a Pareto improvement)?
Another question: how does this idea differ from the core in cooperative game theory?
The “expected coalition strategy” is, let’s say, “no one gets any”. By this definition, is it a defection to then propose an even allocation of resources (a Pareto improvement)?
In my view, yes. If we agreed that no one should get any resources, then it’s a violation for you to get resources, or to deceive me so that you get resources.
I think the difference is in how the two of us view a strategy. In my view, it’s perfectly acceptable for the coalition strategy to include a clause like “it’s okay to do X if it’s a Pareto improvement for our coalition.” If that’s part of the coalition strategy we agree to, then Pareto improvements are never defections. If our coalition strategy does exclude unilateral actions that are Pareto improvements, then it is a defection to take such actions.
Another question: how does this idea differ from the core in cooperative game theory?
I’m not a mathematician or an economist, my knowledge on this hasn’t been tested, and I just discovered the concept from your reply. Please read the following with a lot of skepticism because I don’t know how correct it is.
Some type differences:
A core is a set of allocations; I’m going to call its elements “core allocations” so it’s less confusing.
A defection is a change in strategy (per both of our definitions).
As far as the relationship between the two:
A core allocation satisfies a particular robustness property: it’s stable under coalition refinements. A “coalition refinement” here is an operation in which a coalition is replaced by a partition of that coalition. Because a core allocation is stable under coalition refinements, no coalition will partition itself for rational reasons. So if you have coalitions {A, B} and {C}, then every core allocation is robust against {A, B} splitting up into {A}, {B}.
Defections (per my definition) don’t deal strictly with coalition refinements. If one member leaves a coalition to join another, that’s still a defection. In this scenario, {A, B}, {C} is replaced with {A}, {B, C}. Core allocations don’t deal with this scenario since {A}, {B, C} is not a refinement of {A, B}, {C}. As a result, core allocations are not necessarily robust to defections.
I could be wrong that core allocations concern only refinements. I think it’s safe to say, though, that core allocations are robust against some (maybe all) defections.
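The refinement distinction above can be made precise: a new partition refines an old one exactly when every new coalition sits inside some old coalition, so coalitions may split but never regroup. A quick sketch (coalition names are illustrative):

```python
def is_refinement(new_partition, old_partition):
    """True iff every block of new_partition is a subset of some block of
    old_partition, i.e. coalitions only split, never regroup."""
    return all(
        any(block <= old for old in old_partition)
        for block in new_partition
    )

old = [{"A", "B"}, {"C"}]
print(is_refinement([{"A"}, {"B"}, {"C"}], old))  # {A,B} splits up: True
print(is_refinement([{"A"}, {"B", "C"}], old))    # B joins C: False
```

The second case is the scenario above: B leaving {A, B} for {B, C} is a defection in the proposed sense, but it isn’t a refinement, so core stability says nothing about it.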
This is an important disagreement on terminology, and may be a good reason to avoid “cooperate” and “defect” as technical words. They have a much broader meaning than used here.
Whether “defect” is about reducing sum of payouts of the considered participants, or about violating agreements (even with better outcomes), or about some other behavior, use of the word without specification is going to be ambiguous.
The actual post is about payouts. Please keep in mind that these are _not_ resources, but utility. The pain of violating expectations and difficulty in future cooperation is already included (or should be) in those payout numbers.
This can turn into a very long discussion. I’m okay with that, but let me know if you’re not so I can probe only the points that are likely to resolve. I’ll raise the contentious points regardless, but I don’t want to draw focus on them if there’s little motivation to discuss them in depth.
I agree that a split in terminology is warranted, and that “defect” and “cooperate” are poor choices. How about this:
Coalition members may form consensus on the coalition strategy. Members of a coalition may follow the consensus coalition strategy or violate the consensus coalition strategy.
Members of a coalition may benefit the coalition or hurt the coalition.
Benefiting the coalition means raising its payoff regardless of consensus. Hurting the coalition means reducing its payoff regardless of consensus. A coalition may form consensus on the coalition strategy regardless of the optimality of that strategy.
Contentious points:
I expect that treating utility so generally will lead to paradoxes, particularly when utility functions are defined in terms of other utility functions. That case is extremely important when strategies take trust into account, so I expect such a general notion of utility to produce paradoxes when used to reason about trust.
“Utility is not a resource.” I think this is a useful distinction when trying to clarify goals, but not a useful distinction when trying to make decisions given a set of goals. In particular, once the payoff tables are defined for a game, the goals must already have been defined, and so utility can be treated as a resource in that game.
I’m not sure a long discussion with me is helpful—I mostly wanted to point out that there’s a danger of being misunderstood and talking past each other, and “use more words” is often a better approach than “argue about the words”.
I am especially the wrong person to argue about fundamental utility-aggregation problems. I don’t think ANYONE has a workable theory about how Utilitarianism really works without an appeal to moral realism that I don’t think is justified.
Understood. I do think it’s significant though (and worth pointing out) that a much simpler definition yields all of the same interesting consequences. I didn’t intend to just disagree for the sake of getting clearer terminology. I wanted to point out that there seems to be a simpler path to the same answers, and that simpler path provides a new concept that seems to be quite useful.