Good example. I’ll use this to talk about what I think is the right way to think about this.
First things first: true zero-sum games are ridiculously rare in the real world. There’s always some way to achieve mutual gains—even if it’s just “avoid mutual losses” (as in e.g. mutual assured destruction). Of course, that does not mean that an enemy can be trusted to keep a deal. As with any deal, it’s not a good deal if we don’t expect the enemy to keep it.
The mutual gains do have to be real in order for “working with monsters” to make sense.
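To make the “avoid mutual losses” point concrete, here’s a minimal toy sketch of MAD-style payoffs (the numbers are invented purely for illustration, not taken from anywhere): the outcome totals aren’t constant, so no zero-sum description fits even though the players are enemies.

```python
# Toy "mutual assured destruction" payoffs (numbers invented for illustration):
# even between sworn enemies the game is not zero-sum, because "both launch"
# is worse for both sides than "neither launches".
payoffs = {  # (side_1's move, side_2's move) -> (payoff_1, payoff_2)
    ("hold",   "hold"):   (0,    0),
    ("launch", "hold"):   (5, -100),
    ("hold",   "launch"): (-100, 5),
    ("launch", "launch"): (-100, -100),
}

totals = {moves: sum(p) for moves, p in payoffs.items()}
print(totals)
# The totals differ across outcomes (0, -95, -95, -200), so no zero-sum (or
# constant-sum) description fits: avoiding mutual destruction is a real mutual gain.
```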
That said… I think people tend to have a gut-level desire to not work with monsters. This cashes out as motivated stopping: someone thinks “ah, but I can’t really trust the enemy to uphold their end of the deal, can I?”… and they use that as an excuse to not make any deal at all, without actually considering (a) whether there is actually any evidence that the enemy is likely to break the deal (e.g. track record), (b) whether it would actually be in the enemy’s interest to break the deal, or (c) whether the deal can be structured so that the enemy has no incentive to break it. People just sort of horns-effect, and assume the Bad Person will of course break a deal because that would be Bad.
(There’s a similar thing with reputational effects, which I expect someone will also bring up at some point. Reputational effects are real and need to be taken into consideration when thinking about whether a deal is actually net-positive-expected-value. But I think people tend to say “ah, but dealing with this person will ruin my reputation”… then use that as an excuse to not make a deal, without considering (a) how highly-visible/salient this deal actually is to others, (b) how much reputational damage is actually likely, or (c) whether the deal can plausibly be kept secret.)
true zero-sum games are ridiculously rare in the real world. There’s always some way to achieve mutual gains—even if it’s just “avoid mutual losses”
I disagree.
I think you’re underestimating how deep value differences can be, and how those values play into everything a person does. Countries with nuclear weapons and opposing interests are actively trying to destroy each other without destroying themselves in the process, and if you’re curious about the failures of MAD, I’d suggest reading The Doomsday Machine by Daniel Ellsberg. If that book is to be taken as mostly true, and the many-worlds interpretation (MWI) is to be taken as true, then I suspect that many, many worlds were destroyed by nuclear missiles. When I found this unintuitive, I spent a day thinking about quantum suicide to build that intuition: most instances of all of us are dead because we relied on MAD. We’re having this experience now, with me writing this comment and you reading it, because everything that can happen will happen in some branch of the multiverse. That means our existence is only weak evidence for the efficacy of MAD, and all of those very close calls are stronger evidence for our destruction in other branches. This doesn’t mean we’re in the magic branch where MAD works; it means we’ve gotten lucky so far. Our futures are infinite split branches of parallel mes and yous, and in most of those where we rely on strategies like MAD, we die.
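For intuition, here’s a toy simulation of that survivorship-bias point. The number of close calls and the per-crisis escalation probability below are made up for illustration; they are not estimates of real risk.

```python
import random

# Toy model of the survivorship-bias point above. N_CLOSE_CALLS and
# P_ESCALATION are made-up illustration numbers, not estimates of real risk.
N_BRANCHES = 100_000   # hypothetical branches / possible histories
N_CLOSE_CALLS = 20     # serious close calls per branch
P_ESCALATION = 0.1     # assumed chance each close call escalates to nuclear war

surviving = sum(
    all(random.random() > P_ESCALATION for _ in range(N_CLOSE_CALLS))
    for _ in range(N_BRANCHES)
)
print(f"Fraction of branches surviving every close call: {surviving / N_BRANCHES:.3f}")
# With these numbers roughly (1 - 0.1) ** 20 ≈ 0.12 of branches survive. Observers
# only exist in surviving branches, so "we're still here" is weak evidence about
# P_ESCALATION; the record of near misses is the more informative evidence.
```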
...
Scissor statements reveal pre-existing differences in values; they don’t create them. There really are people out there who have values that result in them doing terrible things. Furthermore, beliefs and values aren’t just clothes we wear—we act on them, and live by them. So it’s reasonable to assume that if someone has particularly heinous beliefs and values, they act on those beliefs and values.
In the SSC short story, scissor statements are used to tear apart Mozambique, and in real life we see propagandists using scissor statements to split up activist coalitions. It’s not hypothetical: divide and conquer is a useful strategy, and it has probably been used since the dawn of time. But not all divides are created equal.
In the 1300s in rural France, peasants revolted against the enclosure of the commons, and since many of these revolts were led by women, the nascent state officials focused their efforts on driving a (false) wedge between men and women, accusing those women of being witches and followers of Satan. Scissor statements (from what I can tell) are similar in that they’re a tactic used to split up a coalition, but different in that they’re not inventing conflict. It doesn’t seem to make much of a difference in terms of outcome (conflict) once people have sorted themselves into opposing groups, but equating the two is a mistake. You’re losing something real if you ally yourself with someone you’re not value-aligned with, and you’re not losing something real if you ally yourself with someone you are value-aligned with but mistakenly think is your enemy. In the first case, the pool of power held by people who share your values shrinks, because another group that wants to destroy you now has more power.
If two groups form a coalition, and group_A values “biscuits for all” while group_B values “cookies for all,” and someone tries to start a fight between them based on this language difference, it would be tragic for them to fight, because it should be obvious that what they want is the same thing; they’re just using different language to talk about it. And if they team up, group_A won’t be tempted to deny group_B cookies, because deep down they value cookies for all, including group_B. It’s baked into their decision-making process.
(And if they decide that what they want to spend all their time doing is argue over whether they should call their baked food product “cookies” or “biscuits,” then what they actually value is arguing about pedantry, not “cookies for all.”)
But in a counter-example, if group_A values “biscuits for all” and group_B values “all biscuits for group_B,” then group_B will find it very easy, and very cognitively available, to think of strategies which result in biscuits for group_B and not group_A. If someone is having trouble imagining this, that may be because it’s difficult to imagine someone wanting the cookies only for themselves, so they assume the other group wouldn’t defect, because “cookies for all? What’s so controversial about that?” Except group_B fundamentally doesn’t want group_A getting their biscuits, so any attempt at cooperation is going to be a mess: group_A has to keep double-checking to make sure group_B is really cooperating, because defecting comes so naturally to group_B that they’ll have trouble avoiding it. And so giving group_B power is like giving someone power when you know they’re later going to use it to hurt you and take your biscuits.
And group_B will, because they value group_B having all the biscuits, and have a hard time imagining that anyone would actually want everyone to have biscuits, unless they’re lying or virtue signalling or something. And they’ll push and push because it’ll seem like you’re just faking.
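To make the incentive structure explicit, here’s a toy payoff table (the numbers are mine, chosen only to encode “group_B wants all the biscuits for group_B”): under these numbers, defecting is group_B’s best response no matter what group_A does.

```python
# Toy payoffs for the biscuits example above. Entries are keyed by
# (A's move, B's move) and map to (payoff_to_A, payoff_to_B). The numbers are
# invented; they only encode "A wants biscuits for all, B wants them all for B".
payoffs = {
    ("cooperate", "cooperate"): (2, 2),  # biscuits get shared
    ("cooperate", "defect"):    (0, 3),  # B grabs A's share
    ("defect",    "cooperate"): (1, 0),  # A guards its biscuits, coalition frays
    ("defect",    "defect"):    (1, 1),  # no coalition at all
}

def best_response_for_B(a_move):
    """Return the move that maximizes B's payoff given A's move."""
    return max(["cooperate", "defect"], key=lambda m: payoffs[(a_move, m)][1])

for a_move in ("cooperate", "defect"):
    print(f"If A plays {a_move!r}, B's best response is {best_response_for_B(a_move)!r}")
# With these numbers B's best response is 'defect' either way, which is why A ends
# up spending its effort on monitoring B instead of on getting biscuits to people.
```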
...
I find the way people respond to scissor statements (“don’t bring that up, it’s a scissor statement/divisive!”) benefits only the status quo. And if the status quo benefits some group of people, then of course that group is going to eschew divisiveness.
...
To bring it back to the Spanish Civil War, the communists were willing to ally themselves with big businesses, businesses that were also funding the fascists. They may have told themselves it was a means to an end, and for all I know (my knowledge of the Spanish Civil War is limited to a couple of books) the communists may have been planning to betray those big business interests in the end. But in the meantime, they advanced the causes of those big business interests, and undermined the people who stood against everything the fascists fought for. It’s difficult to say what would’ve happened if the anarchists had tried a gambit to force the hand of big business to pick a side (communist or fascist) or had simply ignored the communists’ demands. But big business interests were more supportive of Franco winning (because he was good for business), and their demands of the communists in exchange for money weakened the communists’ position. And because the communists twisted the anarchists’ arms, and the anarchists went along with it, the anarchists’ position was weakened too. In the end, the only groups that benefited from that sacrifice were big business interests and Franco’s fascists.
...
whether the deal can plausibly be kept secret.
That’s a crapshoot, especially in the modern day. Creating situations where groups need to keep secrets in order to function is the kind of strategy Julian Assange used to cripple government efficiency. The correct tactic is to keep as few secrets from your allies as you can, because if you’re actually allies, then you’ll benefit from the shared information.
The effectiveness or ineffectiveness of MAD as a strategy is not actually relevant to whether nuclear war is or is not a zero-sum game. That’s purely a question of payoffs and preferences, not strategy.
You’re losing something real if you ally yourself with someone you’re not value-aligned with, and you’re not losing something real if you ally yourself with someone you are value-aligned with but mistakenly think is your enemy. In the first case, the pool of power held by people who share your values shrinks, because another group that wants to destroy you now has more power.
The last sentence of this paragraph highlights the assumption: you are assuming, without argument, that the game is zero-sum. That gains in power for another group that wants to destroy you are necessarily worse for you.
This assumption fails most dramatically in the case of three or more players. For instance, in your example of the Spanish Civil War, it’s entirely plausible that the anarchist-communist alliance was the anarchists’ best bet—i.e. they honestly preferred the communists over the fascists, the fascists wanted to destroy them even more than the communists did, and an attempt at kingmaking was the only choice the anarchists actually had the power to make. In that world, fighting everyone would have seen them lose without any chance of gains at all.
In general, the key feature of a two-player zero-sum game is that anything which is better for your opponent is necessarily worse for you, so there is no incentive to cooperate. But this cannot ever hold between all three players in a three-way game: if “better for player 1” implies both “worse for player 2” and “worse for player 3”, then player 2 and player 3 are incentivized to cooperate against player 1. Three-player games always incentivize cooperation between at least some players (except in the trivial case where there’s no interaction at all between some of the players). Likewise in games with more than three players. Two-player games are a weird special case.
That all remains true even if all three+ players hate each other and want to destroy each other.
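Here’s a minimal numerical sketch of that point, with arbitrary payoffs of my own that sum to zero in every outcome (so the game is “zero-sum” overall): even then, every pair of players has some outcome they jointly prefer, at the third player’s expense.

```python
from itertools import combinations

# Toy three-player game. Payoffs are arbitrary but sum to zero in every outcome,
# i.e. the game is zero-sum overall, yet pairs of players still share interests.
outcomes = {
    "status quo":           (0, 0, 0),
    "2 and 3 gang up on 1": (-4, 2, 2),
    "1 and 3 gang up on 2": (3, -5, 2),
    "1 and 2 gang up on 3": (2, 2, -4),
}
assert all(sum(p) == 0 for p in outcomes.values())

# For every pair of players, find outcomes both strictly prefer to the status quo.
for i, j in combinations(range(3), 2):
    joint = [name for name, p in outcomes.items() if p[i] > 0 and p[j] > 0]
    print(f"Players {i + 1} and {j + 1} both gain in: {joint}")
# Every pair has such an outcome, so "better for them is necessarily worse for me"
# cannot hold between all three players at once; only the two-player zero-sum case
# rules out cooperation entirely.
```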
But in a counter-example, if group_A values “biscuits for all” and group_B values “all biscuits for group_B,” then group_B will find it very easy, and very cognitively available, to think of strategies which result in biscuits for group_B and not group_A. If someone is having trouble imagining this, that may be because it’s difficult to imagine someone wanting the cookies only for themselves, so they assume the other group wouldn’t defect, because “cookies for all? What’s so controversial about that?” Except group_B fundamentally doesn’t want group_A getting their biscuits, so any attempt at cooperation is going to be a mess: group_A has to keep double-checking to make sure group_B is really cooperating, because defecting comes so naturally to group_B that they’ll have trouble avoiding it. And so giving group_B power is like giving someone power when you know they’re later going to use it to hurt you and take your biscuits.
Note that, in this example, you aren’t even trying to argue that there’s no potential for mutual gains. Your actual argument is not that the game is zero-sum, but rather that there is overhead to enforcing a deal.
It’s important to flag this, because it’s exactly the sort of reasoning which is prone to motivated stopping. Overhead and lack of trust are exactly the problems which can be circumvented by clever mechanism design or clever strategies, but the mechanisms/strategies are often nonobvious.
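As one toy illustration of the kind of nonobvious mechanism I mean (my own example, not anything proposed in this thread): have both sides post a bond with an arbiter, forfeited on defection, sized so that defecting no longer pays.

```python
# Toy sketch of a bond/escrow mechanism (my own illustration). Each side posts a
# bond that an arbiter transfers to the injured party if the other side defects.
# Payoffs are arbitrary; the point is only how the best responses change.
RAW = {  # (A's move, B's move) -> (payoff_to_A, payoff_to_B), no mechanism
    ("cooperate", "cooperate"): (2, 2),
    ("cooperate", "defect"):    (0, 3),
    ("defect",    "cooperate"): (3, 0),
    ("defect",    "defect"):    (1, 1),
}
BOND = 2  # assumed to exceed the one-shot gain from defecting

def with_bond(a_move, b_move):
    """Payoffs after the arbiter transfers any forfeited bond to the injured party."""
    a, b = RAW[(a_move, b_move)]
    if a_move == "defect" and b_move == "cooperate":
        a, b = a - BOND, b + BOND
    if b_move == "defect" and a_move == "cooperate":
        a, b = a + BOND, b - BOND
    return a, b

print("B defects on a cooperating A, no bond:  ", RAW[("cooperate", "defect")])
print("B defects on a cooperating A, with bond:", with_bond("cooperate", "defect"))
# Without the bond, defecting on a cooperator pays B 3 > 2; with it, only 1 < 2,
# so mutual cooperation becomes stable -- assuming the arbiter can actually tell
# who defected, which is where the real-world difficulty usually lives.
```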
That gains in power for another group that wants to destroy you are necessarily worse for you.
Yes. In many real-life scenarios, this is true. In small games where the rules are blatant, it’s easier to tell if someone is breaking an agreement or trying to subvert you, so model games aren’t necessarily indicative of real-world conditions. For a real-life example, look at the US’s decision to fund religious groups to fight communists in the Middle East. If someone wants to destroy you, during the alliance they’ll work secretly to subvert you, and after the alliance is over, they’ll use whatever new powers they have gained to try to destroy you.
People make compromises that sacrifice things intrinsic to their stated beliefs when they believe it is inevitable they’ll lose — by making the “best bet”, they reveal that they weren’t really trying to win, that they’d utterly given up on winning. The point of anarchy is that there is no king. For an anarchist to be a kingmaker is for an anarchist to give up on anarchy.
And from a moral standpoint, what about the situation where someone is asked to work with a rapist, pedophile, or serial killer? We’re talking about heinous beliefs and actions here, things that would make someone a monster, not mundane “this person uses Ruby and I use Python” disagreements. What if working with a {rapist, pedo, serial killer} means they live to injure and kill another day? If that’s the outcome, then by working with them you’re enabling it, because you’re enabling them.
The last sentence of this paragraph highlights the assumption: you are assuming, without argument, that the game is zero-sum. That gains in power for another group that wants to destroy you are necessarily worse for you.
On the contrary, it highlights no such thing.*
This need not be the case for the argument to be correct: yes, gains in power for another group that wants to destroy you are worse for you—and this is so even if the game isn’t zero-sum.
*You may argue that this is the case with regard to that ‘assumption’, but you have not proved it.