I still haven’t read a good steelman of conflict theory (and I doubt I can offer one, though I’ll try), but I think it’s a bit deeper than just misguided mistake theory.
First of all, conflict theory seems a better fit for 0-sum games:
slaves want to be free, slaveholders want to keep their property
Mongols want territory, current inhabitants don’t want to become skull pyramids
privileged group wants disadvantaged group policed ruthlessly, since they are not affected by the police brutality but do benefit from the police catching more criminals on the margin. In other words, police false positives don’t affect the privileged group, but false negatives do (a toy calculation below makes this concrete).
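To make that last point concrete, here is a toy expected-cost calculation; the cost numbers and the linear error-rate model are made up purely for illustration:

```python
# Toy model, illustrative numbers only: expected cost of policing intensity p
# (0 = no policing, 1 = maximal policing) for two groups.
FP_COST = 5.0   # harm from being wrongly stopped/arrested (false positive)
FN_COST = 3.0   # harm from a criminal going uncaught (false negative)

def expected_cost(p, bears_false_positives):
    false_positive_rate = p        # more policing -> more wrongful stops
    false_negative_rate = 1.0 - p  # more policing -> fewer missed criminals
    cost = FN_COST * false_negative_rate
    if bears_false_positives:
        cost += FP_COST * false_positive_rate
    return cost

for p in (0.0, 0.5, 1.0):
    print(f"p={p}: exposed group {expected_cost(p, True):.1f}, "
          f"shielded group {expected_cost(p, False):.1f}")
# The shielded group's cost only falls as p rises, so it favours maximal policing;
# the exposed group's cost rises with p, so it favours much less of it.
```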
It’s a sort of unstated rationalist dogma that all 0-sum games can sort of be twisted into positive-sum games. I don’t think that’s true in principle, but it’s certainly not true given practical constraints (people are fallible, defect from agreements, etc.).
Conflict theory might also function better in practice, even though it has worse theoretical upside. For instance, worker unions are not an optimal solution to managers abusing employees, lack of profit sharing, or lack of workplace safety rules, but they do solve a coordination problem (in particular, an imbalance between workers’ and managers’ ability to coordinate). In an ideal world, managers would understand that unions are net bad, since unionising creates a somewhat redundant and sometimes adversarial parallel management structure, and would take measures to alleviate worker concerns so that workers don’t unionise.
But a simple, comprehensible conflict-theory-based solution might be stable, whereas the ideal mistake-theory solution might be plagued by defection risks or just be super hard to implement.
A lot of mistake theory also seems to suffer from conceptual explosion: there are no real sides, every person has their own interests, and we need to maximise utility for individuals who actually exist, not for statistical average individuals. Which is true, but also incredibly hard to turn into a workable social contract, compared with “here are 5 categories, their votes create an average statistical avatar, please those 5 avatars”.
Finally, when people run on different axioms, you can’t really use mistake theory; you’re essentially in an ideological 0-sum game. If I believe in Biblical literalism and young-earth creationism, there’s no meaningful compromise to be had with Darwinian evolution. I can surrender my axioms, and essentially become a new person, or not. The mistake theory answer to this is probably something like: people don’t really care about axioms that much, they care about the myriad outcomes (like you’re a YEC because you like the church community or you don’t want to argue with your parents). But I just don’t think that’s true. People seem to care a lot more about their axioms and core theories than about end outcomes, and often do weird causal origami to maintain their axioms in the face of apparently undesirable consequences.
Sorry, I seem to have gone off on a tangent and not actually addressed the post enough. Here’s a short comment directly on the post:
I don’t think socialists are merely misguided social democrats. I think they fundamentally make different predictions than social democrats: they don’t think a socialist state will devolve into Soviet- or Chinese-style horrors, and therefore they see milquetoast socialism as unnecessarily compromising with capitalist exploitation. There’s a somewhat plausible defence of that position: Soviet Russia and communist China weren’t radically different from their pre-communist forms, just more so. You wouldn’t expect a socialist USA or UK to suddenly open up gulags and have the secret police hunting people down. You’d probably expect it to be a local homeowners association or tedious government bureaucracy writ large. Statist socialism doesn’t really transform society as much as it would like to believe; just look at how Russia has reverted to a tsarist/aristocratic system and how China is reverting to a pseudo-imperial bureaucracy.
I wrote a defence of conflict theory here, in case that’s of interest (also crossposted to LessWrong here). It has some similarities to your 0-sum/positive-sum framing (which I like) but is more focused on historical examples.
It’s a sort of unstated rationalist dogma that all 0-sum games can sort of be twisted into positive-sum games.
In the formal maths of game theory, a zero-sum game is one where one player’s utility is precisely minus the other player’s utility (utilities are only defined up to scaling and adding a constant, so anything affinely equivalent to this also counts). This is a very special case and almost never happens in the real world. The alternative is a non-zero-sum game. Take the slave and slave-owner game: if you add a third option where they both kill each other, then both parties prefer the other two states to both killing each other, and the game is no longer zero-sum. That doesn’t stop it being a conflict in the sense that both parties want to take actions that harm the other. It just isn’t pure, 100% conflict.
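A minimal numerical sketch of that point (the outcome names and payoff values here are made up purely for illustration): with only the two original outcomes, every payoff pair sums to the same constant, so the game is equivalent to a zero-sum one; adding a mutual-destruction outcome that both parties rank last breaks that, even though the game remains mostly a conflict.

```python
# Illustrative payoffs only: (slave's utility, slave-owner's utility) per outcome.
def is_constant_sum(payoffs):
    """True if every outcome's payoffs sum to the same constant,
    i.e. the game is affinely equivalent to a zero-sum game."""
    return len({a + b for a, b in payoffs.values()}) == 1

two_outcomes = {
    "slave freed": (+1, -1),
    "slave kept":  (-1, +1),
}
three_outcomes = {**two_outcomes, "mutual destruction": (-10, -10)}

print(is_constant_sum(two_outcomes))    # True  -> (equivalent to) zero-sum
print(is_constant_sum(three_outcomes))  # False -> no longer zero-sum,
                                        # though still largely a conflict
```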
The post was about the differences between object-level and evangelistic mistake theory. The examples I gave are just illustrative of that difference. They’re not meant as particularly strong arguments in themselves; just examples of two different lines of thinking.
This is great, just the kind of conflict theory defense that I wanted to read. Thank you.