If everyone lies for their preferred cause, those who see through the lies trust no one, and those who don’t see through them act on false information.
If everyone believes enemies of their preferred cause should be driven out of society, then as many societies arise as there are causes, and none can so much as trade with another.
If everyone believes their opponents must be purged, everyone purges everyone else.
If everyone decides they must win by the sword, the Hobbesian state of nature results.
(Oh, hell, first I realize Kant was not an idiot, and now I realize Hobbes was not an idiot. Of course the state of nature is ahistorical—that’s part of the point!)
Breaking down the Schelling fence around the social norms built up over the state of nature is an effective way to gain power, but once you gain power, you have to make sure that the fence is restored—and that’s hard to do. It’s easier to destroy than to build. You can’t weigh winning by the sword against the status quo; you have to weigh one action with p probability of winning hard enough to restore the fence (and q probability of having your burning of the accumulated arrangements / store of knowledge be net-beneficial for whatever definition of ‘net-beneficial’ you’re using, vs. 1-q probability of having them not be) and 1-p probability of just breaking the fence.
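The weighing described above can be made concrete as a toy expected-value calculation. All the probabilities and utilities below are hypothetical placeholders chosen for illustration, not claims about any real conflict:

```python
# Toy expected-value sketch of the fence-breaking decision above.
# p: probability of winning hard enough to restore the fence;
# q: probability that burning the accumulated arrangements was net-beneficial.
# Utilities are arbitrary illustrative numbers.

def ev_break_fence(p, q, u_restored_good, u_restored_bad, u_broken):
    """Expected utility of 'winning by the sword'.

    With probability p you win and restore the fence (and with
    probability q the burning turns out net-beneficial); with
    probability 1 - p the fence simply stays broken."""
    ev_win = q * u_restored_good + (1 - q) * u_restored_bad
    return p * ev_win + (1 - p) * u_broken

u_status_quo = 0.0  # baseline: leave the fence standing

ev = ev_break_fence(p=0.3, q=0.5,
                    u_restored_good=10.0,  # fence restored, burning paid off
                    u_restored_bad=-2.0,   # fence restored, knowledge lost
                    u_broken=-20.0)        # fence stays broken: state of nature
print(ev)  # → -12.8, well below u_status_quo with these numbers
```

With these (made-up) numbers the sword loses badly to the status quo, because the large downside of a permanently broken fence dominates unless p is high. The point of the paragraph survives the arithmetic: the comparison is never "winning vs. status quo" but "a gamble over restoration vs. status quo."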
In reality, of course, fence-wreckers understand that the opposite side wants to preserve the fence, and use that to their advantage. (The rain it raineth on the just / And also on the unjust fella / But chiefly on the just, because / The unjust hath the just’s umbrella.) Alinsky understood this: you appeal to moral principles when you’re out of power, but once you gain power, you crush the people who appeal to moral principles—even the ones you yourself espoused before you got power. If you have enough power to crush your opponents, you can crush your opponents—but this represents… not quite a burning of capital, but a sick sort of investment which may or may not pay off. You may be able to crush your opponents, but it doesn’t necessarily follow that they can’t crush you. And once you’ve crushed your opponents, you can no longer use them, their skills, or their knowledge.
This is the part where I attempt to avoid performing an amygdala hijack by using the phrase ‘amygdala hijack’, and reference Atlas Shrugged: the moochers crush the capitalists, so the capitalists leave, and the moochers don’t have access to the benefits of their talents anymore so their society falls apart. It’s not a perfect analogy—it’s been a while since I read it, but I don’t think the moochers saw themselves as aligned against the capitalists. But it’s close enough; if it helps, imagine they were Communists.
There ought to be a term for the difference between considering an action in and of itself and considering it along with its game-theoretic effects, potential slippery slopes, and so on. Perhaps there already is. There also ought to be a term for cooperation-norms that seem irrational—and genuinely are irrational when each action is considered in and of itself—yet which you’re so strongly arguing for defecting from.
In the consequentialist family of ethics, there’s act consequentialism, rule consequentialism, and a concept—linked here, or possibly written up here long ago—whose name I cannot recall; I’ll call it winning consequentialism. It dictates that you consider every action according to every possible consequentialism and pick the one with the best consequences.
I think it was called plus-consequentialism in the post, or maybe n-consequentialism, but it seems to capture this.
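The selection procedure being described can be sketched mechanically: run each candidate decision procedure, collect its recommendation, then pick the recommendation with the best actual consequences. Everything here is a hypothetical toy—the two procedures and the payoff table are placeholders, not a claim about how any named consequentialism is formally defined:

```python
# Minimal sketch of 'winning consequentialism': evaluate an action set under
# several consequentialist decision procedures, then adopt whichever
# recommendation has the best consequences. Payoffs are hypothetical.

def act_consequentialism(action, payoff):
    # Judge each action by its direct payoff alone.
    return payoff[action]

def rule_consequentialism(action, payoff):
    # Crude stand-in: discount actions that would be bad as a universal rule.
    return payoff[action] * (1.0 if action == "cooperate" else 0.1)

def winning_consequentialism(actions, payoff, procedures):
    # Ask each procedure for its recommended action...
    recommendations = [max(actions, key=lambda a: proc(a, payoff))
                       for proc in procedures]
    # ...then pick the recommendation with the best direct consequences.
    return max(recommendations, key=lambda a: payoff[a])

payoff = {"cooperate": 3.0, "defect": 5.0}  # one-shot toy payoffs
best = winning_consequentialism(["cooperate", "defect"], payoff,
                                [act_consequentialism, rule_consequentialism])
```

Note the tension the sketch makes visible: the final `max` has to score recommendations by *some* measure of consequences, and whichever measure you pick smuggles one consequentialism back in at the top level—which is roughly the objection raised below.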
But your failure lies in assuming that winning consequentialism will always result in this sort of clean outcome. Less Wrong attempts to change the world not by the sword, nor by emotional appeals, nor even by base electoralism, but by comments on the Internet. Is it really the case that this is always the winning approach?
An experiment: Suppose you find yourself engaged in a struggle (any struggle) where you correctly apply winning consequentialism considering all contexts and cooperation norms and find that you should crush your enemy. What do you then do?
Your consequentialism sounds suspiciously like the opposite and I wonder how deeply you are committed to it.