Another way this matters: Offense-takers largely get their intuitions about “will taking offense achieve my goals” from experience in a wide variety of settings, not from LessWrong specifically. Yes, theoretically, the optimal strategy is for them to estimate “will taking offense specifically against LessWrong achieve my goals”, but most actors simply aren’t paying enough attention to form a target-by-target estimate. Viewing this as a simple game-theory textbook problem might lead you to think that adjusting our behavior to avoid punishment would lead to an equal number of future threats of punishment against us and would therefore be pointless, when actually it would instead lead to future threats of punishment against some other entity that we shouldn’t care much about, like, I don’t know, fricking Sargon of Akkad.
I agree that offense-takers are calibrated against Society-in-general, not particular targets.
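To make that concrete, here’s a toy simulation (all parameters invented) contrasting the textbook model, where a threatener keeps a separate profitability estimate for each target, with the calibrated-on-Society model, where they keep one pooled estimate across all their encounters:

```python
import random

def threats_against_us(per_target_learning, we_capitulate,
                       n_targets=1000, rounds=100_000, seed=0):
    """Toy model: each round a threatener picks a random target and takes
    offense iff their estimated probability of a payoff clears a threshold.
    Everyone else capitulates at a 50% base rate; target 0 is "us"."""
    rng = random.Random(seed)
    if per_target_learning:   # textbook model: one estimate per target
        wins, tries = [1] * n_targets, [2] * n_targets
    else:                     # calibrated-on-Society model: one pooled estimate
        wins, tries = 1, 2
    count = 0
    for _ in range(rounds):
        t = rng.randrange(n_targets)
        est = wins[t] / tries[t] if per_target_learning else wins / tries
        if est < 0.3:         # not worth the effort of taking offense
            continue
        paid = we_capitulate if t == 0 else rng.random() < 0.5
        if t == 0:
            count += 1
        if per_target_learning:
            wins[t] += paid; tries[t] += 1
        else:
            wins += paid; tries += 1
    return count

for learning in (True, False):
    for capitulate in (True, False):
        print(f"per-target={learning!s:5} we-pay={capitulate!s:5} "
              f"threats against us: {threats_against_us(learning, capitulate)}")
```

Under per-target learning, our policy is decisive: refusing drives threats against us to near zero, while capitulating invites them every time we’re picked. Under pooled learning, one target out of a thousand barely moves the estimate, so we get threatened at the same rate whether we pay or not, and whatever our capitulation “teaches” the threatener lands on the other 999 targets.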
As a less-political problem with similar structure, consider ransomware attacks. If an attacker encrypts your business’s files and will sell you the decryption key for 10 Bitcoins, do you pay (in order to get your files back, as common sense and causal decision theory agree), or do you not-pay (as a galaxy-brained updateless-decision-theory play to timelessly make writing ransomware less profitable, even though that doesn’t help the copy of you in this timeline)?
It’s a tough call! If your business’s files are sufficiently important, then I can definitely see why you’d want to pay! But if someone were to try to portray the act of paying as pro-social, that would be pretty weird. If your Society knew how, law-abiding citizens would prefer to coordinate not to pay attackers, which is why the U.S. Treasury Department is cracking down on facilitating ransomware payments. But if that’s not an option …
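To spell out why it’s a tough call, here’s the expected-value arithmetic under invented numbers; the asymmetry between the one-shot decision and the policy-level decision is the whole puzzle:

```python
# All numbers invented for illustration; units are arbitrary (call them BTC).
FILES_VALUE = 50.0   # what recovering the files is worth to the business
RANSOM = 10.0        # the attacker's asking price
ATTACK_COST = 1.0    # the attacker's cost to mount one attack
P_PAY = 0.5          # fraction of victims who pay, as attackers observe it

# Causal decision theory: the attack already happened; compare payoffs now.
cdt_pay = FILES_VALUE - RANSOM   # +40.0: pay and recover the files
cdt_refuse = 0.0                 #   0.0: refuse and eat the loss
# CDT says pay whenever FILES_VALUE > RANSOM.

# Updateless framing: evaluate the *policy*, not the single decision.
# Attacks keep coming as long as they're profitable in expectation:
attacker_ev_now = P_PAY * RANSOM - ATTACK_COST           # +4.0: attacks continue
attacker_ev_if_no_one_pays = 0.0 * RANSOM - ATTACK_COST  # -1.0: attacks stop
# But one business's refusal moves P_PAY by about one victim in thousands,
# which is the calibration point above: the policy-level logic only pays off
# if Society can coordinate on it.
```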
our behavior [...] punishment against us [...] some other entity that we shouldn’t care much about
If coordinating to resist extortion isn’t an option, that makes me very interested in trying to minimize the extent to which there is a collective “us”. “We” should be emphasizing that rationality is a subject matter that anyone can study, rather than trying to get people to join our robot cult and be subject to the commands and PR concerns of our leaders. Hopefully that way, people playing a sneaky consequentialist image-management strategy and people playing a Just Get The Goddamned Right Answer strategy can at least avoid being at each other’s throats fighting over who owns the “rationalist” brand name.