Some true observations are infohazards, making destruction more likely. Please think carefully before posting observations. Even if you feel clever. You can post hashes here instead, to later reveal how clever you were, if you need to.
LOOSE LIPS SINK SHIPS
I assume that this is primarily directed at me for this comment, but if so, I strongly disagree.
Security by obscurity does not in fact work well. I do not think it is realistic to hope that none of the ten generals look at the incentives they’ve been given and notice that their reward for nuking is 3x their penalty for being nuked. I do think it’s realistic to make sure it is common knowledge that the generals’ incentives are drastically misaligned with the citizens’ incentives, and to try to do something about that.
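To make the incentive asymmetry concrete, here is a toy expected-value sketch. The 3x ratio comes from the comment above; the actual point values and the retaliation probability are invented for illustration, not taken from the game's rules.

```python
# Toy model of a single general's selfish payoff, assuming (hypothetically)
# a reward of 300 for nuking and a penalty of 100 for being nuked,
# mirroring the stated 3x reward-to-penalty ratio.
def ev_of_launching(reward=300, penalty=100, p_retaliation=0.5):
    """Expected value of launching, for a general who counts only
    their own payoff. p_retaliation is an invented illustrative number."""
    return reward - p_retaliation * penalty

# Even if retaliation is certain, launching still looks net-positive
# to a purely selfish general under this payoff structure:
print(ev_of_launching(p_retaliation=1.0))  # 300 - 100 = 200 > 0
```

Under these made-up numbers, no realistic retaliation probability makes launching negative-EV for the general, which is exactly the misalignment with the citizens' incentives described above.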
(Honestly I think that I disagree with almost all uses of the word ‘infohazard’ on LW. I enjoy SCP stories as much as the next LW-er, but I think that the real-world prevalence of infohazards is orders of magnitude lower).
No. I noticed ~2 more subtle infohazards, was wishing for nobody to post them, and realized I could decrease that probability by posting an infohazard warning.
If you notice subtle infohazards, I ask that you refrain from being the reason that security-by-obscurity fails.
Since the game is over perhaps you can share? This could be good practice in evaluating infohazard skills.
I think I was thinking:
The war room transcripts will leak publicly
Generals can secretly DM each other, while keeping up appearances in the shared channels
If a general believes that all of their communication with their team will leak, we'd be back in a unilateralist's curse situation: if a general thinks they should nuke, they obviously shouldn't say so to their team, so maybe they nuke unilaterally
(Not obvious whether this is an infohazard)
[Probably some true arguments about the payoff matrix and game theory increase P(mutual destruction). Also some false arguments about game theory — but maybe an infohazard warning makes those less likely to be posted too.]
(Also, after I became a general I observed that I didn't know what my “launch code” was; I was hoping the LW team forgot to give everyone launch codes, and this decreased P(nukes); saying this would cause everyone to know their launch codes and maybe scare the other team.)
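The unilateralist's-curse point above can be quantified: even if each general is individually unlikely to launch, the chance that at least one of them does grows quickly with the number of independent actors. The probability and the count below are illustrative assumptions, not numbers from the game.

```python
# If each of n generals independently launches with probability p,
# the probability that at least one launch happens is 1 - (1 - p)^n.
# p and n below are invented for illustration.
def p_any_launch(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# Ten generals, each with only a 5% individual launch probability,
# still give roughly a 40% chance of at least one launch:
print(round(p_any_launch(0.05, 10), 3))  # 0.401
```

This is why driving would-be launchers into private, unilateral deliberation can be worse than letting them talk openly.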
I don’t think this is very relevant to real-world infohazards, because this is a game with explicit rules and because in the real world the low-hanging infohazards have been shared, but it seems relevant to mechanism design.
I thought the launch codes were just 000000, as in the example message Ben sent out. Also, I think I remember seeing that code in the LessWrong Petrov Day code.
I agree with this.
In my very limited experience (which is mostly board games with some social situations thrown in), attempts to obscure publicly discernible information to influence other people’s actions are often extremely counter-productive. If you don’t give people the full picture, then the most likely case is not that they discover nothing, but that they discover half the picture. And you don’t know in advance which half. This makes them extremely unpredictable. You want them to pick A in preference to B, but the half-picture they get drives them to pick C, which is massively worse for everyone.
In board games I have played, if a somewhat prisoner’s-dilemma-like situation arises, you are much more likely to get stung by someone who has misunderstood the rules or the equilibrium than by someone who knows what is going on. [As a concrete example, in the game Scythe a new player believed that they got mission-completion points for each military victory, not just the first one. As they had already scored a victory, another player reasoned that they wouldn’t make a pointless attack. But they did make the pointless attack. It set them and their target back, giving the two players not involved in that battle a relative advantage.]
“The best swordsman does not fear the second best, he fears the worst since there’s no telling what that idiot is going to do.” [https://freakonomics.com/2011/10/rules-of-the-game/#:~:text=%E2%80%9CThe%20best%20swordsman%20does%20not,can%20beat%20smartness%20and%20foresight%3F]
This best swordsman wants more people to know how to sword fight, not fewer.