Some true observations are infohazards, making destruction more likely. Please think carefully before posting observations, even if you feel clever. You can post hashes here instead, to reveal later how clever you were, if you need to.
LOOSE LIPS SINK SHIPS
I assume that this is primarily directed at me for this comment, but if so, I strongly disagree.
Security by obscurity does not in fact work well. I do not think it is realistic to hope that none of the ten generals look at the incentives they’ve been given and notice that their reward for nuking is 3x their penalty for being nuked. I do think it’s realistic to make sure it is common knowledge that the generals’ incentives are drastically misaligned with the citizens’ incentives, and to try to do something about that.
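To make the misalignment concrete, here’s a minimal sketch with made-up numbers. Only the 3:1 ratio comes from the setup described above; the specific payoff values, the additive-payoff model, and the citizen payoff are my own illustrative assumptions, not the actual game’s rules.

```python
# Minimal sketch of the incentive misalignment: made-up numbers that only
# preserve the stated 3:1 ratio (reward for nuking = 3x penalty for being nuked).
# The additive-payoff model and the citizen payoff are assumptions, not the game's rules.

REWARD_FOR_NUKING = 3        # hypothetical units for a general whose side launches
PENALTY_FOR_BEING_NUKED = 1  # hypothetical units lost when your side is hit
CITIZEN_PENALTY = 1          # citizens only ever lose when either side launches

def general_payoff(we_nuke: bool, they_nuke: bool) -> int:
    """Selfish general's payoff, assuming rewards and penalties simply add."""
    return (REWARD_FOR_NUKING if we_nuke else 0) - (PENALTY_FOR_BEING_NUKED if they_nuke else 0)

def citizen_payoff(we_nuke: bool, they_nuke: bool) -> int:
    """Citizens gain nothing from launching and lose if anyone launches."""
    return -CITIZEN_PENALTY if (we_nuke or they_nuke) else 0

for we in (False, True):
    for they in (False, True):
        print(f"we_nuke={we!s:<5} they_nuke={they!s:<5} "
              f"general={general_payoff(we, they):+d} citizen={citizen_payoff(we, they):+d}")

# Under these assumptions a general nets +2 even from mutual destruction,
# while citizens are strictly worse off: the misalignment described above.
```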
(Honestly, I think I disagree with almost all uses of the word ‘infohazard’ on LW. I enjoy SCP stories as much as the next LW-er, but I think the real-world prevalence of infohazards is orders of magnitude lower than that usage implies.)
No. I noticed ~2 more subtle infohazards, was hoping nobody would post them, and realized I could decrease that probability by posting an infohazard warning.
I ask that you refrain from being the reason that security-by-obscurity fails, if you notice subtle infohazards.
Since the game is over perhaps you can share? This could be good practice in evaluating infohazard skills.
I think I was thinking:
1. The war room transcripts will leak publicly.
2. Generals can secretly DM each other, while keeping up appearances in the shared channels.
3. If a general believes that all of their communication with their team will leak, we’d be back to a unilateralist’s curse situation: if a general thinks they should nuke, obviously they shouldn’t say that to their team, so maybe they nuke unilaterally. (Not obvious whether this is an infohazard.)
4. [Probably some true arguments about the payoff matrix and game theory increase P(mutual destruction). Also some false arguments about game theory, but maybe an infohazard warning makes those less likely to be posted too.]

(Also, after I became a general I observed that I didn’t know what my “launch code” was; I was hoping the LW team forgot to give everyone launch codes, and this decreased P(nukes); saying this would cause everyone to know their launch codes and maybe scare the other team.)
I don’t think this is very relevant to real-world infohazards, because this is a game with explicit rules and because in the real world the low-hanging infohazards have been shared, but it seems relevant to mechanism design.
I thought the launch codes were just 000000, as in the example message Ben sent out. Also, I think I remember seeing that code in the Petrov Day LessWrong code.
(1) is not an infohazard because it is too obvious. The generals noticed it instantly, judging from the top of the diplomatic channel. (2) is relatively obvious. It appears to me that the generals noticed it instantly, though the first specific reference to private messages comes later. These principles are learned at school age. Making them common knowledge, known to be known, allows collaboration based on that common knowledge, and collaboration is how y’all avoided getting nuked.
To the extent that (3) is true, it would be prevented by common knowledge of (2). Also, I think it’s false: a general can avoid the Unilateralist’s Curse here by listening to what other people say (in the war room, the diplomatic channel, and the public discussion) and weighing that fairly before acting, potentially getting advice from family and friends. Probably this is the type of concern that can be defused by making it public. It would be bad if a general privately believed (3) and therefore nuked unilaterally.
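As a rough illustration of why listening first helps, here’s a small Monte Carlo sketch. The noise model, the numbers, and the assumption that launching is in fact bad are all illustrative choices of mine, not anything from the game.

```python
# Rough Monte Carlo sketch of the unilateralist's-curse point above: if each of
# N generals launches whenever their own noisy read says launching is good,
# someone launches far more often than if they pool their reads first.
# The noise model and numbers are illustrative assumptions, not the game's rules.
import random

N_GENERALS = 5
TRUE_VALUE = -1.0   # assume launching is actually bad
NOISE = 2.0         # each general's private estimate is the true value plus noise
TRIALS = 100_000

unilateral_launches = pooled_launches = 0
for _ in range(TRIALS):
    estimates = [TRUE_VALUE + random.gauss(0, NOISE) for _ in range(N_GENERALS)]
    if any(e > 0 for e in estimates):        # each acts on their own read
        unilateral_launches += 1
    if sum(estimates) / N_GENERALS > 0:      # act only if the pooled read is positive
        pooled_launches += 1

print(f"P(launch), acting unilaterally: {unilateral_launches / TRIALS:.2f}")
print(f"P(launch), pooling estimates:   {pooled_launches / TRIALS:.2f}")
```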
(4) is too vague for my purposes here.
I agree that “I’m a general and I don’t know my launch code” is a possible infohazard if posted publicly. I would have shared the knowledge with my team to reduce the risk of weakened deterrence in the possible world where the LessWrong admins mistakenly only sent launch codes to one side, taking note of (1) and (2) in how I shared it.
I don’t think this is relevant to real-world infohazards, but I think it’s relevant to building and testing transferrable infohazard skills. People who believe they have been or will be exposed to existential infohazards should build and test their skills in safer environments.
I agree with this.
In my very limited experience (which is mostly board games with some social situations thrown in), attempts to obscure publicly discernible information in order to influence other people’s actions are often extremely counter-productive. If you don’t give people the full picture, the most likely outcome is not that they discover nothing, but that they discover half the picture, and you don’t know in advance which half. This makes them extremely unpredictable. You want them to pick A in preference to B, but the half-picture they get drives them to pick C, which is massively worse for everyone.
In board games I have played, if a somewhat prisoner’s-dilemma-like situation arises, you are much more likely to get stung by someone who has misunderstood the rules or the equilibrium than by someone who knows what is going on. [As a concrete example, in the game Scythe a new player believed that they got mission-completion points for each military victory, not just the first one. As they had already scored a victory, another player reasoned they wouldn’t make a pointless attack. But they did make the pointless attack. It set them and their target back, giving the two players not involved in that battle a relative advantage.]
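Here’s a toy version of that failure mode, with invented payoffs (nothing here is the real Scythe scoring): the attack is the best move under the confused player’s model of the rules, and costly for everyone under the real ones.

```python
# Toy version of the story above: a player picks whatever their (possibly
# mistaken) payoff model says is best, and everyone scores under the true
# payoffs. All numbers are invented just to show the shape of the failure.

TRUE_PAYOFFS = {            # (attacker delta, defender delta) under the real rules
    "attack": (-2, -3),     # the extra attack scores nothing and costs both sides
    "pass":   (0, 0),
}
BELIEVED_PAYOFFS = {        # what the confused player thinks the rules say
    "attack": (3, -3),      # "I get points for every military victory"
    "pass":   (0, 0),
}

def choose(payoff_model):
    # Pick the action that looks best to the attacker under their own model.
    return max(payoff_model, key=lambda action: payoff_model[action][0])

action = choose(BELIEVED_PAYOFFS)
attacker_delta, defender_delta = TRUE_PAYOFFS[action]
print(f"chosen action: {action}")
print(f"actual result: attacker {attacker_delta:+d}, defender {defender_delta:+d}")
# The attack is "rational" under the wrong model, and the two uninvolved players
# gain a relative advantage: you got stung by a misunderstanding, not by strategy.
```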
“The best swordsman does not fear the second best, he fears the worst since there’s no telling what that idiot is going to do.” [https://freakonomics.com/2011/10/rules-of-the-game/#:~:text=%E2%80%9CThe%20best%20swordsman%20does%20not,can%20beat%20smartness%20and%20foresight%3F]
This best swordsman wants more people to know how to sword fight, not fewer.