Since the game is over, perhaps you can share? This could be good practice in evaluating infohazard skills.
I think I was thinking:
1. The war room transcripts will leak publicly.
2. Generals can secretly DM each other, while keeping up appearances in the shared channels.
3. If a general believes that all of their communication with their team will leak, we'd be back to a unilateralist's curse situation: if a general thinks they should nuke, obviously they shouldn't say so to their team, so maybe they nuke unilaterally. (See the sketch after this list.)
4. (Not obvious whether this is an infohazard.) [Probably some true arguments about the payoff matrix and game theory increase P(mutual destruction). Also some false arguments about game theory, but maybe an infohazard warning makes those less likely to be posted too.]
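To make (3)'s mechanism concrete, here is a minimal sketch (my own illustration; the probabilities are made up, not the game's real numbers): if each of n generals decides privately and each launches with a small independent probability p, the chance that anyone launches is 1 − (1 − p)^n, which grows quickly with n.

```python
# Illustration with made-up numbers: when each general decides
# privately, the chance that *someone* launches grows quickly with
# the number of generals, even if each is individually unlikely to.

def p_any_launch(p: float, n: int) -> float:
    """P(at least one of n generals launches), each independently with probability p."""
    return 1 - (1 - p) ** n

for n in (1, 5, 10):
    print(f"n={n:2d}: P(any launch) = {p_any_launch(0.05, n):.2f}")
# n= 1: P(any launch) = 0.05
# n= 5: P(any launch) = 0.23
# n=10: P(any launch) = 0.40
```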
(Also, after I became a general I observed that I didn’t know what my “launch code” was; I was hoping the LW team forgot to give everyone launch codes and that this decreased P(nukes); saying this would cause everyone to know their launch codes and maybe scare the other team.)
I don’t think this is very relevant to real-world infohazards, because this is a game with explicit rules and because in the real world the low-hanging infohazards have been shared, but it seems relevant to mechanism design.
I thought the launch codes were just 000000, as in the example message Ben sent out. Also, I think I remember seeing that code in the LessWrong Petrov Day code.
(1) is not an infohazard because it is too obvious. The generals noticed it instantly, judging from the top of the diplomatic channel. (2) is relatively obvious. It appears to me that the generals noticed it instantly, though the first specific reference to private messages comes later. These principles are learned at school age. Making them common knowledge, known to be known, allows collaboration based on that common knowledge, and collaboration is how y’all avoided getting nuked.
To the extent that (3) is true, it would be prevented by common knowledge of (2). Also, I think it's false: a general can avoid the Unilateralist's Curse here by listening to what other people say (in the war room, the diplomatic channel, and the public discussion) and weighing that fairly before acting, potentially getting advice from family and friends. Probably this is the type of concern that can be defused by making it public. It would be bad if a general privately believed (3) and therefore nuked unilaterally.
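As a toy check of the "listen and weigh before acting" point (my own illustrative model, with assumed noise and payoffs rather than anything from the game): if nuking is actually bad but each general sees its value with independent noise, a rule where any single general acts on their own estimate launches far more often than a rule where the team acts on the average estimate.

```python
import random

# Toy unilateralist's-curse model (illustrative assumptions only):
# the true value of nuking is negative, but each of N_GENERALS sees
# it through independent Gaussian noise.
random.seed(0)
TRUE_VALUE = -1.0   # nuking is actually bad
NOISE = 2.0         # std. dev. of each general's private estimate
N_GENERALS = 5
TRIALS = 100_000

unilateral = aggregate = 0
for _ in range(TRIALS):
    estimates = [TRUE_VALUE + random.gauss(0, NOISE) for _ in range(N_GENERALS)]
    if max(estimates) > 0:               # unilateral rule: anyone may act alone
        unilateral += 1
    if sum(estimates) / N_GENERALS > 0:  # aggregate rule: act on the average
        aggregate += 1

print(f"P(launch | unilateral rule): {unilateral / TRIALS:.2f}")  # roughly 0.84
print(f"P(launch | aggregate rule):  {aggregate / TRIALS:.2f}")   # roughly 0.13
```

The exact numbers depend entirely on the assumed noise and payoff; the point is only the direction of the gap.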
(4) is too vague for my purposes here.
I agree that “I’m a general and I don’t know my launch code” is a possible infohazard if posted publicly. I would have shared that knowledge with my team, to reduce the risk of weakened deterrence in the possible world where the LessWrong admins mistakenly sent launch codes to only one side, taking note of (1) and (2) in how I shared it.
I don’t think this is relevant to real-world infohazards, but I think it is relevant to building and testing transferable infohazard skills. People who believe they have been or will be exposed to existential infohazards should build and test their skills in safer environments.