If someone’s claiming “topic X is dangerous to talk about, and I’m not even going to try to convince you of the abstract decision theory implying this, because this decision theory is dangerous to talk about”, I’m not going to believe them, because that’s frankly absurd.
It’s possible to make abstract arguments that don’t reveal particular technical details, such as by referring to historical cases, or talking about hypothetical situations.
It’s also possible for Alice to convince Bob that some info is dangerous by giving the info to Carol, who is trusted by both Alice and Bob, after which Carol tells Bob how dangerous the info is.
If Alice isn’t willing to do any of these things, fine, there’s a possible but highly unlikely world where she’s right, and she takes a reputation hit due to the “unlikely” part of that sentence.
(Note: the alternative hypothesis isn't just direct selfishness; what's more likely is cliquish inner-ring dynamics.)