I think this is part of what’s behind Christian’s comment. If we don’t want to be completely mute, then we are going to take some non-zero risk of harming someone sometime to some degree.
One way of dealing with this is talking to people in person: with a small group, the harm seems bounded, which allows for more iteration, and perhaps for specializing—"what will harm this group? What will not harm this group?"—in ways that might be harder with a larger group. Notably, this may require back and forth, rather than one-way communication. For example,
I might say "I'm okay with abstract examples involving nukes—for example, 'spreading schematics for nukes enables their creation, and thus may cause harm; thus words can cause harm'. (Spreading related knowledge may also enable nuclear reactors, which may be useful environmentally and on, say, missions to Mars—high usable energy density per unit of weight may be an important metric when there's a high cost associated with weight.)"
Also, no one else seems to have used spoilers in the comments at all. I think this is suboptimal, given that moderation is not a magic process, although it seems to have turned out fine so far.