I originally had a longer comment, but I’m afraid of getting embroiled in this, so here’s a short-ish comment instead. Also, I recognize that there’s more interpretive labor I could do here, but I figure it’s better to say something non-optimal than to say nothing.
I’m guessing you don’t mean “harm should be avoided whenever possible” literally. Here’s why: if we take it literally, then it seems to imply that you should never say anything, since anything you say has some possibility of leading to a causal chain that produces harm. And I’m guessing you don’t want to say that. (Related is the discussion of the “paralysis argument” in this interview: https://80000hours.org/podcast/episodes/will-macaskill-paralysis-and-hinge-of-history/#the-paralysis-argument-01542)
I think this is part of what’s behind Christian’s comment. If we don’t want to be completely mute, then we are going to take some non-zero risk of harming someone sometime to some degree. So then the argument becomes about how much risk we should take. And if we’re already at roughly the optimal level of risk, then it’s not right to say that interlocutors should be more careful (to be clear, I am not claiming that we are at the optimal level of risk). So arguing that there’s always some risk isn’t enough to argue that interlocutors should be more careful; you also have to argue that the current norms don’t already prescribe the optimal level of risk, that they permit us to take more risk than we should. There is no way to avoid the tradeoff here; the question is where the tradeoff should be made.
[EDIT: So while Stuart Anderson does indeed simply repeat the argument you (successfully) refute in the post, Christian (if I’m reading him right) is making a different argument: that your original argument doesn’t get us all the way from “words can cause harm” to “interlocutors should be more careful with their words.”
You want to argue that interlocutors should be more careful with their words [EDIT: kithpendragon clarifies below that they aren’t aiming to do that, at least in this post]. You see some people (e.g., Stuart Anderson, and the people you allude to at the beginning) making the following sort of argument:
1. Words can’t cause harm.
2. Therefore, people don’t need to be careful with their words.
You successfully refute (1) in the post. But this doesn’t get us to “people do need to be careful with their words” since the following sort of argument is also available:
A. Words don’t have a high enough probability of causing enough harm to enough people that people need to be any more careful with them than they’re already being.
B. Therefore, people don’t need to be careful with their words (at least, not any more than they already are). [EDIT: list formatting]]
“I think this is part of what’s behind Christian’s comment. If we don’t want to be completely mute, then we are going to take some non-zero risk of harming someone sometime to some degree.”
One way of dealing with this is stuff like talking to people in person: with a small group of people the harm seems bounded, which allows for more iteration, as well as perhaps specializing (“what will harm this group? what will not harm this group?”) in ways that might be harder with a larger group. Notably, this may require back-and-forth rather than one-way communication. For example:
I might say: “I’m okay with abstract examples involving nukes, for example ‘spreading schematics for nukes enables their creation, and thus may cause harm, so words can cause harm.’ (Spreading related knowledge may also enable nuclear reactors, which may be useful ‘environmentally’ and on, say, missions to Mars: high usable energy density per unit of weight may be an important metric when there’s a high cost associated with weight.)”
Also, no one else seems to have used spoiler tags in the comments at all. I think this is suboptimal given that moderation is not a magic process, although it seems to have turned out fine so far.
Yes, I’d agree with all that. My goal was to counter the argument that words can’t cause harm. I keep seeing that argument in the wild.
Thanks for helping to clarify!
Sorry for the long edit to my comment; I was editing while you posted your comment. Anyway, if your goal wasn’t to go all the way to “people need to be more careful with their words” in this post, then fair enough.
I was thinking a bit more about why Christian might have posted his comment, and why the post (cards on the table) got my hackles up the way it did, and I think it might have to do with the lengths you go to in order to avoid using any examples. Even though you aren’t trying to argue for the thesis that we should be more careful, the way the post was written makes it seem like you believe we should be much more careful about this sort of thing than we usually are. (Perhaps you don’t think this; perhaps you think the level of caution you exercised in this post is normal, given that giving examples would basically amount to optimizing for a list of “words that cause harm.” But I think it’s easy to interpret this strategy as implicitly claiming that people should be much more careful than they are, and to miss the fact that you aren’t explicitly trying to give a full defense of that thesis in this post.)
That’s a really helpful (and, I think, quite correct) observation. I’m not usually quite so careful as all that. This seemed like something it would be really easy to get wrong.