The general way to get around the infohazard is to use either historic examples or examples of other cultures and societal contexts.
You can say that saying “Democracy is good” can cause harm because it can motivate people to act politically in totalitarian societies, where they will get punished for it.
Butterflies famously can cause a lot of harm by flapping their wings as well. That seems to be a way words can cause harm as well.
I think most people can agree that in both of those examples the actions can lead to causal chains that result in harm. The problem is that you play motte and bailey and equate “words can lead to causal chains that produce harm” with “interlocutors should be more careful with their words”. That leads to avoiding any of the cruxes that might come up with “interlocutors should be more careful with their words”.
I generally think of “harm should be avoided whenever possible” as morally foundational. (Although it certainly isn’t the only possible basis for a moral system, it seems really common). If “words can lead to causal chains that produce harm”, then it follows directly that “interlocutors should be careful with their words so as to avoid accidental harm”, does it not? I’ll own that I didn’t make that link explicitly, though. Thanks for pointing out the gap (and the blind spot).
As for the motte and bailey, I’m not sure where you’re getting that. In the introduction, I lay out the argument I’m defending against clearly, and you can see it repeated elsewhere in the comments. When I state that we should be more careful with our words, it is met with “words can’t cause harm, that would be magic”.
I originally had a longer comment, but I’m afraid of getting embroiled in this, so here’s a short-ish comment instead. Also, I recognize that there’s more interpretive labor I could do here, but I figure it’s better to say something non-optimal than to say nothing.
I’m guessing you don’t mean “harm should be avoided whenever possible” literally. Here’s why: if we take it literally, then it seems to imply that you should never say anything, since anything you say has some possibility of leading to a causal chain that produces harm. And I’m guessing you don’t want to say that. (Related is the discussion of the “paralysis argument” in this interview: https://80000hours.org/podcast/episodes/will-macaskill-paralysis-and-hinge-of-history/#the-paralysis-argument-01542)
I think this is part of what’s behind Christian’s comment. If we don’t want to be completely mute, then we are going to take some non-zero risk of harming someone sometime to some degree. So then the argument becomes about how much risk we should take. And if we’re already at roughly the optimal level of risk, then it’s not right to say that interlocutors should be more careful (to be clear, I am not claiming that we are at the optimal level of risk). So arguing that there’s always some risk isn’t enough to argue that interlocutors should be more careful—you also have to argue that the current norms don’t prescribe the optimal level of risk already, they permit us to take more risk than we should. There is no way to avoid the tradeoff here, the question is where the tradeoff should be made.
[EDIT: So while Stuart Anderson does indeed simply repeat the argument you (successfully) refute in the post, Christian, if I’m reading him right, is making a different argument, and saying that your original argument doesn’t get us all the way from “words can cause harm” to “interlocutors should be more careful with their words.”
You want to argue that interlocutors should be more careful with their words [EDIT: kithpendragon clarifies below that they aren’t aiming to do that, at least in this post]. You see some people (e.g. Stuart Anderson, and the people you allude to at the beginning), making the following sort of argument:
1. Words can’t cause harm.
2. Therefore, people don’t need to be careful with their words.
You successfully refute (1) in the post. But this doesn’t get us to “people do need to be careful with their words” since the following sort of argument is also available:
A. Words don’t have a high enough probability of causing enough harm to enough people that people need to be any more careful with them than they’re already being.
B. Therefore, people don’t need to be careful with their words (at least, not any more than they already are). [EDIT: list formatting]]
One way of dealing with this is stuff like talking to people in person: with a small group of people the harm seems bounded, which allows for more iteration, as well as perhaps specializing—“what will harm this group? What will not harm this group?”—in ways that might be harder with a larger group. Notably, this may require back and forth, rather than one-way communication. For example:
I might say: “I’m okay with abstract examples involving nukes—for example, ‘spreading schematics for nukes enables their creation, and thus may cause harm, thus words can cause harm.’ (Spreading related knowledge may also enable nuclear reactors, which may be useful ‘environmentally’ and on, say, missions to Mars—high usable energy density per unit of weight may be an important metric when there’s a high cost associated with weight.)”
Also, no one else seems to have used the spoilers in the comments at all. I think this is suboptimal given that moderation is not a magic process, although it seems to have turned out fine so far.
Yes, I’d agree with all that. My goal was to counter the argument that words can’t cause harm. I keep seeing that argument in the wild.
Thanks for helping to clarify!
Sorry for the long edit to my comment, I was editing while you posted your comment. Anyway, if your goal wasn’t to go all the way to “people need to be more careful with their words” in this post, then fair enough.
I was thinking a bit more about why Christian might have posted his comment, and why the post (cards on the table) got my hackles up the way it did, and I think it might have to do with the lengths you go to in order to avoid using any examples. Even though you aren’t trying to argue for the thesis that we should be more careful, because of the way the post was written, you seem to believe that we should be much more careful about this sort of thing than we usually are. (Perhaps you don’t think this; perhaps you think that the level of caution you went to in this post is normal, given that giving examples would be basically optimizing for producing a list of “words that cause harm.” But I think it’s easy to interpret this strategy as implicitly claiming that people should be much more careful than they are, and miss the fact that you aren’t explicitly trying to give a full defense of that thesis in this post.)
That’s a really helpful (and, I think, quite correct) observation. I’m not usually quite so careful as all that. This seemed like something it would be really easy to get wrong.
By your logic I should be careful when interacting with butterflies because of the hurricanes they cause through causal chains.
I would say that the weather is probably next-to-never unstable enough for that to actually happen, despite its fame. If I thought otherwise, I would never have even tried to write and post comments, much less essays.
Weather inherently isn’t stable. Wikipedia writes about the butterfly effect: “The butterfly effect is most familiar in terms of weather; it can easily be demonstrated in standard weather prediction models, for example.”
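The claim that sensitivity to initial conditions is easy to demonstrate in models can be illustrated with a toy example. Below is a minimal sketch in Python using the logistic map at r = 4, a standard chaotic system (my choice of illustration, not a weather model): two trajectories that start 1e-10 apart diverge to order 1 within a few dozen steps.

```python
# Sketch: sensitive dependence on initial conditions in the logistic
# map x -> r*x*(1-x) with r = 4 (a standard chaotic toy system).
# This illustrates the general phenomenon; it is not a weather model.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10  # two trajectories, initially almost identical
diffs = []
for step in range(60):
    x, y = logistic(x), logistic(y)
    diffs.append(abs(x - y))

# The gap starts microscopic but grows roughly exponentially until it
# saturates at the size of the attractor itself (order 1).
print(f"after 1 step: {diffs[0]:.2e}")
print(f"largest gap:  {max(diffs):.2f}")
```

Nothing here shows that a real flap determines a real hurricane, of course; it only shows that “tiny input, macroscopic divergence” is the generic behavior of chaotic systems.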
It seems that, given how the physics of our world works, you might have to refine your ethical system or be okay with constantly violating it.
Yes, some prediction models are extremely sensitive to initial conditions. But I doubt very much if a flap or even a sneeze can actually be the key thing that determines Hurricane or Not Hurricane in real life. The weather system would have to not only be extremely unstable, but in just the right way for that input to be relevant at such a scale.
You should still be careful with butterflies, though. They’re a bit fragile.
If everything else is held constant, then a flap is what determines whether a particular hurricane (that exists far enough in the future) happens or not. There’s a causal chain between the flap and the hurricane.
If you care about something besides the causal chain when defining some notion of “key thing”, you actually have to say what you mean by “key thing”.
A sneeze can determine much more than hurricane/no hurricane. It can determine the identities of everyone who exists, say, a few hundred years into the future and onwards.
If you’re not already familiar, this argument gets made all the time in debates about “consequentialist cluelessness”. This gets discussed, among other places, in this interview with Hilary Greaves: https://80000hours.org/podcast/episodes/hilary-greaves-global-priorities-institute/. It’s also related to the paralysis argument I mentioned in my other comment.