I’m not sure people can voluntarily self-modify in this way. Even if it’s possible, I don’t think most real people getting offended by real issues are primarily doing this.
Voluntary self-modification also requires a pre-existing desire to self-modify. I wouldn’t take a pill that made me want to initiate suicide attacks on people who insulted the prophet Mohammed, because I don’t really care if people insult the prophet Mohammed enough to want to die in a suicide attack defending him. The only point at which I would take such a pill is if I already cared enough about the honor of Mohammed that I was willing to die for him. Since people have risked their lives and earned lots of prison time protesting the Mohammed cartoons, even before they started any self-modification they must have had strong feelings about the issue.
If X doesn’t offend you, why would you self-modify to make X offend you in order to stop people from doing X, since X doesn’t offend you? I think you might be thinking of attempts to create in-group cohesion and signal loyalty by uniting against a common “offensive” enemy, something that I agree is common. But these attempts cannot be phrased in the consequentialist manner I suggested earlier and still work—they depend on a “we are all good, the other guy is all evil” mentality.
Thus, someone who responded with a cost/benefit calculation to all respectful and reasonable demands to stop offending, but continued getting touchy about disrespectful blame-based demands to stop offending, would be pretty hard to game.
One difference between this post and the original essay I wrote, which more people liked, was that the original made it clearer that this was more advice for how people who were offended should communicate their displeasure, and less advice for whether people accused of offense should stop. Even if you don’t like the latter part, I think the advice for the former might still be useful.
If X doesn’t offend you, why would you self-modify to make X offend you in order to stop people from doing X, since X doesn’t offend you?
It’s a Schellingian idea: in conflict situations, it is often a rational strategy to pre-commit to act irrationally (i.e. without regard to cost and benefit) unless the opponent yields. The idea in this case is that I’ll self-modify to care about X far more than I initially do, and thus pre-commit to lash out if anyone does X.
If we have a dispute and I credibly signal that I’m going to flip out and create drama out of all proportion to the issue at stake, you’re faced with a choice between conceding to my demands and getting into an unpleasant situation that will cost more than the matter of dispute is worth. I’m sure you can think of many examples where people successfully get the upper hand in disputes using this strategy. The only way to disincentivize such behavior is to pre-commit credibly to be defiant in the face of threats of drama. In contrast, if you act like a (naive) utilitarian, you are exceptionally vulnerable to this strategy, since I don’t even need drama to get what I want if I can self-modify to care tremendously about every single thing I want. (Which I won’t do if I’m a good naive utilitarian myself, but the whole point is that it’s not a stable strategy.)
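To make the payoff logic concrete, here is a minimal Python sketch of the game being described. The payoff numbers, the policy names, and the assumption that a defied threat is always carried out are illustrative assumptions, not anything taken from the discussion itself.

```python
# Minimal sketch of the drama-threat game described above.  Only the ordering
# of the numbers matters: the disputed stake is small, the threatened drama
# is costly to whoever gets dragged into it.

STAKE = 1         # value of winning the dispute
DRAMA_COST = 5    # cost of the blow-up if it happens

def target_choice(policy):
    """How the target of a credible drama threat responds."""
    if policy == "case_by_case":
        # Naive cost-benefit on this dispute alone: conceding loses the stake,
        # defying triggers drama that costs more, so concede.
        return "concede" if DRAMA_COST > STAKE else "defy"
    if policy == "precommitted_defiance":
        # The commitment deliberately ignores the local cost-benefit comparison.
        return "defy"
    raise ValueError(policy)

def threatener_payoff(policy):
    """Payoff to the person making the threat, assuming the threat is carried
    out whenever the target defies (that is what the pre-commitment means)."""
    if target_choice(policy) == "concede":
        return STAKE          # wins the stake at no cost
    return -DRAMA_COST        # drama happens; the threat did not pay

for policy in ("case_by_case", "precommitted_defiance"):
    print(f"{policy} -> {threatener_payoff(policy):+d}")

# case_by_case          -> +1  (threats pay, so expect more of them)
# precommitted_defiance -> -5  (threats do not pay, so they stop)
```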
Now, the key point is that such behavior is usually not consciously manipulative and calculated. On the contrary—someone flipping out and creating drama for a seemingly trivial reason is likely to be under God-honest severe distress, feeling genuine pain of offense and injustice. This is a common pattern in human social behavior: humans are extremely good at detecting faked emotions and conscious manipulation, and as a result, we have evolved so that our brains lash out with honest strong emotion that is nevertheless directed by some module that performs game-theoretic assessment of the situation. This of course prompts strategic responses from others, leading to a strategic arms race without end.
The further crucial point is that these game-theoretic calculators in our brains are usually smart enough to assess whether the flipping out strategy is likely to be successful, given what might be expected in response. Basically, it is a part of the human brain that responds to rational incentives even though it’s not under the control of the conscious mind. With this in mind, you can resolve the seeming contradiction between the sincerity of the pain of offense and the fact that it responds to rational incentives.
All this is somewhat complicated when we consider issues of group conflict rather than individual conflict, but the same basic principles apply.
Do you have strategies for distinguishing between game theoretic exaggeration of offense vs. natural offense?
The question is better phrased by asking what will be the practical consequences of treating an offense as legitimate and ceasing the offending action (and perhaps also apologizing) versus treating it as illegitimate and standing your ground (and perhaps even escalating). Clearly, this is a difficult question of great practical value in life, and like every such question, it’s impossible to give a simple and universally applicable answer. (And of course, even if you know the answer in some concrete situation, you’ll need extraordinary composure and self-control to apply it if it’s contrary to your instinctive reaction.)
I don’t see the distinction you’re trying to make.
Tentatively—game theoretic exaggeration of offense will simply be followed by more and more demands. Natural offense is about a desire that can be satiated.
However, there’s another sort of breakdown of negotiations that just occurred to me. If A asks for less than they want, because they think that’s all they can get and/or because they’re trying to do a utilitarian calculation, they aren’t going to be happy even if they get it. That means they’re likely to keep pushing for more, and then they start looking like a utility monster.
Tentatively—game theoretic exaggeration of offense will simply be followed by more and more demands. Natural offense is about a desire that can be satiated.
What do you mean by “satiated”?
From a utilitarian/consequentialist point of view, a desire being “satiated” simply means that the marginal utility gains from pursuing it further are less than the opportunity cost of however much effort it takes.
Note that by this definition, the point at which a desire counts as satiated depends on how easy it is to pursue.
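As a small sketch, that definition can be written as a stopping rule; the utility function and the cost numbers below are invented purely for illustration.

```python
# Sketch of "satiated" as defined above: keep pursuing a desire only while the
# marginal utility of one more unit exceeds the opportunity cost of the effort
# it takes.  The numbers are made up for illustration.

def units_pursued(marginal_utility, cost_per_unit):
    """marginal_utility(n) is the utility gained from the (n+1)-th unit."""
    n = 0
    while marginal_utility(n) > cost_per_unit:
        n += 1
    return n  # the desire counts as satiated at this point

def portion_value(n):
    # Eating: each extra portion is worth less than the one before.
    return 10 / (n + 1)

# If food is easy to get, satiation comes later; if each portion takes a lot
# of effort, the very same desire counts as satiated much sooner.
print(units_pursued(portion_value, cost_per_unit=1))   # 9
print(units_pursued(portion_value, cost_per_unit=4))   # 2
```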
If you’re hungry you might feel as though you could just keep eating and eating. However, if enough food is available, you’ll stop and hit a point where more food would make you feel worse instead of better. You’ll get hungry again, but part of the cycle includes satiation. For purposes of discussion, I’m talking about most people here, not those with eating disorders or unusual metabolisms that affect their ability to feel satiety.
I think most people have a limit on their desire for status, though that might be more like the situation you describe. Few would turn down a chance to be the world’s Dictator for Life, but they’ve hit a point where trying for more status than they’ve got seems like too much trouble.
Voluntary self-modification also requires a pre-existing desire to self-modify.
People have motives to increase their status, so we can check this box. Of course, this depends on phenotype, and some people do this much more than others.
I wouldn’t take a pill that made me want to initiate suicide attacks on people who insulted the prophet Mohammed, because I don’t really care if people insult the prophet Mohammed enough to want to die in a suicide attack defending him.
You can’t self-modify to an arbitrary belief, but you can self-modify towards other beliefs that are close to yours in belief space. See my comment about political writers. You can seek out political leaders, political groups, or even just friends, with beliefs slightly more radical than yours along a certain dimension (and you might be inspired to do so with just small exposure to them). Over time, your beliefs may shift.
If X doesn’t offend you, why would you self-modify to make X offend you in order to stop people from doing X, since X doesn’t offend you?
To protect or raise your own status, or that of a group you identify with. I proposed in that comment that people might enjoy feeling righteous while watching out for the interests of themselves and their in-group. When you get mad about stuff and complain about it, you feel like you are accomplishing something.
Thus, someone who responded with a cost/benefit calculation to all respectful and reasonable demands to stop offending, but continued getting touchy about disrespectful blame-based demands to stop offending, would be pretty hard to game.
The problem is that other people only care if you are with them or against them; they don’t care about your calculation.
The second problem is that it can be hard to distinguish these two things. People who have a sufficiently valid beef might be justified in making blame-based demands to stop offending, and your demand that they sound “respectful” and “reasonable” is itself unreasonable. Of course, people without a valid beef will use this exact same reasoning to insist that you can’t make a “tone argument” by asking them to sound more respectful and reasonable.
There might be a correlation between how offense is expressed and the “validity” of the underlying issue, but that correlation is low enough that it’s hard to predict the validity of the issue from how the offense reaction is expressed, which weakens the utility of the strategy you propose for identifying valid beefs.
However, your strategy might be useful as a Schelling Point for what sort of demands you’ll accept from others.
One difference between this post and the original essay I wrote, which more people liked, was that the original made it clearer that this was more advice for how people who were offended should communicate their displeasure, and less advice for whether people accused of offense should stop.
It may have been tough to get the message across, because the British salmon example is hypothetical. A real-world example of some group succeeding with claims of offense might be useful.
Okay. I formally admit I’m wrong about the “should usually stop offensive behavior” thing (or, rather, I don’t know if I’m wrong but I formally admit my previous arguments for thinking I was right no longer move me and I now recognize I am confused.)
I still believe that if you find something offensive, a request to change phrased in the language of harm-minimization is better than a demand to change phrased in the language of offense, but I don’t know if anyone is challenging that.
I still believe that if you find something offensive, a request to change phrased in the language of harm-minimization is better than a demand to change phrased in the language of offense, but I don’t know if anyone is challenging that.
“Request to change” is low status, while “demand to change” is high status. The whole point of taking offense is that some part of your brain detects a threat to your status or an opportunity to increase status, so how can it be “better” to act low status when you feel offended? Well, it may be better if you think you should dis-identify with that part of your brain, and believe that even if some part of your brain cares a lot about status, the real you doesn’t. But you have to make that case, or state it as an assumption, which you haven’t, as far as I can tell (although I haven’t carefully read this whole discussion).
Here’s an example in case the above isn’t clear. Suppose I’m the king of some medieval country, and one of my subjects publicly addresses me without kneeling or calling me “your majesty”. Is it better for me to request that he do so in the language of harm-minimization (“I’m hurt that you don’t consider me majestic”?), or to make a demand phrased in the language of offense?
It would be much better for you to make a request in the language of harm-minimization. If you do that sort of thing often, then it may so damage the aura of divine right (or whatever superstition your monarchy rests on) in that country that your descendants will never again be able to perpetrate the sort of crimes that your ancestors committed with impunity.
I still believe that if you find something offensive, a request to change phrased in the language of harm-minimization is better than a demand to change phrased in the language of offense, but I don’t know if anyone is challenging that.
I see at least two huge problems with the harm-minimization approach.
First, it requires interpersonal comparison of harm, which can make sense in very drastic cases (e.g. one person getting killed versus another getting slightly inconvenienced) but usually makes no sense in controversial disputes such as these.
Second, even if we can agree on the way to compare harm interpersonally, the game-theoretic concerns discussed in this thread clearly show that naive case-by-case harm minimization is unsound, since any case-by-case consequences of decisions can be overshadowed by the implications of the wider incentives and signals they provide. This can lead to incredibly complicated and non-obvious issues, where the law of unintended consequences lurks behind every corner. I have yet to see any consequentialists even begin to grapple with this problem convincingly, on this issue or any other.
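As a toy illustration of that incentive problem (not of any particular real dispute), here is a sketch of what happens if an arbiter naively sides with whichever party reports more harm; the harm distributions and the “report ten times your real harm” rule are pure assumptions.

```python
# Toy illustration of the incentive problem with naive case-by-case harm
# minimization.  The harm distributions and the exaggeration factor are
# assumptions made only for the sake of the example.

import random

random.seed(0)

def arbiter(reported_a, reported_b):
    """Naive case-by-case rule: side with whoever reports the greater harm."""
    return "A" if reported_a >= reported_b else "B"

def simulate(rounds, a_inflates):
    """Count how often A wins when A reports honestly vs. strategically."""
    wins_a = 0
    for _ in range(rounds):
        true_a = random.uniform(0, 1)            # A's actual harm if A loses
        true_b = random.uniform(0, 1)            # B's actual harm if B loses
        reported_a = true_a * (10 if a_inflates else 1)
        reported_b = true_b                      # B always reports honestly
        if arbiter(reported_a, reported_b) == "A":
            wins_a += 1
    return wins_a

print("A reports honestly:", simulate(10000, a_inflates=False), "wins / 10000")
print("A exaggerates 10x: ", simulate(10000, a_inflates=True), "wins / 10000")

# Once exaggeration reliably pays, everyone is pushed toward it, and the
# reports that the case-by-case rule depends on stop tracking actual harm.
```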
We may be talking at cross-purposes. Are you arguing that if someone says something I find offensive, it is more productive for me to respond in the form of “You are a bad person for saying that and I demand an apology” than “I’m sorry, but I was really hurt by your statement and I request that you not make it again”?
It depends; there is no universal rule. Either response could be more appropriate in different cases. There are situations where if someone’s statements overstep certain lines, the rational response is to deem this a hostile act and demand an apology with the threat of escalation. There are also situations where it makes sense to ask people to refrain from hurtful statements, since the hurt is non-strategic.
Also, what exactly do you mean by “productive”? People’s interests may be fundamentally opposed, and it may be that the response that better serves the strategic interest of one party can do this only at the other’s expense, with neither of them being in the right in any objective sense.
Maybe the most productive option is just to ignore the offender/offense?
On a slightly unrelated note, one psychologist I know has demonstrated to me that sometimes it’s more useful to agree with the offended party on the spot, whatever the complaint is, and just continue with the conversation. So I think in some situations this too may be a viable option.
To protect or raise your own status, or that of a group you identify with. I proposed in that comment that people might enjoy feeling righteous while watching out for the interests of themselves and their in-group.
So I can raise the status of my group by becoming a frequent complainer and encouraging my fellows to do likewise?
I won’t say that it never happens. I will say that the success prospects of that sort of strategy have been exaggerated of late.
Sure. See, for example, the rise in prominence of the Gnu Atheists (of which I am one).
If X doesn’t offend you, why would you self-modify to make X offend you in order to stop people from doing X, since X doesn’t offend you?
Surely there are a great many reasons other than offense why, for various different things X, it might be (or seem) useful to me to stop you from doing thing X. For example, if thing X is “mocking my beliefs”: if my beliefs are widely respected, I and people like me will have a larger share of influence than if my beliefs are widely mocked.
If X doesn’t offend you, why would you self-modify to make X offend you in order to stop people from doing X, since X doesn’t offend you?
Status games. There’s a satirical blog which addresses this, at least in the context of Western sophisticates:
...the threshold for being offended is a very important tool for judging and ranking white people. Missing an opportunity to be outraged is like missing a reference to Derrida; it’s social death.
ETA: In the context of the Islamic reaction to the Mohammed cartoons, as well as the burning of a Koran, there may be some value for a demagogue in conjuring up atrocities by some demonized enemy in order to unite his (and in this case, it will be “his”) followers. Westerners have done the same sorts of things as well, most obviously in wartime propaganda.
I’m not sure people can voluntarily self-modify in this way. Even if it’s possible, I don’t think most real people getting offended by real issues are primarily doing this.
I think such modification mostly happens on the level of evolution, especially cultural and memetic evolution. Individual humans are adaptation executers who can’t deliberately self-modify in this way, but those who are more pre-modified are more evolutionarily successful.