In more logic-oriented circles, when I have screwed up and gotten emotional, I have been instantly discredited and shunned, even after building a lot of credibility by having real content in what I say.
In well-functioning feminine/empathetic circles, the response to strong emotion is usually to pay more attention, take the topic seriously, and get curious. When this happens, the person calms down and the issue is addressed sanely. So I think the problem you’re looking at is the result of two different evolutionary strategies clashing.
This post is an expression of acknowledgement and deep dismay that “logic-oriented circles” and “empathetic circles” are considered mutually exclusive, and that they often attempt to deliberately shun and discredit each other.
I have yet to understand why, when someone is experiencing an overload of emotion, the logical response is not to listen to them until they calm down, and therefore increase the level of logic in the discussion.
I have yet to understand why, when someone is expressing a rational attempt to solve someone else’s emotional problem, the reaction is almost invariably hostility rather than appreciation for the attempt.
A proper rationalist recognizes that people are not always rational, and that tending to their emotional needs will lead to a more rational outcome in the long run.
A proper empath recognizes that emotions have consequences, and that these consequences need to be weighed rationally to the best of everyone’s rational capacity.
Keep honing your capacity to express your empathic understanding through logic, because it’s a sorely needed skill in these kinds of communities.
I have yet to understand why, when someone is experiencing an overload of emotion, the logical response is not to listen to them until they calm down, and therefore increase the level of logic in the discussion.
Actually, after discussions in this thread, I realized that this is a skill I should develop. (I don’t want to react like this all the time, just to be able to do this when I decide to; and to be aware of the situations where doing this might be the right choice.)
But whether it is the right choice or not, depends on circumstances. For this method to work well, there are a few conditions:
the person will eventually calm down and be able to communicate logically, because the person is not insane;
your listening will make the person calm down, because there are no other people interfering with the process and keeping the person emotionally overloaded (either by opposing the person, or by socially validating their emotional overload);
the person will still be there to communicate with after they calm down; they will not go away (in an internet discussion, leaving is both hard to predict and quite likely);
you have enough time to be there when the person calms down (also, your patience could be depleted);
the person will not cause significant preventable damage during the emotional overload, in which case your priority could be to prevent or reduce the damage (the damage can include emotional damage to witnesses of the emotional overload, damage to your reputation, etc.).
The situation differs between real life and the internet, depending on whether you know the person or not, and on how much, and in what ways, other people interfere. (Best circumstances: you know the person, you trust the person to be sane, no damage is being done, it’s just the two of you, and you both have enough time.)
I have yet to understand why, when someone is experiencing an overload of emotion, the logical response is not to listen to them until they calm down, and therefore increase the level of logic in the discussion. I have yet to understand why, when someone is expressing a rational attempt to solve someone else’s emotional problem, the reaction is almost invariably hostility rather than appreciation for the attempt.
Well, let’s back up a little.
Do you understand why, when I point a gun at your head and tell you to give me your wallet, the rational response is not necessarily to give me your wallet? More generally: do you understand why, when I threaten you, the rational response is not necessarily to accede to the threat?
Not really, no—but I may have an impairment in this regard. Can you walk me through it?
Compare the following two scenarios.
Scenario A: There are a thousand people, P1-P1000, and one mugger M. M threatens P1, P1 gives M their wallet. The next day, M threatens P2 and the same thing happens. Lather, rinse, repeat. Eventually other people become muggers, since it’s a lucrative line of work. Eventually everyone’s wallet is stolen.
Scenario B: As above, but P1 does not give M their wallet, and M shoots P1 and flees walletless. The next day, M threatens P2 and the same thing happens. Lather, rinse, repeat. Eventually M gives up mugging because it’s not a lucrative line of work.
I don’t mean to suggest that either of these scenarios is realistic; they aren’t. But given a choice between A and B, however unrealistic that choice, do you understand how a rational agent might prefer B? (EDIT: Or how a society of rational agents might want to create a framework of enforceable precommitments that incentivizes B to a point such that P1, when being mugged, will prefer B?)
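The dynamic behind the two scenarios can be sketched as a toy simulation. Every number here is assumed for illustration (one mugging attempt per day, a mugger who quits after five failed attempts); it is not a realistic model, only a way to see why Scenario B removes the incentive:

```python
def simulate(give_up_wallet, days=1000, mugger_patience=5):
    """Count wallets and lives lost under a uniform victim strategy.

    give_up_wallet=True is Scenario A, False is Scenario B. One mugging
    attempt per day; the mugger quits after `mugger_patience` failed
    (wallet-less) attempts. All parameters are assumed, not realistic.
    """
    wallets_lost = lives_lost = failures = 0
    mugger_active = True
    for _ in range(days):
        if not mugger_active:
            break
        if give_up_wallet:
            wallets_lost += 1      # mugging stays lucrative
        else:
            lives_lost += 1        # victim is shot, mugger gets nothing
            failures += 1
            if failures >= mugger_patience:
                mugger_active = False   # no longer a lucrative line of work
    return wallets_lost, lives_lost
```

Under these assumptions, `simulate(True)` loses all 1000 wallets, while `simulate(False)` costs five lives and then mugging stops.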
No, a rational agent with majority-human goals would prefer for others to do B, while itself doing A. At least if it cares about its life more than about the collective wallets of the others, modulo the impact that getting shot might conceivably have on the mugger’s future behavior.
Even using TDT makes no difference unless each agent valued a potential muggerless society over its own life. And that muggerless society would still assume that the other agents are similar enough to also use TDT, share the martyr trait, and not defect to save their own lives. It’s still “life or 1 wallet” for each individual. Not that TDT mandates valuing the collective wallets of arbitrarily many others over your own life.
Not to get sidetracked, though: I take issue with taking rational as also implying “caring about the welfare of society” over “caring about whether I live or die”. A rational agent doesn’t need to be that altruistic; it can just be rational about how to stay alive effectively, if that’s high on its priority list (the “effectively” captures the ‘instrumentally rational’ part), which would lead to giving up the wallet.
You can think of perfectly rational agents who crave nothing more than being shot the first chance they get (orthogonality thesis), so “a rational agent might prefer B” just comes down to “an agent might prefer B”, which is obviously true, since there can be agents preferring anything over anything.
IOW: “Do you understand how a rational agent might prefer B” is actually asking “Are you certain there can be no agents who prefer B”, for which the answer is a blanket “no” regardless of the B, so it’s not really pertinent to what y’all are discussing, bless your hearts.
Certainly, given a third choice C in which others don’t give up their wallets and P1 does, P1 chooses C. Agreed.
I take issue with taking rational as also implying “caring about the welfare of society” over “caring about whether I live or die”.
I agree. I take issue with you describing the question I asked in those terms, as opposed to “preferring a small chance of dying and a large chance of keeping my wallet over a large chance of losing my wallet.”
Not that TDT mandates valuing the collective wallets of arbitrarily many others over your own life.
True, it doesn’t.
there can be agents preferring anything over anything.
Sure. Not what I meant, but certainly true.
Anyway, if the only way you can imagine a rational agent choosing B over A is to posit it has radically different values from yours, then I suspect that I am unable to explain the thing you initially said you didn’t understand. Tapping out now.
[EDIT: I just realized that the original question I was trying to answer wasn’t your question to begin with, it was someone else’s. Sorry; my error. Nevertheless tapping out here.]
It’s a matter of whose perspective you take:
P1: That of the whole system, which—if seen as a distributed agent—may indeed sacrifice a few of its sub-agents to get rid of mugging
or
P2: that of the individual agent getting mugged, who has to make a choice: give up my wallet (including the impact that action will have on society as a whole) or give up my life.
The problem with how you’d like the probabilities to be presented is that you get “preferring a small chance of dying and a large chance of keeping my wallet over a large chance of losing my wallet” only when taking perspective P1.
Reason: An agent who has to actually make the choice is already being mugged and doesn’t get to say “a small chance of getting mugged”, because he is already getting mugged, no need for a counterfactual. So each agent who’s actually faced with the choice of whether to make the ultimate sacrifice only has a binary choice to make, with no probabilities other than 1 and 0 attached to it:
P(agent lives | gives up wallet) = 1. P(agent lives | doesn’t give up wallet) = 0.
I.e., no individual agent who has to make that choice immediately ever gets to include the “low probability of getting mugged” part; if it has to make the choice, then that case has already occurred, and it will always be its own life in exchange for saving the wallets of others.
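The perspective-dependence can be made concrete with a back-of-the-envelope expected-utility calculation. The utilities and population numbers below are invented for illustration; only the structure (probabilities collapsing to 0 and 1 once the mugging is underway) comes from the argument above:

```python
# Illustrative utilities and numbers; none of these come from the thread.
U_LIFE = -1000.0   # disutility of being shot
U_WALLET = -1.0    # disutility of losing a wallet

# Perspective P2: you are already being mugged, so the probabilities
# collapse to 0 and 1, as argued above.
ev_comply_now = 1.0 * U_WALLET   # live, lose wallet
ev_resist_now = 1.0 * U_LIFE     # die
assert ev_comply_now > ev_resist_now   # complying wins in the moment

# Perspective P1: choosing a policy for the whole system before anyone
# knows who gets mugged. If everyone complies, everyone eventually loses
# a wallet; if everyone resists, only a few early victims die before
# the muggers quit.
n = 10_000          # population size (assumed)
early_victims = 5   # victims shot before mugging stops paying (assumed)
ev_comply_policy = U_WALLET                       # certain wallet loss
ev_resist_policy = (early_victims / n) * U_LIFE   # small chance of dying
assert ev_resist_policy > ev_comply_policy   # resisting wins ex ante
```

The same utilities yield opposite preferred acts depending on which perspective sets the probabilities, which is exactly the tension between P1 and P2.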
Only “the society”, seen as an agent, would in that situation want to give up its sub-part (much to gain, not much to lose); not individual agents who value their lives a lot. They could do a precommitment (“If any of us gets mugged, we promise each other to die for the cause of a crimeless future society”), but once it comes down to their lives, unless those are quite un-human agents (value-wise; instrumental-rationality-wise we posited that they are rational), wouldn’t they just back out of it?
Compare it to defecting in a 1-iteration PD in which the payoff matrix is massively skewed in favor of defecting and you can control your opponent’s behavior.
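The game being invoked can be sketched minimally, with an assumed payoff matrix skewed so that defection strictly dominates (the numbers are made up; only the dominance structure matters):

```python
# A one-shot prisoner's dilemma with an assumed payoff matrix heavily
# skewed toward defection. Entries are the row player's payoffs.
payoff = {
    ("C", "C"): 3, ("C", "D"): -1000,   # cooperating risks disaster
    ("D", "C"): 5, ("D", "D"): 0,
}

# Defecting is strictly dominant: whatever the opponent does,
# "D" pays the row player more than "C".
for their_move in ("C", "D"):
    assert payoff[("D", their_move)] > payoff[("C", their_move)]
```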
(Most acts of standing up to a mugger and then getting shot probably have more to do with bravado and spur-of-the-moment fight-choosing in a fight-or-flight situation than with “I’ll die so that society may be muggerless”. Also, unlike in the scenario we’re discussing, those resisting a mugger in real-world scenarios have a significant chance of not dying, or even of defeating him; I’d reckon that also plays a major role in choosing when to fight. It’s not strictly a self-sacrifice; not even with religious martyrs, since they have that imaginary heaven concept to weigh the scales. An agent who deems self-sacrifice for a potential positive impact on society the most effective way of accomplishing its goals (which would necessarily be the case for a rational agent to choose so) doesn’t share many of its values with an overwhelming majority of humans. Intuitions about “standing up to muggers” muddle the assessment; I guess if we transformed the situation into an equivalent formulation, with the mugger exchanged for an all-powerful agent with a killing booth and a thing for wallets giving you the choice (with the same payoff matrix for the rest of society), my estimate would be less controversial.)
They could do a precommitment [..] but [..] wouldn’t they just back out of it?
So, first, I completely agree that precommitment is a key issue here. “An agent who has to actually make the choice is already being mugged,” as you say, is reliably true only if precommitment is impossible; if precommitment is possible then it’s potentially false.
And perhaps you’re right that humans are incapable of reliable precommitment in these sorts of contexts… that, as you suggest, whatever commitments a rational human agent makes, they’ll just back out of it once it comes down to their lives. If that’s true, then scenario B is highly unlikely, and a rational human agent doesn’t choose it.
I agree that real-world acts of mugger-defiance are not the result of a conscious choice to die so society will go muggerless.
I agree that an agent who deems self-sacrifice for a collective impact as the most effective way of accomplishing its goals in a broad range of contexts doesn’t share many of its values with an overwhelming majority of humans.
I am not as confident as you sound that an agent who deems self-sacrifice for a collective impact as the most effective way of accomplishing its goals in no contexts at all doesn’t share many of its values with an overwhelming majority of humans.
(Short tangent:)
Well, whenever I think of, e.g., some historical human figure, and imagine what an instrumentally-rational version of that figure would look like, I feel there is a certain tension: Would a really, really effective (human) plundering Hun still value plundering? Would an instrumentally-superpowered patriot still value some country-concept (say, Estonia) over his own life? I’m not questioning the general orthogonality thesis with this, just its applicability to humans.
Are there any historical examples you can think of where humans die for a cause, and where we’d expect (albeit all speculation) an instrumentally empowered human to still die for that cause? Still value that Estonian flag and the fuzzy feelings it brings over his own life, even while understanding that it was just some brainwashing, starting in his infancy?
Regarding the precommitment: the problem is that an agent can always still change its mind when it’s at that “life or wallet” junction. The reason is a bit tricky: if there is a credible precommitment with outside enforcement (say, you need to present your wallet daily to the authorities), then the agent will never get to the “life or wallet” junction; it’ll be a “life and the severe repercussions of breaking your precommitment or wallet and the possible benefits from the precommitment of sacrificing yourself, say a stipend for some family members” decision (which, depressingly, is how terrorist organisations sweeten the deal).
So whenever it’s actually just a “life or wallet” decision, any prior decision can be changed at a moment’s notice, given the absence of real-world and hard-to-avoid consequences for precommitment-defecting. And a rational agent which can change its action, and which evaluates the current circumstances as warranting a change, should change. I.e., it’s hard for any rational agent to precommit and stay true to that precommitment if it’s not forced to. And the presence of such force would alter the “life or wallet” hypothetical.
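The point about enforcement can be sketched numerically. The utilities and the penalty/benefit terms below are assumed for illustration; the structure mirrors the argument: without external consequences, backing out wins, and a sufficiently harsh enforced penalty flips the choice:

```python
def best_choice(penalty_for_complying=0.0, benefit_for_resisting=0.0,
                u_life=-1000.0, u_wallet=-1.0):
    """Decide at the 'life or wallet' junction, with optional enforced
    consequences attached by a precommitment scheme. All numbers are
    assumed for illustration."""
    comply = u_wallet + penalty_for_complying  # live, lose wallet, face penalty
    resist = u_life + benefit_for_resisting    # die, possibly compensated
    return "resist" if resist > comply else "comply"

# With nothing enforcing the earlier promise, backing out wins:
assert best_choice() == "comply"
# A sufficiently harsh enforced penalty for handing over the wallet
# flips the decision, since complying no longer buys survival on net:
assert best_choice(penalty_for_complying=-1500.0) == "resist"
```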
I agree that a “life and the severe repercussions of breaking your precommitment or wallet and the possible benefits from the precommitment of sacrificing yourself” decision, as opposed to a “life or wallet” decision with no possible benefits from such precommitments, is one way a human agent might end up choosing scenario B over scenario A even when mugged. (It’s not the only one, but as you say, it’s a typical one in the real world.)
If you let me know how I could have worded my original hypothetical to not exclude options like that, I would appreciate the guidance. I certainly didn’t mean to exclude them (or the other possibilities).
Maybe change
do you understand how a rational agent might prefer B?
to
do you understand how a society of rational agents might want to create a framework of enforceable precommitments that incentivizes B to a point such that P1, when being mugged, will prefer B?
For example, if anyone who gave up a wallet later received a death sentence for doing so, the loss of life would be factored out—in effect, being mugged would become a death sentence regardless of your choice, in which case it’d be much easier to hang on to your purse for the good of the many. (Even if society killing you otherwise could be construed as having a slightly alienating effect.)
Edited accordingly. Thanks.
What is your analogy between the mugger and the inconveniently emotional or inconveniently logical person?
http://acestoohigh.com/2012/04/23/lincoln-high-school-in-walla-walla-wa-tries-new-approach-to-school-discipline-expulsions-drop-85/
An effective program based on the premise that a lot of bad behavior is the result of stress, and that adding stress to ill-behaved people doesn’t work. I’d been meaning to post it here anyway, because it’s a change in high-school discipline which requires changing a number of factors at the same time.
That analogy is too convoluted to be worth unpacking.
But some people react with hostility to A’s “rational problem solving” in the face of B’s “emotional problems” because they see A as a threat. Which A might well be; this sort of framing can be a significant challenge to B’s credibility. (More generally, it’s a status challenge.) Similarly, some people react with hostility to B’s “overload of emotion” because they see that as a threat.
So understanding why acceding to a perceived threat isn’t necessarily the only rational response seems important if I want to understand the thing ialdabaoth has yet to understand.
As for stress-reduction as a behavior-manipulation tool… I’m all in favor of it when the power differential is sufficiently high in my favor. When the differential favors the ill-behaved person, though… well, I’m less sanguine. For example: yes, I understand being X in public frequently causes anxiety in non-Xes, which can sometimes lead them to bad behavior, but for many Xes the (oft suggested) response of not being X in public so as to reduce the incidence of that bad behavior seems importantly unjust.
Non-Violent Communication is a system for lowering anxiety in confrontations without giving in.
(nods) Fair enough. In cases where the underpowered person happens to know techniques for lowering the anxiety of the overpowered person without suffering additional penalties by so doing (e.g., has been trained in NVC), I’m more inclined to endorse them doing so.