On the topic of the “poisonous pleasure” of moralistic critique:
I am struck by the will to emotional neutrality which appears to exist among many “aspies”. It’s like a passive, private form of nonviolent resistance directed against neurotypical human nature; like someone who goes limp when being beaten up. They refuse to take part in the “emotional games”, and they refuse to resist in the usual way when those games are directed against them—the usual form of defense being a counterattack—because that would make them just as bad as the aggressor normals.
For someone like that, it may be important to get in touch with their inner moralizer! Not just for the usual reason—that being able to fight back is empowering—but because it’s actually a healthy part of human nature. The capacity to denounce, and to feel the sting of being denounced without exploding or imploding, is not just some irrational violent overlay on our minds, without which there would be nothing but mutual satisficing and world peace. It has a function and we neutralize it at our peril.
If the message you intend to send is “I am secure in my status. The attacker’s pathetic attempts at reducing my status are beneath my notice.”, what should you do? You don’t seem to think that ignoring the “attacks” is the correct course of action.
This is a genuine question. I do not know the answer and I would like to know what others think.
“I am secure in my status. The attacker’s pathetic attempts at reducing my status are beneath my notice.”
I think the real message is “The attacker’s attempt to reduce my status is too ineffective to need a response”.
On a good day I’d say “okay” so he knows I heard him, and then start a conversation with someone else, unless there’s some instrumental value in confronting him or continuing the conversation given that I now know he’s playing status games. I don’t know a good way to carry on a useful conversation with someone who is playing status games, so I’m stuck in that situation too.
If the message you intend to send is “I am secure in my status. The attacker’s pathetic attempts at reducing my status are beneath my notice.”, what should you do?
Ignoring the attempts is a good default. It gives a decent payoff while being easy to implement. More advanced alternatives are the witty, incisive comeback or the smooth, delicately calibrated communication of contempt for the attacker to the witnesses. In the latter case especially body language is the critical component.
I’m referring to that. Sending that message is an implicit lie—well, you could call it a “social fiction”, if you like a less loaded word.
It is also a message that is very likely to be misunderstood. (I don’t yet know my way around lesswrong well enough to find it again, but I think there’s an essay here someplace that deals with the likelihood of recipients understanding something completely different from what you intended, while you are unable to detect this because the interpretation you know shapes your perception of what you said.)
So if your true reaction is “you are just trying to reduce my status, and I don’t think it’s worth it for me to discuss this further”, my choice, given the option to not display it or to display it, would usually be to display it, if a reaction was expected of me.
I hope I was able to clarify my distinction between having a true reaction, and displaying it. In a nutshell, if you notice something, you have a reaction, and by not displaying it (when it is expected of you), you create an ambiguous situation that is not likely to communicate to the other person what you want it to communicate.
I don’t think these are normally useful ways of thinking about status posturing. Verbalising this stuff is a faux pas in the overwhelming majority of human social groups.
I’m not sure if I disagree with you on whether the message is “very likely” to be misunderstood. In my limited experience, and with my below-average people-reading skills, I’d say that most status jockeying in non-intimate contexts is obvious enough for me to notice if I’m paying attention to the interaction.
The post you meant is probably Illusion of Transparency. I contend that it applies less strongly to in-person status jockeying than to linguistic information transfer. I suggest you watch a clip of a foreign-language movie if you disagree.
So if your true reaction is “you are just trying to reduce my status, and I don’t think it’s worth it for me to discuss this further”, my choice, given the option to not display it or to display it, would usually be to display it, if a reaction was expected of me.
This can work sometimes, but in most contexts it is difficult to pull off without sounding awkward or crude. At best it conveys that you are aware that social dynamics exist but aren’t quite able to navigate them smoothly yet. Mind you, unless there is a pre-existing differential in status or social skills in their favour, they will tend to come off slightly worse than you in the exchange. A costly punishment.
It’s like a passive, private form of nonviolent resistance directed against neurotypical human nature; like someone who goes limp when being beaten up.
Mitchell, yes, that was me back in high school. But IIRC I thought I was doing this.
You don’t need to be angry to hit someone, or to spread gossip, or to otherwise retaliate against them. If you recognise that someone is a threat or an obstacle you can deal with them as such without the cloud of rage that makes you stupider. You do not need to be angry to decide that someone is in your way and that it will be necessary to fuck them up.
If you recognise that someone is a threat or an obstacle you can deal with them as such without the cloud of rage that makes you stupider.
Then why didn’t humans evolve to perform rational calculations of whether retaliation is cost-effective, instead of evolving uncontrollable rage? The answer, of course, is largely in Schelling. The propensity to lose control when enraged is a strategic precommitment to lash out if certain boundaries are overstepped.
Now of course, in the modern world there are many more situations where this tendency is maladaptive than in the human environment of evolutionary adaptedness. Nevertheless, I’d say that in most situations in which it enters the strategic calculations it’s still greatly beneficial.
Now of course, in the modern world there are many more situations where this tendency is maladaptive than in the human environment of evolutionary adaptedness. Nevertheless, I’d say that in most situations in which it enters the strategic calculations it’s still greatly beneficial.
I agree, or at least agree for situations where people are in their native culture or one they’re intimately familiar with, so that they’re relatively well-calibrated. What I wrote was poorly phrased to the point of being wrong without lawyerly cavilling.
To rephrase more carefully: you can act in a manner that gets the same results as anger without being angry. You can have a better, more strategic response. I’m not claiming it’s easy to rewire yourself like this, but it’s possible. If your natural anger response is anomalously low, as is the case for myself and many others on the autism spectrum, and you’re attempting some relatively hardcore rewiring anyway, why not go for the strategic analysis instead of trying to decrease your threshold for blowing up?
I’m not sure if you understand the real point of precommitment. The idea is that your strategic position may be stronger if you are conditionally committed to act in ways that are irrational if these conditions are actually realized. Such precommitment is rational on the whole because it eliminates the opponent’s incentives to create these conditions, so if the strategy works, you don’t actually have to perform the irrational act, which remains just a counterfactual threat.
In particular, if you enter confrontations only when it is cost-effective to do so, this may leave you vulnerable to a strategy that maneuvers you into a situation where surrender is less costly than fighting. However, if you’re precommitted to fight even irrationally (i.e. if the cost of fighting is higher than the prize defended), this makes such strategies ineffective, so the opponent won’t even try them.
So for example, suppose you’re negotiating the price you’ll charge for some work, and given the straightforward cost-benefit calculations, it would be profitable for you to get anything over $10K, while it would be profitable for the other party to pay anything under $20K, so the possible deals are in that range. Now, if your potential client resolutely refuses to pay more than $11K, and if it’s really impossible for you to get more, it is still rational for you to take that price rather than give up on the deal. However, if you are actually ready to accept this price given no other options, this gives the other party the incentive to insist with utter stubbornness that no higher price is possible. On the other hand, if you signal credibly that you’d respond to such a low offer by getting indignant that your work is valued so little and leaving angrily, then this strategy won’t work, and you have improved your strategic position—even though getting angry and leaving is irrational assuming that $11K really is the final offer.
(Clearly, the strategy goes both ways, and the buyer is also better off if he gets “irrationally” indignant at high prices that still leave him with a net plus. Real-life negotiations are complicated by countless other factors as well. Still, this is a practically relevant example of the basic principle.)
Now of course, an ideally rational agent with perfect control of his external behavior would play the double game of signaling such precommitment convincingly but falsely, and yielding if the bluff is called (or perhaps not, if there would be consequences for his reputation). This however is normally impossible for humans, so you’re better off with the real precommitment that your emotional propensity to anger provides. Of course, if your emotional propensities are miscalibrated in any way, this can lead to strategic blunders instead of benefits—and the quality of this calibration is a very significant part of what differentiates successful from unsuccessful people.
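The bargaining logic of the $10K–$20K example can be sketched as a toy calculation. (This is just an illustrative model, not anyone’s actual negotiation procedure; the 16K walk-away threshold is a number I picked for illustration, and “indignation” is modeled simply as a credible commitment to walk away below that threshold.)

```python
# Toy model of the negotiation above: the seller profits above $10K,
# the buyer profits below $20K. A rational buyer offers the least the
# seller will credibly accept; a seller precommitted to walk away
# (get indignant) below a threshold therefore extracts a better price.

def buyer_best_offer(walk_away_below, buyer_ceiling=20_000):
    """The buyer's best stubborn offer: the seller's credible
    walk-away threshold, provided the deal still profits the buyer."""
    if walk_away_below > buyer_ceiling:
        return None  # no deal possible
    return walk_away_below

# Without precommitment the seller accepts anything over cost ($10K),
# so the buyer's stubborn $11K offer succeeds.
print(buyer_best_offer(walk_away_below=11_000))  # 11000

# Credibly precommitted to leave angrily below $16K, the seller
# gets $16K instead -- without ever actually having to walk away.
print(buyer_best_offer(walk_away_below=16_000))  # 16000
```

Note that the threat is never executed on the equilibrium path: the precommitment changes the buyer’s best response, which is the whole point of the Schelling argument.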
I’m not sure if you understand the real point of precommitment. The idea is that your strategic position may be stronger if you are conditionally committed to act in ways that are irrational if these conditions are actually realized. Such precommitment is rational on the whole because it eliminates the opponent’s incentives to create these conditions, so if the strategy works, you don’t actually have to perform the irrational act, which remains just a counterfactual threat.
I agree with what you are saying and would perhaps have described it as “ways that would otherwise have been irrational”.
I obviously need to work on phrasing things more clearly.
Anger functions as a strategic precommitment which improves your bargaining position. Two examples of a precommitment would be as follows: (1) A car buyer going to a dealership with a contract stating that for every dollar they pay over a predetermined price (manufacturer’s price plus average industry margin, presumably) they must pay ten dollars to some other party (who can credibly hold them to it). (2) Destroying your means of retreat when you plan aggression against another party, so that you have no motive to hold anything back, like Cortes did when he burned his ships upon landing in Mexico.
Now (1) is more like anger than (2) is because it’s a public signal, but both of them reduce your options to strengthen your position, (1) in a negotiation, (2) as a committed, cohesive group. (1) is very much like throwing the steering wheel out the window in the game of chicken. Pretending your hands are tied and you can’t go above/below the stated price without going further up the chain of command is actually one of those negotiating tricks that are in all the books, like the car salesman who goes “Oh, I’m not sure; I’ll have to consult my boss” and smokes a cigarette in the office before coming back and agreeing to a lower price.
Swimmer963 asked me:
If you’re not angry, what would motivate you to do any of those things?
and I replied
If you are dealing with someone in your social circle, or can be seen by someone in your social circle and you want to build or maintain a reputation as someone it is not wise to cross. Even if it’s more or less a one shot game, if you make a point of not being a doormat it is likely to impact your self-image, which will impact your behaviour, which will impact how others treat you.
Even if in the short run retaliating helps nobody and slightly harms you, it can be worth it for reputational and self-concept reasons.
which I think shows at least a weak grasp of how these precommitments can work; one builds a reputation, and given that we’re meatbags with malleable conceptions of self, one has a reason to make such precommitments even when they cannot affect our reputation.
If “normally impossible” means very, very hard I agree completely; robust self-behavioural modification is hard even for small things, never mind for something as difficult to bring into conscious awareness or control as anger.
Would you consider expanding upon quality of calibration?
Yes, I think we understand each other now. Funny, I had the “must consult my boss” trick pulled on me just a few days ago by a guy whom I called up to haul off some trash. I still managed to make him lower the supposedly boss-mandated price by about 20%. (And when I later thought about the whole negotiation more carefully, I realized I could have probably lowered it much more.)
Regarding the quality of calibration, it’s straightforward. Emotional reactions can serve as strategic precommitments the way we just discussed, and often they also serve as decision heuristics in problems where one lacks the necessary information and processing power for a conscious rational calculation. In both cases, they can be useful if they are well-calibrated to produce strategically sound actions, but if they’re poorly calibrated, they can lead to outright irrational and self-destructive behavior.
So for example, if you fail to feel angry indignation when appropriate, you’re in danger of others maneuvering you into a position where they’ll treat you as a doormat, both in business and in private life. On the other hand, if such emotions are triggered too easily, you’ll be perceived as short-tempered, unreasonable, and impossible to deal with, again with bad consequences, both professional and private.
It seems to me that the key characteristic that distinguishes high achievers is the excellent calibration of their emotional reactions—especially compared to people who are highly intelligent and conscientious and nevertheless have much less to show for it.
You don’t need to be angry to hit someone, or to spread gossip, or to otherwise retaliate against them.
If you’re not angry, what would motivate you to do any of those things? If someone injures me in some way or takes something that I wanted, usually neither hitting them nor spreading gossip about them will in any way help me repair my injury or get back what they took from me. So I don’t. Unless I’m angry, in which case it kind of just happens, and then I regret it because it usually makes the situation worse.
If you’re not angry, what would motivate you to do any of those things?
Put simply, sometimes displaying a strong emotional response (genuine or otherwise) is the only way to convince someone that you’re serious about something. This seems to be particularly true when dealing with people who aren’t inclined to use more ‘intellectual’ communication methods.
Put simply, sometimes displaying a strong emotional response (genuine or otherwise) is the only way to convince someone that you’re serious about something. This seems to be particularly true when dealing with people who aren’t inclined to use more ‘intellectual’ communication methods.
I think you’re right. Mind you, as someone who is interested in communication that doesn’t involve control via strong emotional responses, I most definitely don’t reward bad behaviour by giving the other what they want. This applies especially if they use aggressive tactics of the kind mentioned here. I treat those as attacks and respond in such a way as to discourage any further aggression by them or other witnesses.
This is not to say I don’t care about the other’s experience or desires, nor does it mean that a strong emotional response will rule out me giving them what they want. If the other is someone that I care about I will encourage them towards expressions that actually might work for getting me to give them what they want. I’ll guide them towards asking me for something and perhaps telling me why it matters to them. This is more effective than making demands or attempting to emotionally control.
I’m far more generous than I am vulnerable to dominance attempts, and I’m actually willing to consciously make myself vulnerable to personal requests, to just short of the line of outright weakness, because I have a strong preference for that mode of communication. Mind you, even this tends to be strongly conditional on a certain degree of reciprocation.
Point being that I agree with the sometimes qualifier; the benefit to such displays (genuine or otherwise) is highly variable. We also have the ability to influence whether people make such displays to us. Partly by the incentive they have and partly by simple screening.
Seems true. Nevertheless I’ve never used it in this way. This may have more to do with my personality than anything: from what I’ve read here, I’m more of a conformist than the average Less Wrong reader, and I put a higher value on social harmony. I hate arguments that turn personal and emotional.
I might hit someone because they’re pointing a gun at me and I believe hitting them is the most efficient way to disarm them. I might hit someone because they did something dangerous and I believe hitting them is the most efficient way to condition them out of that behavior. I might spread gossip about them because they are using their social status in dangerous ways and I believe gossiping about them is the best available way of reducing their status.
None of those cases require anger, and they might even make the situation better. (Or they might not.)
Or, less nobly, I might hit someone because they have $100 I want, and I think that’s the most efficient way to rob them. I might spread gossip about them because we’re both up for the same promotion and I want to reduce their chance of getting it.
None of those cases require anger, either. (And, hey, they might make the situation better, too. Or they might not.)
I suppose the context of my comment was limited to a) me personally (I don’t have any desire to steal money or reduce other people’s chances of promotion) and b) to the situations I have encountered in the past (no guns or danger involved). Your points are very valid though.
If you’re not angry, what would motivate you to do any of those things?
If you are dealing with someone in your social circle, or can be seen by someone in your social circle and you want to build or maintain a reputation as someone it is not wise to cross. Even if it’s more or less a one shot game, if you make a point of not being a doormat it is likely to impact your self-image, which will impact your behaviour, which will impact how others treat you.
Even if in the short run retaliating helps nobody and slightly harms you, it can be worth it for reputational and self-concept reasons.
Point taken. I am a doormat. People have told me this over and over again, so I probably have a reputation as a doormat, but that has certain value in itself; I have a reputation as someone who is dependable, loyal, and does whatever is asked of me, which is useful in a work context.
It has a function and we neutralize it at our peril.
Can you be more specific? What exactly are the dangers of neutralizing our “inner moralizers”?
Also, see my previous comments, which may be applicable here. I speculate that “aspies” free up a large chunk of the brain for other purposes when they ignore “emotional games”, and it’s not clear to me that they should devote more of their cognitive resources toward such games.
Can you be more specific? What exactly are the dangers of neutralizing our “inner moralizers”?
Having brought up this topic, I find that I’m reluctant to now do the hard work of organizing my thoughts on the matter. It’s obvious that the ability to moralize has a tactical value, so doing without it is a form of personal or social disarmament. However, I don’t want to leave the answer at that Nietzschean or Machiavellian level, which easily leads to the view that morality is a fraud but a useful fraud, especially for deceptive amoralists. I also don’t want to just say that the human utility function has a term which attaches significance to the actions, motives and character of other agents, in such a way that “moralizing” is sometimes the right thing to do; or that labeling someone as Bad is an efficient heuristic.
I have glimpsed two rather exotic reasons for retaining one’s capacity for “judging people”. The first is ontological. Moral judgments are judgments about persons and appeal to an ontology of persons. It’s important and useful to be able to think at that level, especially for people whose natural inclination is to think in terms of computational modules and subpersonal entities. The second is that one might want to retain the capacity to moralize about oneself. This is an intriguing angle because the debate about morality tends to revolve around interactions between persons, whether morality is just a tool of the private will to power, etc. If the moral mode can be applied to one’s relationship to reality in general (how you live given the facts and uncertainties of existence, let’s say), and not just to one’s relationship to other people, that gives it an extra significance.
The best answer to your question would think through all that, present it in an ordered and integrated fashion, and would also take account of all the valid reasons for not liking the moralizing function. It would also have to ground the meaning of various expressions that were introduced somewhat casually. But—not today.
In another comment on this post, Eugine Nier linked to Schelling. I read that post, and the Slate page that mentions Schelling vs. Vietnam, and it became clear to me that acting moral acts as an “antidote” to these underhanded strategies that count on your opponent being rational. (It also serves as a Gödelian meta-layer to decide problems that can’t be decided rationally.)
If, in Schelling’s example, the guy who is left with the working radio set is moral, he might reason that “the other guy doesn’t deserve the money if he doesn’t work for it”, and from that moral strong point refuse to cooperate. Now if the rationalist knows he’s working with a moralist, he’ll also know that his immoral strategy won’t work, so he won’t attempt it in the first place—a victory for the moralist in a conflict that hasn’t even occurred (in fact, the moralist need never know that the rationalist intended to cheat him).
This is different from simply acting irrationally in that the moralist’s reaction remains predictable.
So it is possible that moral indignation helps me to prevent other people from maneuvering me into a position where I don’t want to be.
It occurs to me that I’m not less judgmental than the typical human, just judgmental in a different way and less vocal about it (except in the “actions speak louder than words” sense). My main judgement of a person is just whether it is worth my time to talk to / work with / play with / care about that person, and if my “inner moralizer” says no, I simply ignore or get away from them. I’m not sure if I can be considered an “aspie” but I suspect many of them are similar in this way.
Compared to what’s more typical, this method of “moralizing” seems to have all of the benefits you listed (except the last one, “If the moral mode can be applied to one’s relationship to reality in general”, which I don’t understand) but fewer costs. It is less costly in mental resources, and less likely to get you involved in negative-sum situations. I note that it wouldn’t have worked well in an ancestral environment where you lived in a small tribe and couldn’t ignore or get away from others freely, which perhaps explains why it doesn’t come naturally to most people despite its advantages.
the benefits you listed (except the last one, “If the moral mode can be applied to one’s relationship to reality in general”, which I don’t understand)
See the comments here on the psychological meaning of “kingship”. That’s one aspect of the “relationship to reality” I had in mind. If you subtract from consideration all notions of responsibility towards other people, are all remaining motivations fundamentally hedonistic in nature, or is there a sense in which you could morally criticize what you were doing (or not doing), even if you were the only being that existed?
There is a tendency, in discussions here and elsewhere about ethics, choice, and motivation, either to reduce everything to pleasure and pain, or to a functionalist notion of preference which makes no reference to subjective states at all. Eliezer advocates a form of moral realism (since he says the word “should” has an objective meaning), but apparently the argument depends on behavior (in the real world, you’d pull the child on the train tracks out of harm’s way) and on the hypothesized species-universality of the relevant cognitive algorithms. But that doesn’t say what is involved in making the judgment, or in making the meta-judgment about how you would act. Subjectively, are we to think of such judgments as arising from emotional reactions (e.g. basic emotions like disgust or fear)? It leaves open the question of whether there is a distinctive moral modality—a mode of perception or intuition—and my further question would be whether it only applies to other people (or to relations between you the individual and other people), or whether it can ever apply to yourself in isolation. In culture, I see a tendency to regard choices about how to live (that don’t impact on other people) as aesthetic choices rather than ethical choices.
The Zen thing to do would be to flame you with absurd viciousness for being excessively vague in your own request for clarification, in the hope that your response would be combative (rather than purely analytical), but still appropriate—because then you would have provided the example yourself. But that’s a high-risk conversational strategy. :-)
For someone [with at least a shade of Asperger’s Syndrome], it may be important to get in touch with their inner moralizer!
Agreed, although I don’t know that I have any Asperger’s. Here’s a sample dialogue I actually had that would have gone better if I had been in touch with my inner moralizer. I didn’t record it, so it’s paraphrased from memory:
X: It’s really important to me what happens to the species a billion years from now. (X actually made a much longer statement, with examples.)
Me: Well, you’re human, so I don’t think you can really have concerns about what happens a billion years from now because you can’t imagine that period of time. It seems much more likely that you perceive talking about things a billion years off to be high status, and what you really want is the short term status gain from saying you have impressive plans. People aren’t really that altruistic.
X: I hate it when people point out that there are two of me. The status-gaming part is separate from the long-term planning part.
Me: There is only one of you, and only one of me.
X: You’re selfish! (This actually made more sense in the real conversation than it does here. This was some time ago and my memory has faded.)
Me: (I exited the conversation at this point. I don’t remember how.)
I exited because I judged that X was making something he perceived to be an ad-hominem argument, and I knew that X knew that ad-hominem arguments were fallacious, and I couldn’t deal with the apparent dishonesty. It is actually true that I am selfish, in the sense that I acknowledge no authority over my behavior higher than my own preferences. This isn’t so bad given that some of my preferences are that other people get things they probably want. Today I’m not sure X was intending to make an ad-hominem argument. This alternative for my last step would have been better:
Me if I were in touch with my inner moralizer: Do I correctly understand that you are trying to make an ad-hominem argument?
If I had taken that path, I would either have clear evidence that X is dishonest, or a more interesting conversation if he wasn’t; either way would have been better.
When I visualize myself taking the alternative I presently prefer, I also imagine myself stepping back so I would be just out of X’s reach. I really don’t like physical confrontation.
My original purpose here was to give an example, but the point at the end is interesting: if you’re going to denounce, there’s a small chance that things might escalate, so you need to get clear on what you want to do if things escalate.
Me: Well, you’re human, so I don’t think you can really have concerns about what happens a billion years from now because you can’t imagine that period of time.
In what sense are you using the word imagine, and how hard have you tried to imagine a billion years?
In what sense are you using the word imagine, and how hard have you tried to imagine a billion years?
I have a really poor intuition for time, so I’m the wrong person to ask.
I can imagine a thousand things as a 10x10x10 cube. I can imagine a million things as a 10x10x10 arrangement of 1K cubes. My visualization for a billion looks just like my visualization for a million, and a year seems like a long time to start with, so I can’t imagine a billion years.
In order to have desires about something, you have to have a compelling internal representation of that something so you can have a desire about it.
X didn’t say “I can too imagine a billion years!”, so none of this pertains to my point.
My visualization for a billion looks just like my visualization for a million, and a year seems like a long time to start with, so I can’t imagine a billion years.
Would it help to be more specific? Imagine a little cube of metal, 1mm wide. Imagine rolling it between your thumb and fingertip, bigger than a grain of sand, smaller than a peppercorn. Yes?
A one-litre bottle holds 1 million of those. (If your first thought was the packing ratio, your second thought should be to cut the corners off to make cuboctahedra.)
Now imagine a cubic metre. A typical desk has a height of around 0.75m, so if its top is a metre deep and 1.33 metres wide (quite a large desk), then there is 1 cubic metre of space between the desktop and the floor.
It takes 1 billion of those millimetre cubes to fill that volume.
Now find an Olympic-sized swimming pool and swim a few lengths in it. It takes 2.5 trillion of those cubes to fill it.
Fill it with fine sand of 0.1mm diameter, and you will have a few quadrillion grains.
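The counts in this sequence are easy to sanity-check with a few lines of arithmetic (treating the pool as the nominal 50 m × 25 m × 2 m, and the sand grains as 0.1 mm cubes for simplicity):

```python
# Sanity-check the volume arithmetic above, counting 1 mm cubes throughout.

litre_mm3 = 100 * 100 * 100            # 1 litre = a 10 cm cube, in mm^3
cubic_metre_mm3 = 1000 ** 3            # 1 m^3 in mm^3
pool_mm3 = 50_000 * 25_000 * 2_000     # 50 m x 25 m x 2 m pool, in mm

print(litre_mm3)        # 1000000       -> a million cubes per bottle
print(cubic_metre_mm3)  # 1000000000    -> a billion under the desk
print(pool_mm3)         # 2500000000000 -> 2.5 trillion in the pool

# Sand grains of 0.1 mm diameter, treated as 0.1 mm cubes:
print(pool_mm3 / 0.1**3)  # ~2.5e15 -> a few quadrillion grains
```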
A bigger problem I have with the original is where X says “It’s really important to me what happens to the species a billion years from now.” The species, a billion years from now? That sounds like a failure to comprehend just what a billion years is: the time that life has existed on Earth so far. I confidently predict that a billion years hence, not a single presently existing species, including us, will still exist in anything much like its present form, even imagining “business as usual” and leaving aside existential risks and singularities.
First, I imagine a billion bits. That’s maybe 15 minutes of high quality video, so it’s pretty easy to imagine a billion bits. Then I imagine that each of those bits represents some proposition about a year—for example, whether or not humanity still exists. If you want to model a second proposition about each year, just add another billion bits.
That’s maybe 15 minutes of high quality video, so it’s pretty easy to imagine a billion bits.
Perhaps I don’t understand your usage of the word ‘imagine’, because this example doesn’t really help me ‘imagine’ them at all. I can imagine their result (the high quality video), sure, but not the bits themselves.
Well, you’re human, so I don’t think you can really have concerns about what happens a billion years from now because you can’t imagine that period of time.
I can’t imagine the difference between sixteen million dollars and ten million dollars—in my imagination, the stuff I do with the money is exactly the same. I definitely prefer 16 to 10 though. In much the same way, my imagination of a million dollars and a billion dollars doesn’t differ too much; I would also prefer the billion. I don’t know if I need to imagine a billion years accurately in order to prefer it, or have concerns about it becoming less likely.
Agreed, although I don’t know that I have any Asperger’s. Here’s a sample dialogue I actually had that would have gone better if I had been in touch with my inner moralizer.
One of the great benefits that being in touch with the inner moralizer can have is that it can warn you about how what you say will be interpreted by another. It would probably recommend against speaking your first paragraph, for example.
I suspect the inner moralizer would also probably not treat the “You’re selfish” as an ad hominem argument. It technically does apply, but from within a moral model what is going on isn’t of the form of the ad hominem fallacy. It is more of the form:
Not expressing and expecting others to express a certain moral position is bad.
You are bad.
You should fear the social consequences of being considered bad.
You should change your moral position.
I’m not saying the above is desirable reasoning—it’s annoying and has its own logical problems. But it is also a different underlying mistake than the typical ad hominem.
One of the great benefits that being in touch with the inner moralizer can have is that it can warn you about how what you say will be interpreted by another. It would probably recommend against speaking your first paragraph, for example.
If it works that way, I don’t want it. My relationship with X has no value to me if the relevant truths cannot be told, and so far as I can tell that first paragraph was both true and relevant at the time.
Now if that had been a coworker with whom I needed ongoing practical cooperation, I would have made some minimal polite response just like I make minimal polite responses to statements about who is winning American Idol.
...But it is also a different underlying mistake than the typical ad hominem.
Okay, there might be some detailed definition of ad hominem that doesn’t exactly match the mistake you described. I presently fail to see how the difference is important. The purpose of both ad hominem and your offered interpretation is to use emotional manipulation to get the target (me in this example) to shut up. Would I benefit in some way from making a distinction between the fallacy you are describing and ad hominem?
Could you be more specific? Is the “inner moralizer”, as opposed to, say, an “inner consequentialist”, a virtue given the human condition (given how the brain is wired), or is it an “objectively good solution given limited cognitive resources”? Is your statement about humans, or about moralization?
I am still thinking this through. It’s a very subtle topic. But having begun to think about it, the sheer number of arguments that I have found (which are in favor of preserving and employing the moral perspective) encourages me to believe that I was right—I’m just not sure where to place the emphasis! Of course there is such a thing as moral excess, addiction to moralizing, and so forth. But eschewing moral categories is psychologically and socially utopian (in a bad sense), the intersubjective character of the moral perspective has a lot going for it (it’s cognitively holistic since it is about whole agents criticizing whole agents; you can’t forgive someone unless you admit that they have wronged you; something about how you can’t transcend the moral perspective, in the attractive emotional sense, unless you understand it by passing through it)… I wouldn’t say it’s just about computational utility.
I must clarify that I’ve been concerned with contrasting the function of moralization with the mechanism of moralization, which is ingrained very deeply, to the point that without enough praise children develop dysfunctionally, etc.
Sarcasm.
Ignoring the attempts is a good default. It gives a decent payoff while being easy to implement. More advanced alternatives are the witty, incisive comeback or the smooth, delicately calibrated communication of contempt for the attacker to the witnesses. In the latter case especially body language is the critical component.
My opinion? I’d not lie. You’ve noticed the attempt, why claim you didn’t? Display your true reaction.
Noticing the attempt and doing nothing is not a lie. It is a true reaction.
I’m referring to that. Sending that message is an implicit lie—well, you could call it a “social fiction”, if you like a less loaded word.
It is also a message that is very likely to be misunderstood (I don’t yet know my way around lesswrong well enough to find it again, but I think there’s an essay here someplace that deals with the likelihood of recipients understanding something completely different from what you intended, without your being able to detect this, because the interpretation you know shapes your perception of what you said).
So if your true reaction is “you are just trying to reduce my status, and I don’t think it’s worth it for me to discuss this further”, my choice, given the option to not display it or to display it, would usually be to display it, if a reaction was expected of me.
I hope I was able to clarify my distinction between having a true reaction, and displaying it. In a nutshell, if you notice something, you have a reaction, and by not displaying it (when it is expected of you), you create an ambiguous situation that is not likely to communicate to the other person what you want it to communicate.
implicit lie vs. social fiction
I don’t think these are normally useful ways of thinking about status posturing. Verbalising this stuff is a faux pas in the overwhelming majority of human social groups.
I’m not sure if I disagree with you on whether the message is “very likely” to be misunderstood. In my limited experience, and with my below-average people-reading skills, I’d say that most status jockeying in non-intimate contexts is obvious enough for me to notice if I’m paying attention to the interaction.
The post you meant is probably Illusion of Transparency. I contend that it applies less strongly to in person status jockeying than to lingual information transfer. I suggest you watch a clip of a foreign language movie if you disagree.
Yes, that’s the post I was referring to. Thank you!
This can work sometimes, but in most contexts it is difficult to pull off without sounding awkward or crude. At best it conveys that you are aware that social dynamics exist but aren’t quite able to navigate them smoothly yet. Mind you, unless there is a pre-existing differential in status or social skills in their favour, they will tend to come off slightly worse than you in the exchange. A costly punishment.
Mitchell, yes, that was me back in high school. But IIRC I thought I was doing this.
You don’t need to be angry to hit someone, or to spread gossip, or to otherwise retaliate against them. If you recognise that someone is a threat or an obstacle you can deal with them as such without the cloud of rage that makes you stupider. You do not need to be angry to decide that someone is in your way and that it will be necessary to fuck them up.
Then why didn’t humans evolve to perform rational calculations of whether retaliation is cost-effective instead of uncontrollable rage? The answer, of course, is largely in Schelling. The propensity to lose control when enraged is a strategic precommitment to lash out if certain boundaries are overstepped.
Now of course, in the modern world there are many more situations where this tendency is maladaptive than in the human environment of evolutionary adaptedness. Nevertheless, I’d say that in most situations in which it enters the strategic calculations it’s still greatly beneficial.
I agree, or at least agree for situations where people are in their native culture or one they’re intimately familiar with, so that they’re relatively well-calibrated. What I wrote was poorly phrased to the point of being wrong without lawyerly cavilling.
To rephrase more carefully; you can act in a manner that gets the same results as anger without being angry. You can have a better, more strategic response. I’m not claiming it’s easy to rewire yourself like this, but it’s possible. If your natural anger response is anomalously low, as is the case for myself and many others on the autism spectrum, and you’re attempting some relatively hardcore rewiring anyway, why not go for the strategic analysis instead of trying to decrease your threshold for blowing up?
I’m not sure if you understand the real point of precommitment. The idea is that your strategic position may be stronger if you are conditionally committed to act in ways that are irrational if these conditions are actually realized. Such precommitment is rational on the whole because it eliminates the opponent’s incentives to create these conditions, so if the strategy works, you don’t actually have to perform the irrational act, which remains just a counterfactual threat.
In particular, if you enter confrontations only when it is cost-effective to do so, this may leave you vulnerable to a strategy that maneuvers you into a situation where surrender is less costly than fighting. However, if you’re precommitted to fight even irrationally (i.e. if the cost of fighting is higher than the prize defended), this makes such strategies ineffective, so the opponent won’t even try them.
So for example, suppose you’re negotiating the price you’ll charge for some work, and given the straightforward cost-benefit calculations, it would be profitable for you to get anything over $10K, while it would be profitable for the other party to pay anything under $20K, so the possible deals are in that range. Now, if your potential client resolutely refuses to pay more than $11K, and if it’s really impossible for you to get more, it is still rational for you to take that price rather than give up on the deal. However, if you are actually ready to accept this price given no other options, this gives the other party the incentive to insist with utter stubbornness that no higher price is possible. On the other hand, if you signal credibly that you’d respond to such a low offer by getting indignant that your work is valued so little and leaving angrily, then this strategy won’t work, and you have improved your strategic position—even though getting angry and leaving is irrational assuming that $11K really is the final offer.
(Clearly, the strategy goes both ways, and the buyer is also better off if he gets “irrationally” indignant at high prices that still leave him with a net plus. Real-life negotiations are complicated by countless other factors as well. Still, this is a practically relevant example of the basic principle.)
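The bargaining example above can be sketched as a toy model. The $10K and $20K thresholds come from the example; the $1K step size and the specific “anger floor” values are my own illustrative assumptions:

```python
# Toy model of the negotiation example above. The seller profits on
# anything over $10K, the buyer on anything under $20K. "Anger" is
# modelled as a credible floor below which the seller walks away.

def seller_accepts(offer, anger_floor):
    # Accepting is profitable from $10K up, but a seller precommitted
    # to indignation also rejects anything below the floor.
    return offer >= 10_000 and offer >= anger_floor

def buyer_best_offer(anger_floor):
    # The buyer offers the least the seller will take, in $1K steps;
    # above $20K no deal is profitable for the buyer.
    for offer in range(10_000, 20_001, 1_000):
        if seller_accepts(offer, anger_floor):
            return offer
    return None  # overcommitment: the deal falls through

# A seller with no floor gets squeezed to the minimum:
print(buyer_best_offer(anger_floor=0))       # 10000
# A credible precommitment to walk out below $16K improves the outcome:
print(buyer_best_offer(anger_floor=16_000))  # 16000
# A floor above the buyer's valuation kills the deal entirely:
print(buyer_best_offer(anger_floor=25_000))  # None
```

The last case is the miscalibration risk: a precommitment set past the other side’s valuation turns a profitable deal into no deal at all.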
Now of course, an ideally rational agent with perfect control of his external behavior would play the double game of signaling such precommitment convincingly but falsely, and yielding if the bluff is called (or perhaps not, if there would be consequences for his reputation). This however is normally impossible for humans, so you’re better off with the real precommitment that your emotional propensity to anger provides. Of course, if your emotional propensities are miscalibrated in any way, this can lead to strategic blunders instead of benefits—and the quality of this calibration is a very significant part of what differentiates successful from unsuccessful people.
I agree with what you are saying and would perhaps have described it as “ways that would otherwise have been irrational”.
I obviously need to work on phrasing things more clearly.
Anger functions as a strategic precommitment which improves your bargaining position. Two examples of a precommitment would be as follows: (1) a car buyer going to a dealership with a contract stating that for every dollar they pay over a predetermined price (manufacturer’s price plus average industry margin, presumably) they must pay ten dollars to some other party (who can credibly hold them to it); (2) destroying your means of retreat when you plan aggression against another party, so that you have no motive to hold anything back, as Cortes did when he burned his ships upon landing in Mexico.
Now (1) is more like anger than (2) is because it’s a public signal, but both of them reduce your options to strengthen your position, (1) in a negotiation, (2) as a committed, cohesive group. (1) is very much like throwing the steering wheel out the window in the game of chicken. Pretending your hands are tied and you can’t go above/below the stated price without going further up the chain of command is actually one of those negotiating tricks that are in all the books, like the car salesman who goes “Oh, I’m not sure; I’ll have to consult my boss” and smokes a cigarette in the office before coming back and agreeing to a lower price.
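The chicken analogy can be made concrete with a toy payoff matrix (the payoff numbers are illustrative, not from the source):

```python
# Toy payoff matrix for chicken: (row_move, col_move) -> (row, col) payoffs.
# Crashing is far worse than losing face; winning beats a mutual swerve.
PAYOFFS = {
    ("swerve",   "swerve"):   (0,   0),
    ("swerve",   "straight"): (-1,  1),
    ("straight", "swerve"):   (1,  -1),
    ("straight", "straight"): (-10, -10),
}

def best_response_of_column(row_move):
    # If the row player's move is known in advance (e.g. the steering
    # wheel is gone), the column player picks the higher-payoff move.
    return max(("swerve", "straight"),
               key=lambda col: PAYOFFS[(row_move, col)][1])

# Visibly committed to going straight, you force the other driver to swerve:
print(best_response_of_column("straight"))  # swerve
# A known swerver, by contrast, invites the other driver to exploit him:
print(best_response_of_column("swerve"))    # straight
```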
Swimmer963 asked me:
and I replied
which I think shows at least a weak grasp of how these precommitments can work; one builds a reputation, and, given that we’re meatbags with malleable conceptions of self, gains a reason to make such precommitments even when they cannot affect our reputation.
If “normally impossible” means very, very hard I agree completely; robust self-behavioural modification is hard even for small things, never mind for something as difficult to bring into conscious awareness or control as anger.
Would you consider expanding upon quality of calibration?
Yes, I think we understand each other now. Funny, I had the “must consult my boss” trick pulled on me just a few days ago by a guy whom I called up to haul off some trash. I still managed to make him lower the supposedly boss-mandated price by about 20%. (And when I later thought about the whole negotiation more carefully, I realized I could have probably lowered it much more.)
Regarding the quality of calibration, it’s straightforward. Emotional reactions can serve as strategic precommitments the way we just discussed, and often they also serve as decision heuristics in problems where one lacks the necessary information and processing power for a conscious rational calculation. In both cases, they can be useful if they are well-calibrated to produce strategically sound actions, but if they’re poorly calibrated, they can lead to outright irrational and self-destructive behavior.
So for example, if you fail to feel angry indignation when appropriate, you’re in danger of others maneuvering you into a position where they’ll treat you as a doormat, both in business and in private life. On the other hand, if such emotions are triggered too easily, you’ll be perceived as short-tempered, unreasonable, and impossible to deal with, again with bad consequences, both professional and private.
It seems to me that the key characteristic that distinguishes high achievers is the excellent calibration of their emotional reactions—especially compared to people who are highly intelligent and conscientious and nevertheless have much less to show for it.
No; but it certainly makes it likelier that you will bring yourself to action.
If you’re not angry, what would motivate you to do any of those things? If someone injures me in some way or takes something that I wanted, usually neither hitting them nor spreading gossip about them will in any way help me repair my injury or get back what they took from me. So I don’t. Unless I’m angry, in which case it kind of just happens, and then I regret it because it usually makes the situation worse.
Put simply, sometimes displaying a strong emotional response (genuine or otherwise) is the only way to convince someone that you’re serious about something. This seems to be particularly true when dealing with people who aren’t inclined to use more ‘intellectual’ communication methods.
I think you’re right. Mind you as someone who is interested in communication that doesn’t involve control via strong emotional responses I most definitely don’t reward bad behaviour by giving the other what they want. This applies especially if they use the aggressive tactics of the kind mentioned here. I treat those as attacks and respond in such a way as to discourage any further aggression by them or other witnesses.
This is not to say I don’t care about the other’s experience or desires, nor does it mean that a strong emotional response will rule out me giving them what they want. If the other is someone that I care about I will encourage them towards expressions that actually might work for getting me to give them what they want. I’ll guide them towards asking me for something and perhaps telling me why it matters to them. This is more effective than making demands or attempting to emotionally control.
I’m far more generous than I am vulnerable to dominance attempts and I’m actually willing to consciously make myself vulnerable to personal requests to just behind the line of being an outright weakness because I have a strong preference for that mode of communication. Mind you even this tends to be strongly conditional on a certain degree of reciprocation.
Point being that I agree with the sometimes qualifier; the benefit to such displays (genuine or otherwise) is highly variable. We also have the ability to influence whether people make such displays to us. Partly by the incentive they have and partly by simple screening.
Seems true. Nevertheless I’ve never used it in this way. This may have more to do with my personality than anything: from what I’ve read here, I’m more of a conformist than the average Less Wrong reader, and I put a higher value on social harmony. I hate arguments that turn personal and emotional.
I might hit someone because they’re pointing a gun at me and I believe hitting them is the most efficient way to disarm them. I might hit someone because they did something dangerous and I believe hitting them is the most efficient way to condition them out of that behavior. I might spread gossip about them because they are using their social status in dangerous ways and I believe gossiping about them is the best available way of reducing their status.
None of those cases require anger, and they might even make the situation better. (Or they might not.)
Or, less nobly, I might hit someone because they have $100 I want, and I think that’s the most efficient way to rob them. I might spread gossip about them because we’re both up for the same promotion and I want to reduce their chance of getting it.
None of those cases require anger, either. (And, hey, they might make the situation better, too. Or they might not.)
I suppose the context of my comment was limited to a) me personally (I don’t have any desire to steal money or reduce other people’s chances of promotion) and b) to the situations I have encountered in the past (no guns or danger involved). Your points are very valid though.
If you are dealing with someone in your social circle, or can be seen by someone in your social circle and you want to build or maintain a reputation as someone it is not wise to cross. Even if it’s more or less a one shot game, if you make a point of not being a doormat it is likely to impact your self-image, which will impact your behaviour, which will impact how others treat you.
Even if in the short run retaliating helps nobody and slightly harms you, it can be worth it for reputational and self-concept reasons.
Point taken. I am a doormat. People have told me this over and over again, so I probably have a reputation as a doormat, but that has certain value in itself; I have a reputation as someone who is dependable, loyal, and does whatever is asked of me, which is useful in a work context.
Can you be more specific? What exactly are the dangers of neutralizing our “inner moralizers”?
Also, see my previous comments, which may be applicable here. I speculate that “aspies” free up a large chunk of the brain for other purposes when they ignore “emotional games”, and it’s not clear to me that they should devote more of their cognitive resources toward such games.
Having brought up this topic, I find that I’m reluctant to now do the hard work of organizing my thoughts on the matter. It’s obvious that the ability to moralize has a tactical value, so doing without it is a form of personal or social disarmament. However, I don’t want to leave the answer at that Nietzschean or Machiavellian level, which easily leads to the view that morality is a fraud but a useful fraud, especially for deceptive amoralists. I also don’t want to just say that the human utility function has a term which attaches significance to the actions, motives and character of other agents, in such a way that “moralizing” is sometimes the right thing to do; or that labeling someone as Bad is an efficient heuristic.
I have glimpsed two rather exotic reasons for retaining one’s capacity for “judging people”. The first is ontological. Moral judgments are judgments about persons and appeal to an ontology of persons. It’s important and useful to be able to think at that level, especially for people whose natural inclination is to think in terms of computational modules and subpersonal entities. The second is that one might want to retain the capacity to moralize about oneself. This is an intriguing angle because the debate about morality tends to revolve around interactions between persons, whether morality is just a tool of the private will to power, etc. If the moral mode can be applied to one’s relationship to reality in general (how you live given the facts and uncertainties of existence, let’s say), and not just to one’s relationship to other people, that gives it an extra significance.
The best answer to your question would think through all that, present it in an ordered and integrated fashion, and would also take account of all the valid reasons for not liking the moralizing function. It would also have to ground the meaning of various expressions that were introduced somewhat casually. But—not today.
In another comment on this post, Eugine Nier linked to Schelling. I read that post, and the Slate page that mentions Schelling vs. Vietnam, and it became clear to me that acting morally serves as an “antidote” to these underhanded strategies that count on your opponent being rational. (It also serves as a Gödelian meta-layer to decide problems that can’t be decided rationally.)
If, in Schelling’s example, the guy who is left with the working radio set is moral, he might reason that “the other guy doesn’t deserve the money if he doesn’t work for it”, and from that moral strongpoint refuse to cooperate. Now if the rationalist knows he’s working with a moralist, he’ll also know that his immoral strategy won’t work, so he won’t attempt it in the first place—a victory for the moralist in a conflict that hasn’t even occurred (in fact, the moralist need never know that the rationalist intended to cheat him).
This is different from simply acting irrationally in that the moralist’s reaction remains predictable.
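That “victory without conflict” can be sketched as a toy two-move game. The split sizes and payoffs are my own illustrative numbers, not from the source:

```python
# Toy version of the radio example above. The rationalist decides whether
# to propose a fair or an exploitative split; the other party then accepts
# or refuses. A "moralist" predictably refuses unfair splits even though
# refusing costs him his own payment too.

def response(is_fair, payoff_if_accept, agent):
    # "moralist" refuses anything unfair; "pushover" accepts whatever
    # still leaves him better off than nothing.
    if agent == "moralist" and not is_fair:
        return "refuse"
    return "accept" if payoff_if_accept > 0 else "refuse"

def rationalist_best_move(agent):
    # Fair split pays the rationalist 50; the exploitative split pays 90
    # if accepted, but both get 0 if refused.
    outcomes = {}
    for split, my_payoff, their_payoff in [("fair", 50, 50), ("unfair", 90, 10)]:
        accepted = response(split == "fair", their_payoff, agent) == "accept"
        outcomes[split] = my_payoff if accepted else 0
    return max(outcomes, key=outcomes.get)

# Facing a pushover, cheating pays; facing a known moralist, it doesn't,
# so the rationalist plays fair and no conflict ever occurs:
print(rationalist_best_move("pushover"))  # unfair
print(rationalist_best_move("moralist"))  # fair
```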
So it is possible that moral indignation helps me to prevent other people from maneuvering me into a position where I don’t want to be.
Seems like morality is (inter alia) a heuristic for improving one’s bargaining position by limiting one’s options.
It occurs to me that I’m not less judgmental than the typical human, just judgmental in a different way and less vocal about it (except in the “actions speak louder than words” sense). My main judgement of a person is just whether it is worth my time to talk to / work with / play with / care about that person, and if my “inner moralizer” says no, I simply ignore or get away from them. I’m not sure if I can be considered an “aspie” but I suspect many of them are similar in this way.
Compared to what’s more typical, this method of “moralizing” seems to have all of the benefits you listed (except the last one, “If the moral mode can be applied to one’s relationship to reality in general”, which I don’t understand) but fewer costs. It is less costly in mental resources, and less likely to get you involved in negative-sum situations. I note that it wouldn’t have worked well in an ancestral environment where you lived in a small tribe and couldn’t ignore or get away from others freely, which perhaps explains why it doesn’t come naturally to most people despite its advantages.
See the comments here on the psychological meaning of “kingship”. That’s one aspect of the “relationship to reality” I had in mind. If you subtract from consideration all notions of responsibility towards other people, are all remaining motivations fundamentally hedonistic in nature, or is there a sense in which you could morally criticize what you were doing (or not doing), even if you were the only being that existed?
There is a tendency, in discussions here and elsewhere about ethics, choice, and motivation, either to reduce everything to pleasure and pain, or to a functionalist notion of preference which makes no reference to subjective states at all. Eliezer advocates a form of moral realism (since he says the word “should” has an objective meaning), but apparently the argument depends on behavior (in the real world, you’d pull the child on the train tracks out of harm’s way) and on the hypothesized species-universality of the relevant cognitive algorithms. But that doesn’t say what is involved in making the judgment, or in making the meta-judgment about how you would act. Subjectively, are we to think of such judgments as arising from emotional reactions (e.g. basic emotions like disgust or fear)? It leaves open the question of whether there is a distinctive moral modality—a mode of perception or intuition—and my further question would be whether it only applies to other people (or to relations between you the individual and other people), or whether it can ever apply to yourself in isolation. In culture, I see a tendency to regard choices about how to live (that don’t impact on other people) as aesthetic choices rather than ethical choices.
Mostly I have questions rather than answers here.
With Aspies it’s probably less that they won’t take part in emotional games than that they can’t.
I’m not sure I’m correctly interpreting what you’re referring to here. Could you give a concrete example?
The Zen thing to do would be to flame you with absurd viciousness for being excessively vague in your own request for clarification, in the hope that your response would be combative (rather than purely analytical), but still appropriate—because then you would have provided the example yourself. But that’s a high-risk conversational strategy. :-)
Can you be more specific?
Agreed, although I don’t know that I have any Asperger’s. Here’s a sample dialogue I actually had that would have gone better if I had been in touch with my inner moralizer. I didn’t record it, so it’s paraphrased from memory:
X: It’s really important to me what happens to the species a billion years from now. (X actually made a much longer statement, with examples.)
Me: Well, you’re human, so I don’t think you can really have concerns about what happens a billion years from now because you can’t imagine that period of time. It seems much more likely that you perceive talking about things a billion years off to be high status, and what you really want is the short term status gain from saying you have impressive plans. People aren’t really that altruistic.
X: I hate it when people point out that there are two of me. The status-gaming part is separate from the long-term planning part.
Me: There is only one of you, and only one of me.
X: You’re selfish! (This actually made more sense in the real conversation than it does here. This was some time ago and my memory has faded.)
Me: (I exited the conversation at this point. I don’t remember how.)
I exited because I judged that X was making something he perceived to be an ad-hominem argument, and I knew that X knew that ad-hominem arguments were fallacious, and I couldn’t deal with the apparent dishonesty. It is actually true that I am selfish, in the sense that I acknowledge no authority over my behavior higher than my own preferences. This isn’t so bad given that some of my preferences are that other people get things they probably want. Today I’m not sure X was intending to make an ad-hominem argument. This alternative for my last step would have been better:
Me if I were in touch with my inner moralizer: Do I correctly understand that you are trying to make an ad-hominem argument?
If I had taken that path, I would either have clear evidence that X is dishonest, or a more interesting conversation if he wasn’t; either way would have been better.
When I visualize myself taking the alternative I presently prefer, I also imagine myself stepping back so I would be just out of X’s reach. I really don’t like physical confrontation.
My original purpose here was to give an example, but the point at the end is interesting: if you’re going to denounce, there’s a small chance that things might escalate, so you need to get clear on what you want to do if things escalate.
In what sense are you using the word imagine, and how hard have you tried to imagine a billion years?
I have a really poor intuition for time, so I’m the wrong person to ask.
I can imagine a thousand things as a 10x10x10 cube. I can imagine a million things as a 10x10x10 arrangement of 1K cubes. My visualization for a billion looks just like my visualization for a million, and a year seems like a long time to start with, so I can’t imagine a billion years.
In order to have desires about something, you have to have a compelling internal representation of that something for the desire to attach to.
X didn’t say “I can too imagine a billion years!”, so none of this pertains to my point.
Would it help to be more specific? Imagine a little cube of metal, 1mm wide. Imagine rolling it between your thumb and fingertip, bigger than a grain of sand, smaller than a peppercorn. Yes?
A one-litre bottle holds 1 million of those. (If your first thought was the packing ratio, your second thought should be to cut the corners off to make cuboctahedra.)
Now imagine a cubic metre. A typical desk has a height of around 0.75m, so if its top is a metre deep and 1.33 metres wide (quite a large desk), then there is 1 cubic metre of space between the desktop and the floor.
It takes 1 billion of those millimetre cubes to fill that volume.
Now find an Olympic-sized swimming pool and swim a few lengths in it. It takes 2.5 trillion of those cubes to fill it.
Fill it with fine sand of 0.1mm diameter, and you will have a few quadrillion grains.
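The arithmetic behind this ladder of volumes is easy to check. A minimal Python sketch, working entirely in cubic millimetres; the pool dimensions (50 m x 25 m x 2 m) are my assumption for a typical Olympic pool:

```python
# All volumes in cubic millimetres; each little metal cube is 1 mm^3.
litre = 100 * 100 * 100            # a litre is a 10 cm cube
cubic_metre = 1000 ** 3            # 1 m = 1000 mm
pool = 50_000 * 25_000 * 2_000     # assumed pool: 50 m x 25 m x 2 m deep

print(litre)        # 1000000 -- a million cubes per litre
print(cubic_metre)  # 1000000000 -- a billion cubes per cubic metre
print(pool)         # 2500000000000 -- 2.5 trillion cubes per pool

# Idealise each 0.1 mm sand grain as a tiny cube: ten fit along each mm.
grains_per_mm3 = 10 ** 3
print(pool * grains_per_mm3)       # 2500000000000000 -- 2.5 quadrillion grains
```

Each step up the ladder is a factor of 1000 or so, which is exactly why million, billion, and trillion are hard to tell apart by imagination alone.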
A bigger problem I have with the original is where X says “It’s really important to me what happens to the species a billion years from now.” The species, a billion years from now? That sounds like a failure to comprehend just what a billion years is: the time that life has existed on Earth so far. I confidently predict that a billion years hence, not a single presently existing species, including us, will still exist in anything much like its present form, even imagining “business as usual” and leaving aside existential risks and singularities.
Excellent. I can visualize a billion now. Thank you.
First, I imagine a billion bits. That’s maybe 15 minutes of high quality video, so it’s pretty easy to imagine a billion bits. Then I imagine that each of those bits represents some proposition about a year—for example, whether or not humanity still exists. If you want to model a second proposition about each year, just add another billion bits.
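As a sanity check on that bit budget (taking “high quality” to mean whatever bitrate makes the 15-minute figure come out; the numbers below are just the implied ones, not a claim about any particular codec):

```python
bits = 10 ** 9
print(bits // 8)                   # 125000000 -- a billion bits is 125 MB

# Bitrate implied by "a billion bits is ~15 minutes of video".
seconds = 15 * 60
implied_bitrate = bits / seconds
print(f"{implied_bitrate / 1e6:.2f} Mbit/s")  # 1.11 Mbit/s
```

Roughly 1.1 Mbit/s is in the range of compressed standard-definition video, so the 15-minute figure is at least the right order of magnitude; and at one bit per year, that single chunk of data spans a billion years.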
Perhaps I don’t understand your usage of the word ‘imagine’, because this example doesn’t really help me ‘imagine’ them at all. I can imagine their result (the high quality video), sure, but not the bits themselves.
I can’t imagine the difference between sixteen million dollars and ten million dollars—in my imagination, the stuff I do with the money is exactly the same. I definitely prefer 16 to 10 though. In much the same way, my imagination of a million dollars and a billion dollars doesn’t differ too much; I would also prefer the billion. I don’t know if I need to imagine a billion years accurately in order to prefer it, or have concerns about it becoming less likely.
One of the great benefits of being in touch with the inner moralizer is that it can warn you about how what you say will be interpreted by another. It would probably recommend against speaking your first paragraph, for example.
I suspect the inner moralizer would also probably not treat the “You’re selfish” as an ad hominem argument. The label technically applies, but from within a moral model what is going on isn’t of the form of the ad hominem fallacy. It is more of the form:
Not expressing and expecting others to express a certain moral position is bad.
You are bad.
You should fear the social consequences of being considered bad.
You should change your moral position.
I’m not saying the above is desirable reasoning; it’s annoying and has its own logical problems. But it is also a different underlying mistake from the typical ad hominem.
If it works that way, I don’t want it. My relationship with X has no value to me if the relevant truths cannot be told, and so far as I can tell that first paragraph was both true and relevant at the time.
Now if that had been a coworker with whom I needed ongoing practical cooperation, I would have made some minimal polite response just like I make minimal polite responses to statements about who is winning American Idol.
Okay, there might be some detailed definition of ad hominem that doesn’t exactly match the mistake you described. I presently fail to see how the difference is important. The purpose of both ad hominem and your offered interpretation is to use emotional manipulation to get the target (me in this example) to shut up. Would I benefit in some way from making a distinction between the fallacy you are describing and ad hominem?
Could you be more specific? Is the “inner moralizer”, as opposed to, say, the “inner consequentialist”, a virtue given the human condition (how the brain is wired), or is it an “objectively good solution given limited cognitive resources”? Is your statement about humans, or about moralization?
I am still thinking this through. It’s a very subtle topic. But having begun to think about it, the sheer number of arguments that I have found (which are in favor of preserving and employing the moral perspective) encourages me to believe that I was right—I’m just not sure where to place the emphasis! Of course there is such a thing as moral excess, addiction to moralizing, and so forth. But eschewing moral categories is psychologically and socially utopian (in a bad sense), the intersubjective character of the moral perspective has a lot going for it (it’s cognitively holistic since it is about whole agents criticizing whole agents; you can’t forgive someone unless you admit that they have wronged you; something about how you can’t transcend the moral perspective, in the attractive emotional sense, unless you understand it by passing through it)… I wouldn’t say it’s just about computational utility.
I must clarify that I’ve been concerned with contrasting the function of moralization with the mechanism of moralization, which is ingrained so deeply that without enough praise children develop dysfunctionally, etc.