There is some ineffable something in those who are distinctly uncooperative with requests to define morality or otherwise have a rational discussion on the matter, both here and on all forums where I’ve discussed morality, and I think you’ve hit on what that something is. It is the fear of nihilism, the fear that without their moral compass they might suddenly want to do evil, deplorable things because those things would suddenly be A-okay.
What they don’t see, in my opinion, is that it is their very dread at such a possibility that is really what is keeping them from doing those things. “Morality” provides no additional protection; it merely serves as after-the-fact justification of the sentiments that were already there.
We don’t cringe at the thought of stealing from old ladies because it’s wrong, but rather we call it wrong to steal from old ladies because we cringe at the thought.
We don’t cringe at the thought of stealing from old ladies because it’s wrong, but rather we call it wrong to steal from old ladies because we cringe at the thought.
This is crisp, clear, and one of the best short explanations of the issue I’ve read.
Does anyone know of an example where arguing objective morality with someone who is doing evil things made them stop?
(ETA: The point being that I agree with the parent and grandparent posts that people who won’t rationally discuss morality are often afraid of things like this. I’m just wondering whether the belief underlying that fear is true or false.)
On a trivial scale, I’ve revised quite a few opinions based on objective rational arguments that my action was causing harm in ways I had previously been unaware of. The example that immediately comes to mind is modifying my vocabulary to try and avoid offensive words. The concept of privilege and “isms” in general, really.
Does anyone know of an example where arguing objective morality with someone who is doing evil things made them stop?
I would expect that peer pressure can make people stop doing evil things (either by force, or by changing their cost-benefit calculation of evil acts). Objective morality, or rather a definition of morality consistent within the group, can help organize efficient peer pressure. If everyone obeys the same morality, they should be more ready to defend it, because they know they will be in the majority.
Without a shared morality, and its twin, hypocrisy, organizing peer pressure on wrongdoers is difficult.
I would expect that peer pressure can make people stop doing evil things (either by force, or by changing their cost-benefit calculation of evil acts). Objective morality, or rather a definition of morality consistent within the group, can help organize efficient peer pressure.
So in a conversation between a person A who believes in objective morality and a person B who does not, a possible motive for A is to convince onlookers by any means possible that objective morality exists. Convincing B is not particularly important, since effective peer pressure merely requires having enough people on board, not any particular individual. In those conversations, I always had the role of B, and I assumed, perhaps mistakenly, that A’s primary goal was to persuade me since A was talking to me. Thank you for the insight.
So in a conversation between a person A who believes in objective morality and a person B who does not, a possible motive for A is to convince onlookers by any means possible that objective morality exists.
“Any means possible” is a euphemism for “really big stick”!
Without a shared morality, and its twin, hypocrisy, organizing peer pressure on wrongdoers is difficult.
Hm. It seems like there’s more to say about that.
For example, the peer pressure to participate in picking on low-status figures in a high-school class certainly appears to be strong, and not difficult to organize—indeed, it occurs spontaneously.
I suppose I’m willing to accept that those who refuse to participate aren’t “wrongdoers”, but I’m not sure why that should matter; if there’s a distinction between wrongdoers and other norm-violators you are calling out here, it would benefit from being called out more explicitly.
Conversely, I’m also willing to accept that picking on the low-status figures is the shared morality in this case, but in that case I think the whole conversation becomes less connotationally misleading if we talk about shared behavioral norms and leave the term “morality” (let alone “objective morality”) out of it.
I would say that “becoming strong and oppressing the weak” is the default goal. You don’t need any kind of morality here, it’s just the biology of a social species. Being strong has natural rewards.
Morality is what allows you to have alternative goals. Morality means that “X is important too”, sometimes even more important than being strong (though usually it is good to both be strong and do X). Morality gives you social rewards for doing X.
Being strong is favored by genes, doing X is favored by (X-promoting) memes. In the absence of memes (more precisely in absence of strong memes saying what is right and wrong), humans fall back on their natural social behavior, the pecking order. In the presence of such memes, humans try to do X; and also at the same time secretly try to be strong, but they cannot use too obvious means for that.
Technically, we could call the pecking order a “null morality”; like the “null hypothesis” in statistics.
That’s forgetting that morality doesn’t come from nowhere, it comes from genes too. Because life is full of iterated prisoner’s dilemmas, because gene survival requires the survival of your close relatives, because of the way the brain is shaped (like the fact that empathy very likely comes, at least in part, from the way we reuse our own brain circuits to predict the behavior of others).
Moral theories are “artificial constructs”, as are all theories. They are generalizations, they are abstractions, they can conflict with the “genetic morality”, and yes, memes play a huge role in morality. But the core of morality comes from our genes—care for our family, “tit-for-tat with initial cooperation” as the winning strategy for IPD, empathy, …
That’s forgetting that morality doesn’t come from nowhere, it comes from genes too.
Even if ultimately everything comes from the genes, we have to learn some things, while other things come rather automatically.
We educate children to behave nicely to others—they don’t get this ability automatically just because of their genes. On the other hand, children are able to create “Lord of the Flies”-like systems at school without being taught so. Both behaviors are based on evolution, both promote our genes in certain situations, but still one is the default option, and the other must be taught (is transferred by memes).
And by the way, the Prisoner’s Dilemma is not a perfect model of reality, and the differences are very relevant for this topic. The Prisoner’s Dilemma and Iterated Prisoner’s Dilemma are modelled as a series of 1:1 encounters, where information remains hidden between the interacting players; each player tries to maximize their own utility; and each encounter is scored independently. In real life, people observe what others are doing even in encounters they are not part of; people have families and are willing to sacrifice some of their utility to increase their family’s utility; and the results of one encounter may influence your survival or death, your health, your prestige etc., which influence the rules of the following encounter. This results in new strategies, such as “signal membership in a powerful group G, play tit-for-tat with initial cooperation against members of G, and defect against everyone else”, which will work if the group G has a majority. Now the problem is how people will agree on what the right group G is. In small societies, family can be such a group; in larger societies memetic similarity can play the same role—if you consider that humans are not automatically strategic, why not make a meme M which teaches them this strategy and at the same time defines group G as “people who share the meme M”? Here comes morality, religion, football team fans, et cetera.
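The group-conditional strategy described above can be sketched as a toy simulation. Everything here is an illustrative assumption: the payoff numbers (3, 0, 5, 1) are just the conventional textbook values, and `play_match` is a hypothetical helper, not anything from this thread.

```python
# Toy sketch of "play tit-for-tat (starting with cooperation) against
# members of group G, defect against everyone else", where both players
# use this strategy and differ only in group membership.

PAYOFF = {  # (my move, their move) -> my score; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play_match(a_in_group, b_in_group, rounds=10):
    """Return (score_a, score_b) after an iterated match."""
    last_a, last_b = "C", "C"   # tit-for-tat starts by cooperating
    score_a = score_b = 0
    for _ in range(rounds):
        # Defect against non-members; otherwise mirror the opponent's
        # previous move (tit-for-tat with initial cooperation).
        move_a = ("C" if last_b == "C" else "D") if b_in_group else "D"
        move_b = ("C" if last_a == "C" else "D") if a_in_group else "D"
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

print(play_match(True, True))    # two G-members cooperate throughout
print(play_match(True, False))   # a member meets an outsider
print(play_match(False, False))  # two outsiders defect throughout
```

In a round-robin population where G holds the majority, members collect the high mutual-cooperation payoff from each other in most of their matches, while outsiders mostly collect the mutual-defection payoff, which is why the strategy works only when G is large enough.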
I would certainly agree that without an agreed-upon set of behavioral norms, coordinating peer pressure is difficult. I would also agree that there always tends to be some set of behavioral norms, often several conflicting sets, some of which we may not want to label “morality”.
It is not clear to me that the distinction you want to draw between “natural” and “alternative” norms is quite as clearcut as you make it sound. Nor is it clear to me that that distinction maps quite as readily to genetic vs. cultural factors as you imply here.
But I would certainly agree that some norms are more easily arrived at (that is, require less extensive training to impart) than others, and that in the absence of strong enforcement of the harder to impart norms (what you’re describing as “alternative goals”/”morality” propogated by memes) the easier-to-impart ones (what you describe as “natural” and genetically constrained) tend to influence behavior more.
I guess my comment seems too dichotomic; I did not intend it that way. Basically I wanted to say that if you have e.g. children without proper upbringing (or in an environment that allows them to act against their upbringing), their behavior easily collapses to something most dramatically described in the “Lord of the Flies” book, which is rather similar to what social animals do: establishing group hierarchy by using intra-group violence and threats. I call it “natural” because this is what happens unless people use some strategy to prevent it.
But of course both building the pecking order and the desire to avoid the negative consequences for people at the bottom are natural, i.e. driven by self-interest of our genes; it’s just that the former is easier to do, while the latter requires some thinking, strategy, coordination, infrastructure (laws, police, morality, religion, etc.) to be done successfully. It feels like worth doing, but it can be done in a few different ways, and we often disagree about the details.
It’s like in the Prisoner’s Dilemma: the choice to defect is in the short term (one turn) always better than to cooperate; and if you imagine an agent without a memory, or unable to distinguish between individual players, then in a world consisting of such agents, always defecting would be the winning strategy. Only the possibility to remember and iterate allows the strategy of punishment, and now “tit-for-tat with initial cooperation” becomes a successful implementation of the more general principle “cooperate with those who cooperate with you, and punish those who defect”.
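Both halves of this claim can be checked in a few lines: in a single turn defection dominates, yet once memory and iteration are available, mutual tit-for-tat outscores mutual defection. This is a minimal sketch; the payoff numbers are the conventional illustrative values, not something from the thread.

```python
# One-shot Prisoner's Dilemma payoffs to the row player
# (conventional illustrative values).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(strategy_a, strategy_b, rounds=10):
    """Iterate the game; each strategy sees only the opponent's history."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

# In any single turn, defecting beats cooperating against either reply:
assert PAYOFF[("D", "C")] > PAYOFF[("C", "C")]  # 5 > 3
assert PAYOFF[("D", "D")] > PAYOFF[("C", "D")]  # 1 > 0

print(play(tit_for_tat, tit_for_tat))       # mutual cooperation
print(play(always_defect, always_defect))   # mutual defection
print(play(tit_for_tat, always_defect))     # TFT loses one round, then punishes
```

Over ten rounds, mutual tit-for-tat yields 30 points each versus 10 each for mutual defection, even though defection still wins any individual encounter against a cooperator.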
But in real life sometimes those who can punish are different from those who have been harmed. For example, if someone steals from you, you will try to punish them—but for a society without theft, it is necessary that people punish even those who stole from someone else. (Otherwise the thieves would just have to carefully select their targets among the weaker people.) Here we have a problem, because engaging in punishment has some costs (if you see someone stealing and try to stop them, the thief may hurt you) and no direct benefit for the punisher. This can be fixed by a system where people are rewarded for punishing those who have harmed someone else. For such a system to work, it is necessary to have an agreement about what is harm, what is the proper punishment, and what is the reward (social esteem for the hero, a salary for the policeman, etc.). This is difficult to organize.
Yes, I continue to agree that without an agreed-upon set of behavioral norms, coordinating peer pressure is difficult, and that some norms are easier to coordinate (more natural, if you like) than others.
And yes, the dichotomy is part of what I’m skeptical about. Even in a “pecking order” environment, for example, I suspect a norm saying that low-status tribe members don’t get to steal from high-status members is relatively easy to coordinate. That’s not the same as my culture’s notion of theft, but neither is it the same as a complete absence of a notion of theft. I suspect it’s much more of a continuum, and much more variable, than you make it sound.
I agree there is a continuum of possibilities; that’s how the things were developed. But it does not mean that all parts of the continuum exist in reality with the same frequency, or even that the frequency is a monotonic function.
I guess I have trouble explaining what I mean, so I will use a metaphor—the computer. You can have no computer. You can use fingers. You can use pebbles. You can use an abacus. You can have a mechanical calculator, a vacuum-tube calculator, or some kind of integrated-circuit computer. It’s not literally a continuum, but there are many steps. But now make a histogram of how often people around you use this or that… and you will probably find that most people use some integrated-circuit computing machine, or nothing. There is very little in between. So in theory, there is a continuum, but it can be approximated as just having two choices: an integrated-circuit computer, or no computing machine. There is very little incentive to use an abacus, or even to invent one. You don’t upgrade from “no calculator” to “integrated-circuit calculator” by discovering the abacus etc., you just go to a shop and buy one. And even the people who design and build integrated-circuit calculators don’t start from the abacus. This part in the middle does not exist anymore, because compared with both extremes, it is not cost-effective.
It’s not the same with morality, but my point is that there is so much morality around (it feels kind of funny when I write it) that very few people are inventing morality from scratch. You copy it, or you ignore it; or you copy some parts, or you copy it and forget some parts. Inventing it all in one lifetime is almost impossible. So to me it seems safe to say that the higher levels must be carried by memes. It’s like saying that you can find pebbles or invent an abacus, but you have to buy an integrated-circuit computer, unless you are an exceptional person.
I agree with you that very few behavioral norms are invented from scratch, and that the more complex ones pretty much never are, and that they must therefore be propagated culturally.
That said, your analogy is actually a good one, in that I have the same objection to the analogy that I had to the original.
Unlike you, I suspect that there’s quite a lot of in between: some people use integrated-circuit computers, some people (often the same people) use pen and paper, some people use a method of successive approximation, some people count on their fingers. It depends on the people, it depends on the kind of calculation they are doing, and it depends on the context in which they’re doing it; I might open an Excel spreadsheet to calculate 15% of a number if I’m sitting in front of my computer, I might calculate it as “a tenth plus half of a rounded-up tenth” if I’m working out a tip at a restaurant, I might solve it with pencil and paper if it’s the tenth in a series of arithmetic problems I’m solving on a neuropsych examination.
When you say “most people use some integrated-circuit computing machine, or nothing” you end up excluding a wide range of actual human behavior in the real world.
Analogously, I think that when you talk about the vast excluded middle between “morality” and “pecking order” you exclude a similarly wide range of actual human behavior in the real world.
When that range is “approximated as just having two choices” something important is lost. If you have some specific analytical goal in mind, perhaps the approximation is good enough for that goal… I’m afraid I’ve lost track of what your goal might be, here. But in general, I don’t accept it as a good-enough approximation; the excluded middle seems worthy of consideration.
There are occasional religious conversions, and if you follow the thread up to the link below I apparently got “syllogism” to give up on preference utilitarianism, whatever that is:
I suspect what people are afraid of is being caught out in holding an unarguable position.
Both hypotheses make sense to me: perhaps they’re afraid that it won’t work to persuade people if they don’t defend it, and perhaps it’s simpler and they know they have no position to argue from but they still don’t want to lose.
For better or worse, I think Eugine Nier was arguing a point about morality identical to one of yours (Peterdjones) and he started dodging my questions at:
If you are aware that people are dodging because they have an unarguable position, perhaps you don’t want to participate in that. Do you want to help him out and answer the question I asked there?
I’m inclined to believe you, but his biography on Wikipedia describes a long and varied life, and in a few minutes of examination I did not find any clear examples of arguments about morality persuading anybody to stop doing evil. I’m sure it’s in there somewhere. Which event(s) in his life are you talking about?
Sorry, if that’s all you have, it’s not what I’m looking for. What evil did he stop doing because he converted to Christianity? The worst things I see in the biography there were teaching rhetoric and waiting patiently for his 11-year-old fiancee to turn 13 so he could marry her. Those activities were both consistent with cultural norms of the time. Neither seems to have the right flavor to make me want to try arguing morality with someone who is pointing a gun at my head.
He also gave up sleeping with various mistresses, however, given current culture, I doubt you think that is evil.
Arguing morality with someone who is holding a gun to your head doesn’t sound like a very smart thing to do. The most I have done while being held up was provide the assailant a set of scriptures with a number to call if he wanted to discuss morality while not holding a gun. If they are holding a gun or otherwise threatening current violence to you then that is usually not the time to be discussing morality as they are most likely not acting rationally.
Discussing morality with someone that is suicidal can sometimes help. Still, one should call for professional assistance if it is available.
One problem with arguing rationality with someone who has a gun to your head is time: a rational argument for a substantial change tends to take a fair amount of time. You might be able to convince someone with quick “sound bites”, but I’m not sure I’d really call that a rational argument.
We don’t cringe at the thought of stealing from old ladies because it’s wrong, but rather we call it wrong to steal from old ladies because we cringe at the thought.
I think this analysis oversimplifies the fear of nihilism. For example, we punish those who steal from old ladies, because the stealing is wrong. But we don’t punish everyone whose behavior makes us cringe. At least I hope not.
People who fear moral nihilism are not worried about losing control over their own behavior, they are worried about losing control over other people’s behavior.
we punish those who steal from old ladies, because the stealing is wrong.
I would say we punish those who steal from old ladies because we would prefer the old ladies not be stolen from. It is that preference, the subjective value we all (except the thief of course) place on a society where the meek are not abused by criminals, that causes us to call that behavior “wrong”.
The evolutionary origins of that preference seem pretty obvious. In any group of social animals there will be one or two top physical competitors, and the remainder would be subject to their will. Of those many weaker individuals, the ones who survived to procreate were those who banded together to make bully-free tribes.
Ok, so I punish so as to achieve my preference that old ladies not be stolen from. Yet I do not punish to achieve my preferences in other matters. For that matter, I do not punish to transfer funds from healthy young males to impoverished old ladies who have not been stolen from, though the consequentialist results seem so parallel. I would prefer that old ladies not be impoverished, regardless of whether they became impoverished by theft.
So, if you can explain why I feel the urge to punish in one case but not the others, you are on your way to “solving metaethics”.
I do not punish to transfer funds from healthy young males to impoverished old ladies who have not been stolen from, though the consequentialist results seem so parallel.
I would think that this is usually referred to as “taxation”, and is actually practiced on a fairly regular basis?
The extreme point where we try to make sure everyone receives according to their needs, and provides according to their ability, is “communism”, and seems to be widely considered as a failure state.
“Socialism” seems to have emerged as a compromise between the goal of taxation, and the desire to avoid the communist failure state.
I feel like I’m obviously trivializing something complex here, but I’m genuinely not sure what I’m missing.
I feel like I’m obviously trivializing something complex here, but I’m genuinely not sure what I’m missing.
One difference is that “Perplexed” is talking about anger as an individual emotional response, and you’re finding it analogous to taxation which is something that happens in society, rather than individually, and generally doesn’t have strong emotions with it.
I’m always inclined to classify things like this as psychology. “Perplexed” has an emotional response, that’s fine, we can ask a psychologist to explain it, but I don’t see a useful role for metaethics in that, perhaps because I don’t really know a referent for the word “metaethics”.
One difference is that “Perplexed” is talking about anger as an individual emotional response …
Uh, no I’m not. I haven’t even mentioned anger. I’m talking about punishment. Which, as a moral realist, I’m claiming is a moral issue. And, given my particular flavor of moral realism, that means that there is a closely related practical issue (involving deterrence, etc.).
I am not interested in explaining anger as an instinctive signal that it is time to punish—though I’m sure evolutionary psychologists can do so. I’m far more interested in explaining punishment as a moral and practical response to some particular class of actions—actions that I call “immoral”.
As to what handoflixue is missing, I would say that he probably wasn’t paying attention in school when communism and socialism were defined, or else he missed the fact that exhibitions of political “attitude” are not appreciated here. Compared to that, his suggestion that redistributive taxation is something like the kind of punishment I claimed doesn’t exist… well, that suggestion seems rather innocent.
One difference is that “Perplexed” is talking about anger as an individual emotional response …
Uh, no I’m not. I haven’t even mentioned anger. I’m talking about punishment.
Yes, you’re right (in the sense that you’re making a true statement about what you said before), and I’m wrong. I misunderstood your position.
After acknowledging that I misunderstood you, I’d like to make use of my now probably-correct understanding of what you meant, but unfortunately I have nothing useful to say. I’d need a definition of “moral reality” to start with, assuming that’s what you think you are perceiving as a moral realist.
For that matter, I do not punish to transfer funds from healthy young males to impoverished old ladies who have not been stolen from
There are people who feel there is a moral imperative to do just that. Likewise, there is wide disagreement over what deserves punishment. An orthodox Jew, a Muslim, a Catholic, a Lutheran, a Communist, and a Vulcan walk into a bar… I’m sure we can all see the potential for punchlines.
You may punish action X which violates your preferences because you want to see people punished for action X. You could simultaneously choose not to punish action Y which violates your preferences, because for whatever reason you would prefer people not be punished for it. Others could disagree, and people often do disagree on what deserves punishment and what doesn’t.
Neither side in such a debate is objectively incorrect. Each would indeed prefer their position of punishment or non-punishment.
Neither side in such a debate is objectively incorrect.
And a moral realist, such as myself, thinks you are dead wrong about that. I have offered an objective criterion for choosing sides in the debate, as well as a justification for that criterion that is ultimately based on satisfying people’s preferences to the greatest extent possible. Yet you are unimpressed and go back to reciting your original opinions.
I have offered an objective criterion for choosing sides in the debate, as well as a justification for that criterion that is ultimately based on satisfying people’s preferences to the greatest extent possible.
I couldn’t find where you did this in the parents. Could you link or repeat?
Thanks. Interesting thread. It’s a nice hope. It makes me feel good to imagine that it works, and our alien overlords will therefore be fair :)
Not much for me hangs in the balance with this question. I already know that if I feel like I’m a good person, it feels good. But of course I’m interested in how this self-satisfaction lines up with how people are generally judged. I guess it would become crucial if I became more aggressive. Most people are really cautious (at least as far as their image goes).
Ok, so I punish so as to achieve my preference that old ladies not be stolen from. Yet I do not punish to achieve my preferences in other matters.
I’ll bet you do punish people if those matters make you (and enough others) as angry as old ladies being stolen from does.
For that matter, I do not punish to transfer funds from healthy young males to impoverished old ladies who have not been stolen from, though the consequentialist results seem so parallel.
Anyone who votes for welfare does this. (Not saying this is right or wrong, just a fact.)
So, if you can explain why I feel the urge to punish in one case but not the others, you are on your way to “solving metaethics”.
If something makes you angry, and it is socially acceptable to punish it, you may well decide to punish it. I don’t see anything to solve.
If something makes you angry, and it is socially acceptable to punish it, you may well decide to punish it. I don’t see anything to solve.
Hmmm. Perhaps you don’t see the problem because you think like a scientist. Come up with a causal explanation of why people sometimes punish, and you are done.
I on the other hand, am thinking like an engineer. Simply understanding the universe is pointless. I want to use my understanding to change the universe so that it is more to my taste. Therefore, I want to know when I should punish.
We probably both agree that evolution “invented” anger precisely because organisms that punish at the right times are more successful than organisms that punish at the wrong times or perhaps never punish at all. So anger causes punishment. A scientist is satisfied. But there is more to it than that.
Why did natural selection ‘choose’ to make me angry at some things and not make me angry at other things? Can I decide for myself whether to punish, ignoring the cue of my anger? Will I be more successful if I use my reason to make those decisions rather than using my emotions? And does any of this have anything to do with this mysterious thing ‘morality’ that people keep talking about?
I can understand people not being curious about such questions. But I have trouble understanding why people at a rationality blog site are not only incurious, but so often inclined to brag about their lack of interest!
The thought of mentioning other reasons why to punish (such as to make people behave more to your liking) did cross my mind, but I thought it was obvious enough. In fact, there are still other reasons to punish. Someone might reply to your post, “You are thinking like an engineer. I am thinking like a social animal. I want to know when I should punish: I want to use my understanding of social dynamics to make people respect me more. I want to know what it signals about me when I punish someone.”
As I said here, there are a lot of different reasons to use moral language (most of them sort of dark-arts-ish, which is why I guess that post was downvoted), and likewise there are a lot of different reasons to punish.
Do the evolutionary origins of rationality mean that we can eliminate truth and rationality in favour of belief and opinion? Can the arguments for moral relativism not be redeployed as arguments for alethic relativism?
I cringe at the thought of stealing from an old lady, but I get flippin’ angry at the thought of someone else stealing from an old lady. That is why we punish those who steal from old ladies, but don’t (usually) punish any random person who makes us cringe.
People who fear moral nihilism are not worried about losing control over their own behavior, they are worried about losing control over other people’s behavior.
Is the threat from philosophy really a concern? We control others through punishment. A nihilist is still going to be powerfully incentivized by the threat of punishment.
Is the threat from philosophy really a concern? We control others through punishment.
We (or rather society) control our members through a public theory of morality, which can also be thought of as a moral Schelling point. Punishments are used to deal with the people who disregard the public theory and in so doing help to maintain belief in it. However, without a public theory that most people believe, or at least don’t openly disbelieve, the system for enforcing punishment quickly breaks down.
What people abstractly philosophise about is not all that important. It is what they unconsciously associate with punishment or lowered status that will control their behavior.
It is what they unconsciously associate with punishment or lowered status that will control their behavior.
Which is in the long run influenced by conscious beliefs and abstract philosophy. The history of revolutions should be enough to show that consciously held beliefs and philosophies matter.
Sometimes it is necessary to control what people say through punishment as well as what they do. In some cases punishment via verbal abuse—and the associated threat of lowered status—is enough to exert the desired control.
I think Constant has a good point. When it comes to morality, and controlling people’s behavior it isn’t philosophical reasoning that people turn to, even though solid philosophy usually resolves to reasonably good outcomes. It is punishment, threat and power. Because that is what works for making people do what you want them to do. (Well, reward helps too—but certainly isn’t what ‘morality’ is all about!)
People who fear moral nihilism are not worried about losing control over their own behavior, they are worried about losing control over other people’s behavior.
People who fear irrationality are not worried about losing control over their own beliefs, they are worried about losing control over other people’s beliefs.
Ethics and aesthetics have strong parallels here. Consider this quote from Oscar Wilde:
For we who are working in art cannot accept any theory of beauty in exchange for beauty itself, and, so far from desiring to isolate it in a formula appealing to the intellect, we, on the contrary, seek to materialise it in a form that gives joy to the soul through the senses. We want to create it, not to define it. The definition should follow the work: the work should not adapt itself to the definition.
Whereby any theory of art...
merely serves as after-the-fact justification of the sentiments that were already there.
But I had formerly been a great Lover of Fish, & when this came hot out of the Frying Pan, it smelt admirably well. I balanc’d some time between Principle & inclination: till I recollected, that when the Fish were opened, I saw smaller Fish taken out of their Stomachs:--Then, thought I, if you eat one another, I don’t see why we mayn’t eat you. So I din’d upon Cod very heartily and continu’d to eat with other People, returning only now & then occasionally to a vegetable Diet. So convenient a thing it is to be a reasonable Creature, since it enables one to find or make a Reason for every thing one has a mind to do
In the sociological “let’s all decide what norms to enforce” sense, sure, a lack of “morality” won’t kill anyone. But in the more speculative-fictional “let’s all decide how to self-modify our utility functions” sense, throwing away our actual morality—the set of things we do or do not cringe about doing—in ourselves, or in our descendants, is a very real possibility, and (to some people) a horrible idea to be fought with all one’s might.
What I find unexpected about this is that libertarians (the free-will kind) tend to think in the second sense by default, because they assume that their free will gives them absolute control over their utility function, so if they manage to argue away their morality, then, by gum, they’ll stop cringing! It seems you first have to guide people into realizing that they can’t just consciously change what they instinctively cringe about, before they’ll accept any argument about what they should be consciously scorning.
It seems you first have to guide people into realizing that they can’t just consciously change what they instinctively cringe about, before they’ll accept any argument about what they should be consciously scorning.
But you can consciously change what you “instinctively” cringe about. Otherwise, people couldn’t, say, get over their fear of public speaking.
Sure, there might be some things you can’t change, but one’s moral views aren’t really one of them. (Consider, e.g. all the cultures where killing someone for besmirching your honor is considered a moral good.)
What they don’t see, in my opinion, is that it is their very dread at such a possibility that is really what is keeping them from doing those things.
Why do they have that dread?
A common trope is “my momma raised me better than that.” What if momma has low expectations? That banks on all of morality being inborn instead of inculcated, which strikes me as a terrible bet.
Also, it’s easy to see cases where we cringe at things indirectly. I might not cringe at the thought of cheating on my spouse, but cringe at the thought of social rebuke. If society changes its incentives, then I will change my behavior. In this hypothetical, I need you to keep me honest; if you stop caring about the state of my marriage, then so would I.
(Indeed, that’s the primary reason I go to LW meetups and such; in order to get a social group who gives me the incentives I want to have.)
It is the fear of nihilism, the fear that without their moral compass they might suddenly want to do evil, deplorable things because they’d be A-okay.
Fear of nihilism? Couldn’t it just be guilt, at how you could act like that one really wonderful person (everyone knows one), but just don’t feel inclined?
There’s also the fear that if there’s no objective morality, if someone starts doing evil things, you couldn’t make them stop by argument.
Without a shared morality, and its twin, hypocrisy, organizing peer pressure on wrongdoers is difficult.
So in a conversation between a person A who believes in objective morality and a person B who does not, a possible motive for A is to convince onlookers by any means possible that objective morality exists. Convincing B is not particularly important, since effective peer pressure merely requires having enough people on board, not any particular individual. In those conversations, I always had the role of B, and I assumed, perhaps mistakenly, that A’s primary goal was to persuade me since A was talking to me. Thank you for the insight.
“Any means possible” is a euphemism for “really big stick”!
Hm. It seems like there’s more to say about that.
For example, the peer pressure to participate in picking on low-status figures in a high-school class certainly appears to be strong, and not difficult to organize—indeed, it occurs spontaneously.
I suppose I’m willing to accept that those who refuse to participate aren’t “wrongdoers”, but I’m not sure why that should matter; if there’s a distinction between wrongdoers and other norm-violators you are calling out here, it would benefit from being called out more explicitly.
Conversely, I’m also willing to accept that picking on the low-status figures is the shared morality in this case, but in that case I think the whole conversation becomes less connotationally misleading if we talk about shared behavioral norms and leave the term “morality” (let alone “objective morality”) out of it.
I would say that “becoming strong and oppressing the weak” is the default goal. You don’t need any kind of morality here; it’s just the biology of a social species. Being strong has natural rewards.
Morality is what allows you to have alternative goals. Morality means that “X is important too”, sometimes even more important than being strong (though usually it is good to both be strong and do X). Morality gives you social rewards for doing X.
Being strong is favored by genes, doing X is favored by (X-promoting) memes. In the absence of memes (more precisely in absence of strong memes saying what is right and wrong), humans fall back on their natural social behavior, the pecking order. In the presence of such memes, humans try to do X; and also at the same time secretly try to be strong, but they cannot use too obvious means for that.
Technically, we could call the pecking order a “null morality”; like the “null hypothesis” in statistics.
That’s forgetting that morality doesn’t come from nowhere; it comes from genes too. Because life is full of iterated prisoner’s dilemmas, because gene survival requires the survival of your close relatives, and because of the way the brain is shaped (like the fact that empathy very likely comes, at least in part, from the way we reuse our own brain circuits to predict the behavior of others).
Moral theories are “artificial constructs”, as are all theories. They are generalizations, they are abstractions, they can conflict with the “genetic morality”, and yes, memes play a huge role in morality. But the core of morality comes from our genes—care for our family, “tit-for-tat with initial cooperation” as the winning strategy for IPD, empathy, …
Even if ultimately everything comes from the genes, we have to learn some things, while other things come rather automatically.
We educate children to behave nicely to others—they don’t get this ability automatically just because of their genes. On the other hand, children are able to create “Lord of the Flies”-like systems at school without being taught so. Both behaviors are based on evolution, both promote our genes in certain situations, but still one is the default option, and the other must be taught (is transferred by memes).
And by the way, the Prisoner’s Dilemma is not a perfect model of reality, and the differences are very relevant for this topic. The Prisoner’s Dilemma and the Iterated Prisoner’s Dilemma are modelled as a series of 1:1 encounters, where information stays between the two interacting players, each player tries to maximize their own utility, and each encounter is scored independently. In real life, people observe what others are doing even in encounters they are not part of; people have families and are willing to sacrifice some of their utility to increase their family’s utility; and the results of one encounter may influence your survival or death, your health, your prestige, etc., which influence the rules of the following encounter. This results in new strategies, such as “signal membership in a powerful group G, play tit-for-tat with initial cooperation against members of G, and defect against everyone else”, which will work if the group G has a majority. Now the problem is how people will agree on the right group G. In small societies, family can be such a group; in larger societies, memetic similarity can play the same role—if you consider that humans are not automatically strategic, why not make a meme M which teaches them this strategy and at the same time defines group G as “people who share the meme M”? Here come morality, religion, football team fans, et cetera.
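The “group G” strategy described above can be checked with a toy simulation. Everything in it is an illustrative assumption of mine (conventional PD payoffs, ten-round matches, round-robin pairing), not something taken from the comment:

```python
# Toy check of the "group G" strategy: G-members play tit-for-tat with
# each other and always defect against outsiders; outsiders play plain
# tit-for-tat with everyone. Payoffs are the conventional PD values.
T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker

def payoff(mine, theirs):
    if mine == "C":
        return R if theirs == "C" else S
    return T if theirs == "C" else P

def play_match(strat_a, strat_b, rounds=10):
    """Iterated PD; each strategy sees only the opponent's previous move."""
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        move_a, move_b = strat_a(last_b), strat_b(last_a)
        score_a += payoff(move_a, move_b)
        score_b += payoff(move_b, move_a)
        last_a, last_b = move_a, move_b
    return score_a, score_b

def tit_for_tat(their_last):
    return "C" if their_last in (None, "C") else "D"

def always_defect(their_last):
    return "D"

def simulate(n_g, n_out):
    """Average per-capita score for G-members ("G") and outsiders ("O")."""
    agents = ["G"] * n_g + ["O"] * n_out
    scores = [0] * len(agents)
    for i in range(len(agents)):
        for j in range(i + 1, len(agents)):
            # A G-member cooperates only with fellow G-members.
            strat_i = always_defect if (agents[i] == "G" and agents[j] != "G") else tit_for_tat
            strat_j = always_defect if (agents[j] == "G" and agents[i] != "G") else tit_for_tat
            s_i, s_j = play_match(strat_i, strat_j)
            scores[i] += s_i
            scores[j] += s_j
    avg = lambda tag: sum(s for a, s in zip(agents, scores) if a == tag) / agents.count(tag)
    return avg("G"), avg("O")

g_major = simulate(7, 3)  # G-members outnumber outsiders
g_minor = simulate(3, 7)  # outsiders outnumber G-members
```

With these assumed numbers, G-members out-earn the outsiders only when G holds the majority, which is exactly the condition the comment predicts.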
OK, cool; thanks for clarifying.
I would certainly agree that without an agreed-upon set of behavioral norms, coordinating peer pressure is difficult. I would also agree that there always tends to be some set of behavioral norms, often several conflicting sets, some of which we may not want to label “morality”.
It is not clear to me that the distinction you want to draw between “natural” and “alternative” norms is quite as clearcut as you make it sound. Nor is it clear to me that that distinction maps quite as readily to genetic vs. cultural factors as you imply here.
But I would certainly agree that some norms are more easily arrived at (that is, require less extensive training to impart) than others, and that in the absence of strong enforcement of the harder-to-impart norms (what you’re describing as “alternative goals”/“morality” propagated by memes) the easier-to-impart ones (what you describe as “natural” and genetically constrained) tend to influence behavior more.
I guess my comment seems too dichotomic; I did not intend it that way. Basically I wanted to say that if you have e.g. children without a proper upbringing (or in an environment that allows them to act against their upbringing), their behavior easily collapses to something most dramatically described in the book “Lord of the Flies”, which is rather similar to what social animals do: establishing a group hierarchy by using intra-group violence and threats. I call it “natural” because this is what happens unless people use some strategy to prevent it.
But of course both building the pecking order and the desire to avoid the negative consequences for people at the bottom are natural, i.e. driven by self-interest of our genes; it’s just that the former is easier to do, while the latter requires some thinking, strategy, coordination, infrastructure (laws, police, morality, religion, etc.) to be done successfully. It feels like worth doing, but it can be done in a few different ways, and we often disagree about the details.
It’s like in the Prisoner’s Dilemma: in the short term (a single turn), the choice to defect is always better than to cooperate; and if you imagine an agent without a memory, or unable to distinguish between individual players, then in a world consisting of such agents, always defecting would be the winning strategy. Only the possibility to remember and iterate allows the strategy of punishment, and now “tit-for-tat with initial cooperation” becomes a successful implementation of the more general principle “cooperate with those who cooperate with you, and punish those who defect”.
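The point about memory can be made concrete in a few lines of arithmetic (the payoff numbers are the conventional illustrative Prisoner’s Dilemma values, an assumption of mine):

```python
# Standard PD payoffs: (my move, their move) -> my score.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# Without memory, defection dominates: whatever the other side does,
# "D" scores strictly more than "C" in a single turn.
for theirs in ("C", "D"):
    assert PAYOFF[("D", theirs)] > PAYOFF[("C", theirs)]

# With memory and iteration, punishment becomes possible: two tit-for-tat
# players settle into mutual cooperation (3 per round), while memoryless
# always-defectors settle into mutual defection (1 per round).
rounds = 10
tft_pair_score = rounds * PAYOFF[("C", "C")]       # each player earns 30
defector_pair_score = rounds * PAYOFF[("D", "D")]  # each player earns 10
assert tft_pair_score > defector_pair_score
```

So the single-turn logic and the iterated logic point in opposite directions, which is the whole reason memory matters.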
But in real life, sometimes those who can punish are different from those who have been harmed. For example, if someone steals from you, you will try to punish them—but for a society without theft, it is necessary that people punish even those who stole from someone else. (Otherwise the thieves would just have to carefully select their targets among the weaker people.) Here we have a problem, because engaging in punishment has some costs (if you see someone stealing and try to stop them, the thief may hurt you) and no direct benefit for the punisher. This can be fixed by a system where people are rewarded for punishing those who have harmed someone else. For such a system to work, it is necessary to have an agreement about what is harm, what is the proper punishment, and what is the reward (social esteem for the hero, salary for the policeman, etc.). This is difficult to organize.
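The punisher-rewarding system described here can be sketched as a toy cost-benefit model; every number in it (loot, fine, punishment cost, reward) is an illustrative assumption:

```python
LOOT = 10        # what a successful theft is worth to the thief
FINE = 6         # harm each active punisher inflicts on the thief
PUNISH_COST = 2  # what punishing costs a bystander

def theft_pays(bystanders, reward_for_punishing):
    """Theft pays only if the loot exceeds the total fines; bystanders
    bother to punish only if the social reward covers their own cost."""
    punishers = bystanders if reward_for_punishing >= PUNISH_COST else 0
    return LOOT > FINE * punishers

# Victims alone cannot deter theft against the weak: with no reward, no
# bystander joins in, and stealing remains profitable.
assert theft_pays(bystanders=5, reward_for_punishing=0)

# Once society rewards punishers (esteem for the hero, salary for the
# policeman), third-party punishment is supplied and theft stops paying.
assert not theft_pays(bystanders=5, reward_for_punishing=2)
```

The hard part the comment points at is off-screen here: getting everyone to agree on the values of those constants in the first place.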
Yes, I continue to agree that without an agreed-upon set of behavioral norms, coordinating peer pressure is difficult, and that some norms are easier to coordinate (more natural, if you like) than others.
And yes, the dichotomy is part of what I’m skeptical about. Even in a “pecking order” environment, for example, I suspect a norm saying that low-status tribe members don’t get to steal from high-status members is relatively easy to coordinate. That’s not the same as my culture’s notion of theft, but neither is it the same as a complete absence of a notion of theft. I suspect it’s much more of a continuum, and much more variable, than you make it sound.
I agree there is a continuum of possibilities; that’s how these things developed. But it does not mean that all parts of the continuum exist in reality with the same frequency, or even that the frequency is a monotonic function.
I guess I have trouble explaining what I mean, so I will use a metaphor—the computer. You can have no computer. You can use fingers. You can use pebbles. You can use an abacus. You can have a mechanical calculator, a vacuum-tube calculator, or some kind of integrated-circuit computer. It’s not literally a continuum, but there are many steps. But now make a histogram of how often people around you use this or that… and you will probably find that most people use some integrated-circuit computing machine, or nothing. There is very little in between. So in theory there is a continuum, but it can be approximated as just two choices: an integrated-circuit computer, or no computing machine. There is very little incentive to use an abacus, or even to invent one. You don’t upgrade from “no calculator” to “integrated-circuit calculator” by discovering the abacus etc.; you just go to a shop and buy one. And even the people who design and build integrated-circuit calculators don’t start from the abacus. This part in the middle does not exist anymore, because compared with both extremes, it is not cost-effective.
It’s not the same with morality, but my point is that there is so much morality around (it feels kind of funny when I write it), that very few people are inventing the morality from scratch. You copy it, or you ignore it; or you copy some parts, or you copy it and forget some parts. Inventing it all in one lifetime is almost impossible. So to me it seems safe to say that the higher levels must be carried by memes. It’s like saying that you can find pebbles or invent abacus, but you have to buy an integrated-circuit computer, unless you are an exceptional person.
I agree with you that very few behavioral norms are invented from scratch, and that the more complex ones pretty much never are, and that they must therefore be propagated culturally.
That said, your analogy is actually a good one, in that I have the same objection to the analogy that I had to the original.
Unlike you, I suspect that there’s quite a lot of in between: some people use integrated-circuit computers, some people (often the same people) use pen and paper, some people use a method of successive approximation, some people count on their fingers. It depends on the people and it depends on the kind of calculation they are doing and it depends on the context in which they’re doing it; I might open an excel spreadsheet to calculate 15% of a number if I’m sitting in front of my computer, I might calculate it as “a tenth plus half of a rounded-up tenth” if I’m working out a tip at a restaurant, I might solve it with pencil and paper if it’s the tenth in a series of arithmetic problems I’m solving on a neuropsych examination.
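For what it’s worth, the restaurant heuristic mentioned above (“a tenth plus half of a rounded-up tenth”) is easy to write down; the bill amounts below are my own made-up examples:

```python
import math

def quick_tip(bill):
    # "A tenth plus half of a rounded-up tenth": a mental shortcut that
    # slightly overshoots 15% whenever the tenth isn't a whole number.
    tenth = bill / 10
    return tenth + math.ceil(tenth) / 2

# On a $43 bill the heuristic gives $6.80, versus the exact 15% of $6.45.
assert abs(quick_tip(43) - 6.80) < 1e-9
# On a round bill it coincides with exact 15%.
assert abs(quick_tip(100) - 15.0) < 1e-9
```

Which method people reach for really does depend on context, as the comment says: spreadsheet at a desk, shortcut at a restaurant, pencil on an exam.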
When you say “most people use some integrated-circuit computing machine, or nothing” you end up excluding a wide range of actual human behavior in the real world.
Analogously, I think that when you talk about the vast excluded middle between “morality” and “pecking order” you exclude a similarly wide range of actual human behavior in the real world.
When that range is “approximated as just having two choices” something important is lost. If you have some specific analytical goal in mind, perhaps the approximation is good enough for that goal… I’m afraid I’ve lost track of what your goal might be, here. But in general, I don’t accept it as a good-enough approximation; the excluded middle seems worthy of consideration.
Do people change their minds much about anything?
I suspect what people are afraid of is being caught out holding an unarguable position.
There are occasional religious conversions, and if you follow the thread up to the link below I apparently got “syllogism” to give up on preference utilitarianism, whatever that is:
http://lesswrong.com/lw/435/what_is_eliezer_yudkowskys_metaethical_theory/3yj3
Both hypotheses make sense to me: perhaps they’re afraid that it won’t work to persuade people if they don’t defend it, and perhaps it’s simpler and they know they have no position to argue from but they still don’t want to lose.
For better or worse, I think Eugeine Nier was arguing a point about morality identical to one of yours (Peterdjones) and he started dodging my questions at:
http://lesswrong.com/lw/5eh/what_is_metaethics/42ul
If you are aware that people are dodging because they have an unarguable position, perhaps you don’t want to participate in that. Do you want to help him out and answer the question I asked there?
Done.
Yes, this is a common occurrence. St. Augustine, for instance, is a well-known example.
I’m inclined to believe you, but his biography on Wikipedia describes a long and varied life, and in a few minutes of examination I did not find any clear examples of arguments about morality persuading anybody to stop doing evil. I’m sure it’s in there somewhere. Which event(s) in his life are you talking about?
Here is where it talks about it in the Wiki article: http://en.wikipedia.org/wiki/Augustine_of_Hippo#Christian_conversion
A full account is given in The Confessions of St. Augustine.
Sorry, if that’s all you have, it’s not what I’m looking for. What evil did he stop doing because he converted to Christianity? The worst things I see in the biography there were teaching rhetoric and waiting patiently for his 11-year-old fiancee to turn 13 so he could marry her. Those activities were both consistent with cultural norms of the time. Neither of them seems to have the right flavor to make me want to try arguing morality with someone who is pointing a gun at my head.
He also gave up sleeping with various mistresses, however, given current culture, I doubt you think that is evil.
Arguing morality with someone who is holding a gun to your head doesn’t sound like a very smart thing to do. The most I have done while being held up was provide the assailant a set of scriptures with a number to call if he wanted to discuss morality while not holding a gun. If they are holding a gun or otherwise threatening imminent violence, that is usually not the time to be discussing morality, as they are most likely not acting rationally.
Discussing morality with someone that is suicidal can sometimes help. Still, one should call for professional assistance if it is available.
One problem with arguing rationality with someone who has a gun to your head is time: a rational argument for a substantial change tends to take a fair amount of time. You might be able to convince someone with quick “sound bites”, but I’m not sure I’d really call that a rational argument.
I think this analysis oversimplifies the fear of nihilism. For example, we punish those who steal from old ladies, because the stealing is wrong. But we don’t punish everyone whose behavior makes us cringe. At least I hope not.
People who fear moral nihilism are not worried about losing control over their own behavior, they are worried about losing control over other people’s behavior.
I would say we punish those who steal from old ladies because we would prefer the old ladies not be stolen from. It is that preference, the subjective value we all (except the thief of course) place on a society where the meek are not abused by criminals, that causes us to call that behavior “wrong”.
The evolutionary origins of that preference seem pretty obvious. In any group of social animals there will be one or two top physical competitors, and the remainder would be subject to their will. Of those many weaker individuals, the ones who survived to procreate were those who banded together to make bully-free tribes.
Ok, so I punish so as to achieve my preference that old ladies not be stolen from. Yet I do not punish to achieve my preferences in other matters. For that matter, I do not punish to transfer funds from healthy young males to impoverished old ladies who have not been stolen from, though the consequentialist results seem so parallel. I would prefer that old ladies not be impoverished, regardless of whether they became impoverished by theft.
So, if you can explain why I feel the urge to punish in one case but not the others, you are on your way to “solving metaethics”.
I would think that this is usually referred to as “taxation”, and is actually practiced on a fairly regular basis?
The extreme point where we try to make sure everyone receives according to their needs, and provides according to their ability, is “communism”, and seems to be widely considered as a failure state.
“Socialism” seems to have emerged as a compromise between the goal of taxation, and the desire to avoid the communist failure state.
I feel like I’m obviously trivializing something complex here, but I’m genuinely not sure what I’m missing.
One difference is that “Perplexed” is talking about anger as an individual emotional response, and you’re finding it analogous to taxation which is something that happens in society, rather than individually, and generally doesn’t have strong emotions with it.
I’m always inclined to classify things like this as psychology. “Perplexed” has an emotional response, that’s fine, we can ask a psychologist to explain it, but I don’t see a useful role for metaethics in that, perhaps because I don’t really know a referent for the word “metaethics”.
Uh, no I’m not. I haven’t even mentioned anger. I’m talking about punishment. Which, as a moral realist, I’m claiming is a moral issue. And, given my particular flavor of moral realism, that means that there is a closely related practical issue (involving deterrence, etc.).
I am not interested in explaining anger as an instinctive signal that it is time to punish—though I’m sure evolutionary psychologists can do so. I’m far more interested in explaining punishment as a moral and practical response to some particular class of actions—actions that I call “immoral”.
As to what handoflixue is missing, I would say that he probably wasn’t paying attention in school when communism and socialism were defined, or else he missed the fact that exhibitions of political “attitude” are not appreciated here. Compared to that, his suggestion that redistributive taxation is something like the kind of punishment I claimed doesn’t exist, …, well that suggestion seems rather innocent.
Yes, you’re right (in the sense that you’re making a true statement about what you said before), and I’m wrong. I misunderstood your position.
After acknowledging that I misunderstood you, I’d like to make use of my now probably-correct understanding of what you meant, but unfortunately I have nothing useful to say. I’d need a definition of “moral reality” to start with, assuming that’s what you think you are perceiving as a moral realist.
There are people who feel there is a moral imperative to do just that. Likewise, there is wide disagreement over what deserves punishment. An orthodox Jew, a Muslim, a Catholic, a Lutheran, a Communist, and a Vulcan walk into a bar… I’m sure we can all see the potential for punchlines.
You may punish action X which violates your preferences because you want to see people punished for action X. You could simultaneously choose not to punish action Y which violates your preferences, because for whatever reason you would prefer people not be punished for it. Others could disagree, and people often do disagree on what deserves punishment and what doesn’t.
Neither side in such a debate is objectively incorrect. Each would indeed prefer their position of punishment or non-punishment.
And a moral realist, such as myself, thinks you are dead wrong about that. I have offered an objective criterion for choosing sides in the debate, as well as a justification for that criterion that is ultimately based on satisfying people’s preferences to the greatest extent possible. Yet you are unimpressed and go back to reciting your original opinions.
Oh well. I tried. HAND.
I couldn’t find where you did this in the parents. Could you link or repeat?
Whoops. You are right. I made this proposal here and here and in the discussions that followed.
Thanks. Interesting thread. It’s a nice hope. It makes me feel good to imagine that it works, and our alien overlords will therefore be fair :)
Not much for me hangs in the balance with this question. I already know that if I feel like I’m a good person, it feels good. But of course I’m interested in how this self-satisfaction lines up with how people are generally judged. I guess it would become crucial if I became more aggressive. Most people are really cautious (at least as far as their image goes).
I’ll bet you do punish people if those matters make you (and enough others) as angry as old ladies being stolen from does.
Anyone who votes for welfare does this. (Not saying this is right or wrong, just a fact.)
If something makes you angry, and it is socially acceptable to punish it, you may well decide to punish it. I don’t see anything to solve.
Hmmm. Perhaps you don’t see the problem because you think like a scientist. Come up with a causal explanation of why people sometimes punish, and you are done.
I on the other hand, am thinking like an engineer. Simply understanding the universe is pointless. I want to use my understanding to change the universe so that it is more to my taste. Therefore, I want to know when I should punish.
We probably both agree that evolution “invented” anger precisely because organisms that punish at the right times are more successful than organisms that punish at the wrong times or perhaps never punish at all. So anger causes punishment. A scientist is satisfied. But there is more to it than that.
Why did natural selection ‘choose’ to make me angry at some things and not make me angry at other things? Can I decide for myself whether to punish, ignoring the cue of my anger? Will I be more successful if I use my reason to make those decisions rather than using my emotions? And does any of this have anything to do with this mysterious thing ‘morality’ that people keep talking about?
I can understand people not being curious about such questions. But I have trouble understanding why people at a rationality blog site are not only incurious, but so often inclined to brag about their lack of interest!
The thought of mentioning other reasons why to punish (such as to make people behave more to your liking) did cross my mind, but I thought it was obvious enough. In fact, there are still other reasons to punish. Someone might reply to your post, “You are thinking like an engineer. I am thinking like a social animal. I want to know when I should punish: I want to use my understanding of social dynamics to make people respect me more. I want to know what it signals about me when I punish someone.”
As I said here, there are a lot of different reasons to use moral language (most of them sort of dark-arts-ish, which is why I guess that post was downvoted), and likewise there are a lot of different reasons to punish.
Do the evolutionary origins of rationality mean that we can eliminate truth and rationality in favour of belief and opinion? Can the arguments for moral relativism not be redeployed as arguments for alethic relativism?
The same idea holds for other people’s behavior:
I cringe at the thought of stealing from an old lady, but I get flippin’ angry at the thought of someone else stealing from an old lady. That is why we punish those who steal from old ladies, but don’t (usually) punish any random person who makes us cringe.
Is the threat from philosophy really a concern? We control others through punishment. A nihilist is still going to be powerfully incentivized by the threat of punishment.
We (or rather society) controls its members through a public theory of morality, which can also be thought of as a moral Schelling point. Punishments are used to deal with the people who disregard the public theory, and in so doing help to maintain belief in it. However, without a public theory that most people believe, or at least don’t openly disbelieve, the system for enforcing punishment quickly breaks down.
Or Ben Franklin, contemplating his vegetarianism:
Now there’s a Rationality Quote. ;-)
Indeed.
In the sociological “let’s all decide what norms to enforce” sense, sure, a lack of “morality” won’t kill anyone. But in the more speculative-fictional “let’s all decide how to self-modify our utility functions” sense, throwing away our actual morality—the set of things we do or do not cringe about doing—in ourselves, or in our descendants, is a very real possibility, and (to some people) a horrible idea to be fought with all one’s might.
What I find unexpected about this is that libertarians (the free-will kind) tend to think in the second sense by default, because they assume that their free will gives them absolute control over their utility function, so if they manage to argue away their morality, then, by gum, they’ll stop cringing! It seems you first have to guide people into realizing that they can’t just consciously change what they instinctively cringe about, before they’ll accept any argument about what they should be consciously scorning.
But you can consciously change what you “instinctively” cringe about. Otherwise, people couldn’t, say, get over their fear of public speaking.
Sure, there might be some things you can’t change, but one’s moral views aren’t really among them. (Consider, e.g., all the cultures where killing someone for besmirching your honor is considered a moral good.)
Why do they have that dread?
A common trope is “my momma raised me better than that.” What if momma has low expectations? That banks on all of morality being inborn instead of inculcated, which strikes me as a terrible bet.
Also, it’s easy to see cases where we cringe at things indirectly. I might not cringe at the thought of cheating on my spouse, but cringe at the thought of social rebuke. If society changes its incentives, then I will change my behavior. In this hypothetical, I need you to keep me honest; if you stop caring about the state of my marriage, then so would I.
(Indeed, that’s the primary reason I go to LW meetups and such: to get a social group that gives me the incentives I want to have.)
Fear of nihilism? Couldn’t it just be guilt, at how you could act like that one really wonderful person (everyone knows one), but just don’t feel inclined?