The concept of “deserve” can be harmful. We like to think about whether we “deserve” what we get, or whether someone else deserves what he/she has. But in reality there is no such mechanism. I prefer to invert “deserve” into the future: deserve your luck by exploiting it.
Of course, “deserve” can be a useful social mechanism to increase desired actions. But only within that context.
Also “need”. There’s always another option, and pretending sufficiently bad options don’t exist can interfere with expected value estimations.
And “should” in the moralizing sense. Don’t let yourself say “I should do X”. Either do it or don’t. Yeah, you’re conflicted. If you don’t know how to resolve it on the spot, at least be honest and say “I don’t know whether I want X or not X”. As applied to others, don’t say “he should do X!”. Apparently he’s not doing X, and if you’re specific about why, it is less frustrating and effective solutions are more visible. “He does X because it’s clearly in his best interests, even despite my shaming. Oh...”—or again, if you can’t figure it out, be honest about it: “I have no idea why he does X.”
Don’t let yourself say “I should do X”. Either do it or don’t.
That would work nicely if I were so devoid of dynamic inconsistency that “I don’t feel like getting out of bed” would reliably entail “I won’t regret it if I stay in bed”; but as it stands, I sometimes have to tell myself “I should get out of bed” in order to do stuff I don’t feel like doing but I know I would regret not doing.

This John Holt quote is about exactly this.
if you’re specific about why, it is less frustrating
This is a fact about you, not about “should”. If “should” is part of the world, you shouldn’t remove it from your map just because you find other people frustrating.
and effective solutions are more visible.
One common, often effective strategy is to tell people they should do the thing.
if you can’t figure it out, be honest about it: “I have no idea why he does X.”
The correct response to meeting a child murderer is “No, stop! You should not do that!”, not “Please explain why you are killing that child.” (also physical force)
This is a fact about you, not about “should”. If “should” is part of the world, you shouldn’t remove it from your map just because you find other people frustrating.
It’s not about having conveniently blank maps. It’s about having more precise maps.
I realize that you won’t be able to see this as obviously true, but I want you to at least understand what my claim is: after fleshing out the map with specific details, your emotional approach to the problem changes and you become aware of new possible actions without removing any old actions from your list of options—and without changing your preferences. Additionally, the majority of the time this happens, “shoulding” is no longer the best choice available.
One common, often effective strategy is to tell people they should do the thing.
Sometimes, sure. I still use the word like that sometimes, but I try to stay aware that it’s shorthand for “you’d get more of what you want if you do”/“I and others will shame you if you don’t”. It’s just that so often that’s not enough.
The correct response to meeting a child murderer is “No, stop! You should not do that!”, not “Please explain why you are killing that child.” (also physical force)
And this is a good example. “Correct” responses oughtta get good results; what result do you anticipate? Surely not “Oh, sorry, didn’t realize… I’ll stop now”. It sure feels appropriate to ‘should’ here, but that’s a quirk of your psychology that focuses you on one action to the exclusion of others.
Personally, I wouldn’t “should” a murderer any more than I’d “should” a paperclip maximizer. I’d use force, threats of force and maybe even calculated persuasion. Funny enough, were I to attempt to therapy a child murderer (and bold claim here—I think I could do it), I’d start with “so why do ya kill kids?”
Mostly, the result I anticipate from “should”ing a norm-violator is that other members of my tribe in the vicinity will be marginally more likely to back me up and enforce the tribal norms I’ve invoked by “should”ing. That is, it’s a political act that exerts social pressure. (Among the tribal members who might be affected by this is the norm-violator themselves.)
Alternative formulas like “you’ll get more of what you want if you don’t do that!” or “I prefer you not do that!” or “I and others will shame you if you do that!” don’t seem to work as well for this purpose.
But of course you’re correct that some norm-violators don’t respond to that at all, and that some norm-violations (e.g. murder) are sufficiently problematic that we prefer the violator be physically prevented from continuing the violation.
“Should” is not part of any logically possible territory, in the moral sense at least. Objective morality is meaningless, and subjective morality reduces to preferences. It’s a distinctly human invention, and its meaning shifts as the user desires. Moral obligations are great for social interactions, but they don’t reflect anything deeper than an extension of tribal politics. Saying “you should x” (in the moral sense of the word) is just equivalent to saying “I would prefer you to x”, but with bonus social pressure.
Just because it is sometimes effective to try and impose a moral obligation does not mean that it is always, or even usually, the case that doing so is the most effective method available. Thinking about the actual cause of the behavior, and responding to that, will be far, far more effective.
Next time you meet a child murderer, you just go and keep on telling him he shouldn’t do that. I, on the other hand, will actually do things that might prevent him from killing children. This includes physical restraint, murder, and, perhaps most importantly, asking why he kills children. If he responds “I have to sacrifice them to the magical alien unicorns or they’ll kill my family” then I can explain to him that the magical alien unicorns don’t exist and solve the problem. Or I can threaten his family myself, which might for many reasons be more reliable than physical solutions. If he has empathy I can talk about how the parents must feel, or the kids themselves. If he has self-preservation instincts then I can point out the risks of getting caught. In the end, maybe he just values dead children in the same way I value children continuing to live, and my only choice is to fight him. But probably that’s not the case, and if I don’t ask/observe to figure out what his motivations are, I’ll never know how to stop him when physical force is not an option.
Saying “you should x” (in the moral sense of the word) is just equivalent to saying “I would prefer you to x”, but with bonus social pressure.
I really think this is a bad summary of how moral injunctions act. People often feel a conflict, for example, between “I should X” and “I would prefer to not-X”. If a parent has to choose between saving their own child and a thousand other children, they may very well prefer to save their own child, but recognize that morality dictated they should have saved the thousand other children.
My own guess about the connection between morality and preferences is that morality is an unconscious estimation of our preferences about a situation, while trying to remove the bias of our personal stakes in it. (E.g. the parent recognizes that if their own child wasn’t involved, if they were just hearing about the situation without personal stakes in it, they would prefer that a thousand children be saved rather than only one.)
If my guess is correct, it would also explain why there’s disagreement about whether morality is objective or subjective (morality is a personal preference, but it’s also an attempt to remove personal biases—it’s by itself an attempt to move from subjective preferences to objective preferences).

That’s a good theory.
This is because people are bad at making decisions, and have not gotten rid of the harmful concept of “should”. The original comment on this topic was claiming that “should” is a bad concept; instead of thinking “I should x” or “I shouldn’t x” on top of considering “I want to/don’t want to x”, just look at want/do not want. “I should x” doesn’t help you resolve “do I want to x”, and the second question is the only one that counts.
I think that your idea about morality is simply expressing a part of a framework of many moral systems. That is not a complete view of what morality means to people; it’s simply a part of many instantiations of morality. I agree that such thinking is the cause of many moral conflicts of the nature “I should x but I want to y”, stemming from the idea (perhaps subconscious) that they would tell someone else to x, instead of y, and people prefer not to defect in those situations. Selfishness is seen as a vice, perhaps for evolutionary reasons (see all the data on viable cooperation in the prisoner’s dilemma, etc.) and so people feel the pressure to not cheat the system, even though they want to. This is not behavior that a rational agent should generally want! If you are able to get rid of your concept of “should”, you will be free from that type of trap unless it is in your best interests to remain there.
Our moral intuitions do not exist for good reasons. “Fairness” and its ilk are all primarily political tools; moral outrage is a particularly potent tool when directed at your opponent. Just because we have an intuition does not make that intuition meaningful. Go for a week while forcing yourself to taboo “morality”, “should”, and everything like that. When you make a decision, make a concerted effort to ignore the part of your brain saying “you should x because it’s right”, and only listen to your preferences (note: you can have preferences that favor other people!). You should find that your decisions become easier and that you prefer those decisions to any you might have otherwise made. It also helps you to understand that you’re allowed to like yourself more than you like other people.
Objective morality is meaningless, and subjective morality reduces to preferences.
These aren’t the only two possibilities. Lots of important aspects of the world are socially constructed. There’s no objective truth about the owner of a given plot of land, but it’s not purely subjective either—and if you don’t believe me, try explaining it to the judge if you are arrested for trespassing.
Social norms about morality are constructed socially, and are not simply the preferences or feelings of any particular individual. It’s perfectly coherent for somebody to say “society believes X is immoral but I don’t personally think it’s wrong”. I think it’s even coherent for somebody to say “X is immoral but I intend to do it anyway.”
You’re sneaking in connotations. “Morality” has a much stronger connotation than “things that other people think are bad for me to do.” You can’t simply define the word to mean something convenient, because the connotations won’t go away. Morality is definitely not understood generally to be a social construct. Is that social construct the actual thing many people are in reality imagining when they talk about morality? Quite possibly. But those same people would tend to disagree with you if you made that claim to them; they would say that morality is just doing the right thing, and if society said something different then morality wouldn’t change.
Also, the land ownership analogy has no merit. Ownership exists as an explicit social construct, and I can point you to all sorts of evidence in the territory that shows who owns what. Social constructs about morality exist, but morality is not understood to be defined by those constructs. If I say “x is immoral” then I haven’t actually told you anything about x. In normal usage I’ve told you that I think people in general shouldn’t do x, but you don’t know why I think that unless you know my value system; you shouldn’t draw any conclusions about whether you think people should or shouldn’t x, other than due to the threat of my retaliation.
“Morality” in general is ill-defined, and often intuitions about it are incoherent. We make much, much better decisions by throwing away the entire concept. Saying “x is morally wrong” or “x is morally right” doesn’t have any additional effect on our actions, once we’ve run the best preference algorithms we have over them. Every single bit of information contained in “morally right/wrong” is also contained in our other decision algorithms, often in a more accurate form. It’s not even a useful shorthand; getting a concrete right/wrong value, or even a value along the scale, is not a well-defined operation, and thus the output does not have a consistent effect on our actions.
My original point was just that “subjective versus objective” is a false dichotomy in this context. I don’t want to have a big long discussion about meta-ethics, but, descriptively, many people do talk in a conventionalist way about morality or components of morality and thinking of it as a social construction is handy in navigating the world.
Turning now to the substance of whether moral or judgement words (“should”, “ought”, “honest”, etc.) are bad concepts—
At work, we routinely have conversations about “is it ethical/honest to do X”, or “what’s the most ethical way to deal with circumstance Y”. And we do not mean “what is our private preference about outcomes or rules”—we mean something imprecise but more like “what would our peers think of us if they knew” or “what do we think our peers ought to think of us if they knew”. We aren’t being very precise about how much is objective, subjective, and socially constructed, but I don’t see that we would gain from trying to speak with more precision than our thoughts actually have.
Yes, these terms are fuzzy and self-referential. Natural language often is. Yes, using ‘ethical’ instead of other terms smuggles in a lot of connotation. That’s the point! Vagueness with some emotional shading and implication is very useful linguistically and I think cognitively.
The original topic was “harmful” concepts, I believe, and I don’t think all vagueness is harmful. Often the imprecision is irrelevant to the actual communication or reasoning taking place.
The accusation of being bad concepts was not because they are vague, but because they lead to bad modes of thought (and because they are wrong concepts, in the manner of a wrong question). Being vague doesn’t protect you from being wrong; you can talk all day about “is it ethical to steal this cookie” but you are wasting your time. Either you’re actually referring to specific concepts that have names (will other people perceive this as ethically justified?) or you’re babbling nonsense. Just use basic consequentialist reasoning and skip the whole ethics part. You gain literally nothing from discussing “is this moral”, unless what you’re really asking is “what are the social consequences” or “will person x think this is immoral” or whatever. It’s a dangerous habit epistemically and serves no instrumental purpose.
“Should” is not part of any logically possible territory, in the moral sense at least. Objective morality is meaningless, and subjective morality reduces to preferences.

Subjectivity is part of the territory.

Things encoded in human brains are part of the territory; but this does not mean that anything we imagine is in the territory in any other sense. “Should” is not an operator that has any useful reference in the territory, even within human minds. It is confused, in the moral sense of “should” at least. Telling anyone “you shouldn’t do that” when what you really mean is “I want you to stop doing that” isn’t productive. If they want to do it then they don’t care what they “should” or “shouldn’t” do unless you can explain to them why they in fact do or don’t want to do that thing. In the sense that “should do x” means “on reflection would prefer to do x” it is useful. The farther you move from that, the less useful it becomes.
Telling anyone “you shouldn’t do that” when what you really mean is “I want you to stop doing that” isn’t productive.
But that’s not what they mean, or at least not all that they mean.
Look, I’m a fan of Stirner and a moral subjectivist, so you don’t have to explain the nonsense people have in their heads with regard to morality to me. I’m on board with Stirner, in considering the world populated with fools in a madhouse, who only seem to go about free because their asylum takes in so wide a space.
But there are different kinds of preferences, and moral preferences have different implications than our preferences for shoes and ice cream. It’s handy to have a label to separate those out, and “moral” is the accurate one, regardless of the other nonsense people have in their heads about morality.
I think that claiming that is just making the confusion worse. Sure, you could claim that our preferences about “moral” situations are different from our other preferences; but the very feeling that makes them seem different at all stems from the core confusion! Think very carefully about why you want to distinguish between these types of preferences. What do you gain, knowing something is a “moral” preference (excluding whatever membership defines the category)? Is there actually a cluster in thingspace around moral preferences, which is distinctly separate from the “preferences” cluster? Do moral preferences really have different implications than preferences about shoes and ice cream? The only thing I can imagine is that when you phrase an argument to humans in terms of morality, you get different responses than to preferences (“I want Greta’s house” vs “Greta is morally obligated to give me her house”). But I can imagine no other way in which the difference could manifest. I mean, a preference is a preference is a term in a utility function. Mathematically they’d better all work the same way or we’re gonna be in a heap of trouble.
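To make that last point concrete, here is a minimal sketch, assuming a simple additive utility model; the feature names and weights are invented for illustration, not taken from anyone’s actual values:

```python
# A toy additive utility function: "moral" preferences and mundane
# preferences are represented identically, as weighted terms in a sum.
# (Illustrative only; the features and weights are made up.)

def utility(outcome, terms):
    """Sum weighted preference terms over an outcome (a dict of features)."""
    return sum(weight * outcome.get(feature, 0.0) for feature, weight in terms)

terms = [
    ("ate_ice_cream", 1.0),      # a mundane preference
    ("children_saved", 1000.0),  # a "moral" preference: just a larger weight
]

print(utility({"ate_ice_cream": 1, "children_saved": 0}, terms))  # 1.0
print(utility({"ate_ice_cream": 0, "children_saved": 2}, terms))  # 2000.0
```

The math treats both terms the same way; any felt difference between them lives in the weights and in our psychology, not in the formalism.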
but the very feeling that makes them seem different at all stems from the core confusion!
I don’t think moral feelings are entirely derivative of conceptual thought. Like other mammals, we have pattern matching algorithms. Conceptual confusion isn’t what makes my ice cream preferences different from my moral preferences.
Is there a behavioral cluster about “moral”? Sure.
Do moral preferences really have different implications than preferences about shoes and ice cream?
How many people are hated for what ice cream they eat? For their preference in ice cream, even when they don’t eat it? For their tolerance of a preference in ice cream in others?
Not many that I see. So yeah, it’s really different.
I mean, a preference is a preference is a term in a utility function.
And matter is matter, whether alive or dead, whether your shoe or your mom.

I can’t remember where I heard the anecdote, but I remember some small boy discovering the power of “need” with “I need a cookie!”.

I think any correct use of “need” is either implicitly or explicitly a phrase of the form “I need X (in order to do Y)”.
“Deserve” is harmful because we would often rather destroy utility than allow an undeserved outcome distribution. For instance, most people would probably rather punish a criminal than reform him. I nominate “justice” as the more basic bad concept. It’s a good concept for sloppy thinkers who are incapable of keeping in mind all the harm done later by injustices now, a shortcut that lets them choose actions that probably increase utility in the long run. But it is a bad concept for people who can think more rigorously.
A lot of these “bad concepts” will probably be things that are useful given limited rationality.
“Are the gods not just?”

“Oh no, child. What would become of us if they were?”

― C.S. Lewis, Till We Have Faces
I’d say “justice” is a heuristic; better than nothing, but not the best possible option.
For instance, most people would probably rather punish a criminal than reform him.
This could be connected with their beliefs about the probability of successfully reforming the criminal. I guess the probability strongly depends on the type of crime and type of treatment, and is not even the same for all classes of criminals (e.g. sociopaths vs. people in a relatively rare situation that overwhelmed them). They may fear that with a good lawyer, “reform, don’t punish” is simply a “get out of jail free” card.
To improve this situation, it would help to make the statistics of reform successes widely known. But I would expect that in some situations, they are just not available. This is partially an availability heuristic on my part, and partially my model saying that many good intentions fail in real life.
Also, what about unique crimes? For example, an old person murders their only child, and they do not want to have any other child, ever. Most likely, they will never commit the same crime again. How specifically would you reform them? How would you measure the success of reforming them? If we are reasonably sure they will never do the same thing again, even without treatment, then… should we just shrug and let them go?
The important part of the punishment is the precommitment to punish. If a crime already happened, causing e.g. pain to the criminal does not undo the past. But if the crime is yet in the future, precommitting to cause pain to the criminal influences the criminal’s outcome matrix. Will precommitment to reforming have similar effects? (“Don’t shoot him, or… I will explain to you why shooting people is wrong, and then you will feel bad about it!”)
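As a toy illustration of the precommitment point, with made-up numbers (the gain, detection probability, and penalty are all invented): the punishment only changes behavior before the fact, by altering the prospective criminal’s expected payoff.

```python
# Expected value of committing a crime, from the criminal's point of view.
# A credible precommitment to punish lowers this EV before the crime happens;
# punishing afterwards cannot undo the past. (Numbers are made up.)

def ev_of_crime(gain, p_caught, punishment):
    return gain - p_caught * punishment

gain = 10.0      # hypothetical benefit of committing the crime
p_caught = 0.5   # hypothetical probability of being caught

print(ev_of_crime(gain, p_caught, punishment=0.0))   # 10.0: crime pays
print(ev_of_crime(gain, p_caught, punishment=40.0))  # -10.0: deterred
```

The open question in the comment is whether a precommitment to reform can shift that payoff the way a precommitment to punish does.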
I nominate “justice” as the more basic bad concept. It’s a good concept for sloppy thinkers who are incapable of keeping in mind all the harm done later by injustices now,
Actually, I think that’s some of what they are keeping in mind and find motivating.
If they were able to keep it in mind separately, they could include that in their calculations, instead of using justice as a kind of sufficient statistic to summarize it.

Would you also two-box on Newcomb’s problem?
You can still use precommitment, but tie it to consequences rather than to Justice. Take Edward Snowden. Say that the socially-optimal outcome is to learn about the most alarming covert government programs, but not about all covert programs. So you want some Edward Snowdens to reveal some operations, but you don’t want that to happen very often. The optimal behavior may be to precommit to injustice, punishing government employees who reveal secrets regardless of whether their actions were justified.
International espionage is probably one of the worst examples to attempt to generalize concepts like justice from. It’s probably better to start with simpler (and more common) examples like theft or murder and then use the concepts developed on the simpler examples to look at the more complicated one.
Upvoted, but I would note that it’s interesting to see a moral value listed in a (supposedly value-neutral) “bad concepts repository”. The idea that “deserve” in the sense in which you mention is a harmful and meaningless concept is a rather consequentialist notion, and seeing this so highly upvoted says something about the ethics that this community has adopted—and if I’m right in assuming that a lot of the upvoters probably thought this a purely factual confusion with no real ethical element, then it says a bit about the moral axioms that we tend to take for granted.
Again, not saying this as a criticism, just as something that I found interesting.
E.g. part of my morality used to say that I only deserved some pleasures if I had acted in the right ways or was good enough; and this had nothing to do with a consequentialist it-is-a-way-of-motivating-myself-to-act-right logic, it was simply an intrinsic value that I would to some extent have considered morally right to have even if possessing it was actively harmful. Somebody coming along and telling me that “in reality, your value is not grounded in any concrete mechanism” would have had me going “well, in that case your value of murder being bad is not grounded in any concrete mechanism either”. (A comment saying that “the concept of murder can be harmful, since in reality there is no mechanism for determining what’s murder” probably wouldn’t have been upvoted.)
We like to think about whether we “deserve” what we get, or whether someone else deserves what he/she has. But in reality there is no such mechanism.
So you’re saying we like thinking about a moral property, but we’re wrong to do so, because this property is not reliably instantiated? Desert theorists do not need to disagree—there’s no law of physics that means people necessarily get what they deserve. Rather, we are supposed to be the mechanism—we must regulate our own affairs so as to ensure that people get what they deserve.
Perhaps the bad concept here is actually “karma”, which I understand roughly to be the claim that there is a law of physics that means people necessarily get what they deserve.

I think around here we can call that the just-world fallacy.
To me, “deserve” flows from experiencing the predictable consequences of one’s actions.

If the cultural norm in my area is to wait in line at the bank, checkout, restaurant, etc., and I do so, I deserve to be served when I reach the front of it (barring any prior actions towards the owners like theft, or personal connections). Someone who comes in later does not deserve to be served until others in the queue have been.

Or, in a less relative example, if I see dark clouds and go out dressed for warm weather when I have rain clothes at hand, I deserve to feel uncomfortable.

I do not deserve to be assaulted by random strangers when I have not personally performed any actions that would initiate a conflict that violence would resolve, or done anything which tends to anger other people.
Of course, the certainty of getting what one deserves is not 1, and one must expect that the unexpected will happen in some context eventually.
On the flipside, egalitarian instincts (e.g. “justice and liberty for all”, “all men are created equal”) are often deemed desirable, even though many a time “deserve” stems from such concepts of how a society should supposedly be, of “what kind of society I want to live in”.
There is a tension between decrying “deserve” as harmful, while e.g. espousing the (in many cases) egalitarian instincts they stem from (“I should have as many tech toys as my neighbor”, “I’m trying to keep up with the Joneses”, etc.).
I think this is a different flavor of deserving. Stabilizer is using deserve to explain how people got into the current situation, while you’re using it to describe a desirable future situation. The danger is assuming that because we are capable of acting in a way that gives people what they deserve, in all situations someone must already have done so, and everyone must have acted in such a way that they have earned their present circumstances through moral actions.
The concept of “deserve” is only harmful to the extent people apply it to things they don’t in fact deserve. In this respect, it’s no different from the concept of “truth”.
It’s part of a larger pattern of mistaking your interpretations of reality as reality itself. There’s no ephemeral labels floating around that are objectively true—you can’t talk too much, work too hard, or be pathetic. You can only say things that other people would prefer not to hear, do work to the exclusion of other objectives, or be pitied by someone.
There’s no ephemeral labels floating around that are objectively true—you can’t talk too much, work too hard, or be pathetic.
If excessive work causes an overuse injury or illness then “worked too hard” would seem to be a legitimate way to describe reality. (Agree with the other two.)
I agree with that. I also suspect many people treat deserving of rewards and deserving of punishments as separate concepts. As a result they might reject one while staying attached to the other and become even more confused.