Also “need”. There’s always another option, and pretending sufficiently bad options don’t exist can interfere with expected value estimations.
And “should” in the moralizing sense. Don’t let yourself say “I should do X”. Either do it or don’t. Yeah, you’re conflicted. If you don’t know how to resolve it on the spot, at least be honest and say “I don’t know whether I want X or not X”. As applied to others, don’t say “he should do X!”. Apparently he’s not doing X, and if you’re specific about why, it is less frustrating and effective solutions are more visible. “He does X because it’s clearly in his best interests, even despite my shaming. Oh...”—or again, if you can’t figure it out, be honest about it: “I have no idea why he does X.”
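A rough sketch of the “need” point above, with purely illustrative numbers (the scenario is not from the thread): treating a bad option as nonexistent, rather than merely costly, changes what you should be willing to pay to avoid it.

```python
# Toy example (illustrative numbers only, not from the discussion):
# deciding how much to pay for a flight to a meeting you feel you "need" to attend.

COST_OF_MISSING_MEETING = 2_000  # reschedule, apologize, lose some goodwill

def should_buy_ticket(ticket_price, missing_is_unthinkable):
    if missing_is_unthinkable:
        # "I need to be there": the bad option is pretended away, so any price goes.
        return True
    # Comparison framing: buy only if the ticket costs less than what missing would cost.
    return ticket_price < COST_OF_MISSING_MEETING

print(should_buy_ticket(3_000, missing_is_unthinkable=True))   # True  -> overpays
print(should_buy_ticket(3_000, missing_is_unthinkable=False))  # False
```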
Don’t let yourself say “I should do X”. Either do it or don’t.
That would work nicely if I were so devoid of dynamic inconsistency that “I don’t feel like getting out of bed” would reliably entail “I won’t regret it if I stay in bed”; but as it stands, I sometimes have to tell myself “I should get out of bed” in order to do stuff I don’t feel like doing but I know I would regret not doing.
if you’re specific about why, it is less frustrating
This is a fact about you, not about “should”. If “should” is part of the world, you shouldn’t remove it from your map just because you find other people frustrating.
and effective solutions are more visible.
One common, often effective strategy is to tell people they should do the thing.
if you can’t figure it out, be honest about it “I have no idea why he does X”
The correct response to meeting a child murderer is “No, Stop! You should not do that!”, not “Please explain why you are killing that child.” (also physical force)
This is a fact about you, not about “should”. If “should” is part of the world, you shouldn’t remove it from your map just because you find other people frustrating.
It’s not about having conveniently blank maps. It’s about having more precise maps.
I realize that you won’t be able to see this as obviously true, but I want you to at least understand what my claim is: after fleshing out the map with specific details, your emotional approach to the problem changes and you become aware of new possible actions without removing any old actions from your list of options—and without changing your preferences. Additionally, the majority of the time this happens, “shoulding” is no longer the best choice available.
One common, often effective strategy is to tell people they should do the thing.
Sometimes, sure. I still use the word like that sometimes, but I try to stay aware that it’s shorthand for “you’d get more of what you want if you do” / “I and others will shame you if you don’t”. It’s just that so often that’s not enough.
The correct response to meeting a child murderer is “No, Stop! You should not do that!”, not “Please explain why you are killing that child.” (also physical force)
And this is a good example. “Correct” responses oughtta get good results; what result do you anticipate? Surely not “Oh, sorry, didn’t realize… I’ll stop now”. It sure feels appropriate to ‘should’ here, but that’s a quirk of your psychology that focuses you on one action to the exclusion of others.
Personally, I wouldn’t “should” a murderer any more than I’d “should” a paperclip maximizer. I’d use force, threats of force and maybe even calculated persuasion. Funny enough, were I to attempt to therapy a child murderer (and bold claim here—I think I could do it), I’d start with “so why do ya kill kids?”
Mostly, the result I anticipate from “should”ing a norm-violator is that other members of my tribe in the vicinity will be marginally more likely to back me up and enforce the tribal norms I’ve invoked by “should”ing. That is, it’s a political act that exerts social pressure. (Among the tribal members who might be affected by this is the norm-violator themselves.)
Alternative formulas like “you’ll get more of what you want if you don’t do that!” or “I prefer you not do that!” or “I and others will shame you if you do that!” don’t seem to work as well for this purpose.
But of course you’re correct that some norm-violators don’t respond to that at all, and that some norm-violations (e.g. murder) are sufficiently problematic that we prefer the violator be physically prevented from continuing the violation.
“Should” is not part of any logically possible territory, in the moral sense at least. Objective morality is meaningless, and subjective morality reduces to preferences. It’s a distinctly human invention, and its meaning shifts as the user desires. Moral obligations are great for social interactions, but they don’t reflect anything deeper than an extension of tribal politics. Saying “you should x” (in the moral sense of the word) is just equivalent to saying “I would prefer you to x”, but with bonus social pressure.
Just because it is sometimes effective to try to impose a moral obligation does not mean that doing so is always, or even usually, the most effective method available. Thinking about the actual cause of the behavior, and responding to that, will be far, far more effective.
Next time you meet a child murderer, you just go and keep on telling him he shouldn’t do that. I, on the other hand, will actually do things that might prevent him from killing children. This includes physical restraint, murder, and, perhaps most importantly, asking why he kills children. If he responds “I have to sacrifice them to the magical alien unicorns or they’ll kill my family”, then I can explain to him that the magical alien unicorns don’t exist and solve the problem. Or I can threaten his family myself, which might for many reasons be more reliable than physical solutions. If he has empathy, I can talk about how the parents must feel, or the kids themselves. If he has self-preservation instincts, then I can point out the risks of getting caught. In the end, maybe he just values dead children in the same way I value children continuing to live, and my only choice is to fight him. But probably that’s not the case, and if I don’t ask/observe to figure out what his motivations are, I’ll never know how to stop him when physical force isn’t an option.
Saying “you should x” (in the moral sense of the word) is just equivalent to saying “I would prefer you to x”, but with bonus social pressure.
I really think this is a bad summary of how moral injunctions act. People often feel a conflict, for example, between “I should X” and “I would prefer to not-X”. If a parent has to choose between saving their own child and a thousand other children, they may very well prefer to save their own child, but recognize that morality dictates they should save the thousand other children.
My own guess about the connection between morality and preferences is that morality is an unconscious estimation of our preferences about a situation, while trying to remove the bias of our personal stakes in it. (E.g. the parent recognizes that if their own child wasn’t involved, if they were just hearing about the situation without personal stakes in it, they would prefer that a thousand children be saved rather than only one.)
If my guess is correct it would also explain why there’s disagreement about whether morality is objective or subjective (morality is a personal preference, but it’s also an attempt to remove personal biases—it’s by itself an attempt to move from subjective preferences to objective preferences).
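If it helps to make that guess concrete, here is a minimal sketch (hypothetical weights, nothing the commenter endorsed) of “moral judgment as the same preference with the personal-stakes term removed”:

```python
# Toy model (hypothetical numbers): the parent's choice between saving their own
# child and saving a thousand others.

def personal_preference(outcome):
    # Raw preference: heavily weights whether *my* child is among the saved.
    return outcome["children_saved"] + 10_000 * outcome["my_child_saved"]

def debiased_preference(outcome):
    # "Moral" estimate: the same evaluation with the personal-stakes term dropped,
    # as if hearing about the situation with no personal stake in it.
    return outcome["children_saved"]

save_own  = {"children_saved": 1,    "my_child_saved": 1}
save_many = {"children_saved": 1000, "my_child_saved": 0}

# The raw preference favors saving one's own child...
assert personal_preference(save_own) > personal_preference(save_many)
# ...while the de-biased estimate favors the thousand, reproducing the felt
# conflict between "I would prefer X" and "I should Y".
assert debiased_preference(save_many) > debiased_preference(save_own)
```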
This is because people are bad at making decisions, and have not gotten rid of the harmful concept of “should”. The original comment on this topic was claiming that “should” is a bad concept; instead of weighing “I should x” or “I shouldn’t x” on top of “I want to x” or “I don’t want to x”, just look at want versus don’t-want. “I should x” doesn’t help you resolve “do I want to x”, and the second question is the only one that counts.
I think that your idea about morality is simply expressing part of the framework of many moral systems. That is not a complete view of what morality means to people; it’s simply a part of many instantiations of morality. I agree that such thinking is the cause of many moral conflicts of the form “I should x but I want to y”, stemming from the idea (perhaps subconscious) that they would tell someone else to x instead of y, and people prefer not to defect in those situations. Selfishness is seen as a vice, perhaps for evolutionary reasons (see all the data on viable cooperation in the prisoner’s dilemma, etc.), and so people feel pressure not to cheat the system, even though they want to. This is not behavior that a rational agent should generally want! If you are able to get rid of your concept of “should”, you will be free from that type of trap unless it is in your best interests to remain there.
Our moral intuitions do not exist for good reasons. “Fairness” and its ilk are all primarily political tools; moral outrage is a particularly potent tool when directed at your opponent. Just because we have an intuition does not make that intuition meaningful. Go for a week forcing yourself to taboo “morality”, “should”, and everything like that. When you make a decision, make a concerted effort to ignore the part of your brain saying “you should x because it’s right”, and only listen to your preferences (note: you can have preferences that favor other people!). You should find that your decisions become easier and that you prefer those decisions to any you might have otherwise made. It also helps you to understand that you’re allowed to like yourself more than you like other people.
Objective morality is meaningless, and subjective morality reduces to preferences.
These aren’t the only two possibilities. Lots of important aspects of the world are socially constructed. There’s no objective truth about the owner of a given plot of land, but it’s not purely subjective either—and if you don’t believe me, try explaining it to the judge if you are arrested for trespassing.
Social norms about morality are constructed socially, and are not simply the preferences or feelings of any particular individual. It’s perfectly coherent for somebody to say “society believes X is immoral but I don’t personally think it’s wrong”. I think it’s even coherent for somebody to say “X is immoral but I intend to do it anyway.”
You’re sneaking in connotations. “Morality” has a much stronger connotation than “things that other people think are bad for me to do.” You can’t simply define the word to mean something convenient, because the connotations won’t go away. Morality is generally not understood to be a social construct. Is that social construct the thing many people are actually imagining when they talk about morality? Quite possibly. But those same people would tend to disagree with you if you made that claim to them; they would say that morality is just doing the right thing, and that if society said something different then morality wouldn’t change.
Also, the land ownership analogy has no merit. Ownership exists as an explicit social construct, and I can point you to all sorts of evidence in the territory that shows who owns what. Social constructs about morality exist, but morality is not understood to be defined by those constructs. If I say “x is immoral” then I haven’t actually told you anything about x. In normal usage I’ve told you that I think people in general shouldn’t do x, but you don’t know why I think that unless you know my value system; you shouldn’t draw any conclusions about whether you think people should or shouldn’t x, other than due to the threat of my retaliation.
“Morality” in general is ill-defined, and often intuitions about it are incoherent. We make much, much better decisions by throwing away the entire concept. Saying “x is morally wrong” or “x is morally right” doesn’t have any additional effect on our actions, once we’ve run the best preference algorithms we have over them. Every single bit of information contained in “morally right/wrong” is also contained in our other decision algorithms, often in a more accurate form. It’s not even a useful shorthand; getting a concrete right/wrong value, or even a value along the scale, is not a well-defined operation, and thus the output does not have a consistent effect on our actions.
My original point was just that “subjective versus objective” is a false dichotomy in this context. I don’t want to have a big long discussion about meta-ethics, but, descriptively, many people do talk in a conventionalist way about morality or components of morality, and thinking of it as a social construction is handy in navigating the world.
Turning now to the substance of whether moral or judgement words (“should”, “ought”, “honest”, etc.) are bad concepts:
At work, we routinely have conversations about “is it ethical/honest to do X”, or “what’s the most ethical way to deal with circumstance Y”. And we do not mean “what is our private preference about outcomes or rules”—we mean something imprecise but more like “what would our peers think of us if they knew” or “what do we think our peers ought to think of us if they knew”. We aren’t being very precise about how much is objective, subjective, and socially constructed, but I don’t see that we would gain from trying to speak with more precision than our thoughts actually have.
Yes, these terms are fuzzy and self-referential. Natural language often is. Yes, using ‘ethical’ instead of other terms smuggles in a lot of connotation. That’s the point! Vagueness with some emotional shading and implication is very useful linguistically and I think cognitively.
The original topic was “harmful” concepts, I believe, and I don’t think all vagueness is harmful. Often the imprecision is irrelevant to the actual communication or reasoning taking place.
The accusation of being bad concepts was not because they are vague, but because they lead to bad modes of thought (and because they are wrong concepts, in the manner of a wrong question). Being vague doesn’t protect you from being wrong; you can talk all day about “is it ethical to steal this cookie”, but you are wasting your time. Either you’re actually referring to specific concepts that have names (will other people perceive this as ethically justified?) or you’re babbling nonsense. Just use basic consequentialist reasoning and skip the whole ethics part. You gain literally nothing from discussing “is this moral”, unless what you’re really asking is “what are the social consequences” or “will person x think this is immoral” or whatever. It’s a dangerous habit epistemically and serves no instrumental purpose.
“Should” is not part of any logically possible territory, in the moral sense at least. Objective morality is meaningless, and subjective morality reduces to preferences.
Subjectivity is part of the territory.
Things encoded in human brains are part of the territory; but this does not mean that anything we imagine is in the territory in any other sense. “Should” is not an operator that has any useful reference in the territory, even within human minds. It is confused, in the moral sense of “should” at least. Telling anyone “you shouldn’t do that” when what you really mean is “I want you to stop doing that” isn’t productive. If they want to do it, then they don’t care what they “should” or “shouldn’t” do unless you can explain to them why they in fact do or don’t want to do that thing. In the sense that “should do x” means “on reflection, would prefer to do x”, it is useful. The farther you move from that, the less useful it becomes.
Telling anyone “you shouldn’t do that” when what you really mean is “I want you to stop doing that” isn’t productive.
But that’s not what they mean, or at least not all that they mean.
Look, I’m a fan of Stirner and a moral subjectivist, so you don’t have to explain the nonsense people have in their heads with regard to morality to me. I’m on board with Stirner in considering the world populated with fools in a madhouse, who only seem to go about free because their asylum takes in so wide a space.
But there are different kinds of preferences, and moral preferences have different implications than our preferences for shoes and ice cream. It’s handy to have a label to separate those out, and “moral” is the accurate one, regardless of the other nonsense people have in their heads about morality.
I think that claiming that is just making the confusion worse. Sure, you could claim that our preferences about “moral” situations are different from our other preferences; but the very feeling that makes them seem different at all stems from the core confusion! Think very carefully about why you want to distinguish between these types of preferences. What do you gain, knowing something is a “moral” preference (excluding whatever membership defines the category)? Is there actually a cluster in thingspace around moral preferences, which is distinctly separate from the “preferences” cluster? Do moral preferences really have different implications than preferences about shoes and ice cream? The only thing I can imagine is that when you phrase an argument to humans in terms of morality, you get different responses than to preferences (“I want Greta’s house” vs. “Greta is morally obligated to give me her house”). But I can imagine no other way in which the difference could manifest. I mean, a preference is a preference is a term in a utility function. Mathematically they’d better all work the same way or we’re gonna be in a heap of trouble.
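A minimal sketch of the “a preference is a preference is a term in a utility function” point, assuming a toy additive utility function (the weights are made up): the math treats a “moral” term and an ice-cream term identically, whatever the psychology around them.

```python
# Toy additive utility function (made-up weights): moral and mundane preferences
# enter as terms of exactly the same type.

PREFERENCES = {
    "eat_chocolate_ice_cream": 2.0,   # mundane preference
    "strangers_not_harmed":    50.0,  # "moral" (other-regarding) preference
}

def utility(world_state):
    # world_state maps each proposition to True/False.
    return sum(w for prop, w in PREFERENCES.items() if world_state.get(prop, False))

print(utility({"eat_chocolate_ice_cream": True, "strangers_not_harmed": True}))   # 52.0
print(utility({"eat_chocolate_ice_cream": True, "strangers_not_harmed": False}))  # 2.0
```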
but the very feeling that makes them seem different at all stems from the core confusion!
I don’t think moral feelings are entirely derivative of conceptual thought. Like other mammals, we have pattern matching algorithms. Conceptual confusion isn’t what makes my ice cream preferences different from my moral preferences.
Is there a behavioral cluster about “moral”? Sure.
Do moral preferences really have different implications than preferences about shoes and ice cream?
How many people are hated for what ice cream they eat? For their preference in ice cream, even when they don’t eat it? For their tolerance of a preference in ice cream in others?
Not many that I see. So yeah, it’s really different.
I mean, a preference is a preference is a term in a utility function.
And matter is matter, whether alive or dead, whether your shoe or your mom.
I sometimes have to tell myself “I should get out of bed” in order to do stuff I don’t feel like doing but I know I would regret not doing.
This John Holt quote is about exactly this.
morality is an unconscious estimation of our preferences about a situation, while trying to remove the bias of our personal stakes in it
That’s a good theory.
Also “need”. There’s always another option
I can’t remember where I heard the anecdote, but I remember some small boy discovering the power of “need” with “I need a cookie!”.
I think any correct use of “need” is either implicitly or explicitly a phrase of the form “I need X (in order to do Y)”.