Things encoded in human brains are part of the territory; but this does not mean that anything we imagine is in the territory in any other sense. “Should” is not an operator that has any useful reference in the territory, even within human minds. At least in the moral sense, “should” is confused. Telling anyone “you shouldn’t do that” when what you really mean is “I want you to stop doing that” isn’t productive. If they want to do it, then they don’t care what they “should” or “shouldn’t” do unless you can explain to them why they in fact do or don’t want to do that thing. In the sense that “should do x” means “on reflection would prefer to do x”, it is useful. The further you move from that, the less useful it becomes.
Telling anyone “you shouldn’t do that” when what you really mean is “I want you to stop doing that” isn’t productive.
But that’s not what they mean, or at least not all that they mean.
Look, I’m a fan of Stirner and a moral subjectivist, so you don’t have to explain the nonsense people have in their heads with regard to morality to me. I’m on board with Stirner in considering the world to be populated with fools in a madhouse, who only seem to go about free because their asylum takes in so wide a space.
But there are different kinds of preferences, and moral preferences have different implications than our preferences for shoes and ice cream. It’s handy to have a label to separate those out, and “moral” is the accurate one, regardless of the other nonsense people have in their heads about morality.
I think that claiming that is just making the confusion worse. Sure, you could claim that our preferences about “moral” situations are different from our other preferences; but the very feeling that makes them seem different at all stems from the core confusion! Think very carefully about why you want to distinguish between these types of preferences. What do you gain by knowing something is a “moral” preference (excluding whatever membership defines the category)? Is there actually a cluster in thingspace around moral preferences, one distinctly separate from the “preferences” cluster? Do moral preferences really have different implications than preferences about shoes and ice cream? The only thing I can imagine is that when you phrase an argument to humans in terms of morality, you get different responses than to preferences (“I want Greta’s house” vs. “Greta is morally obligated to give me her house”). But I can imagine no other way in which the difference could manifest. I mean, a preference is a preference is a term in a utility function. Mathematically they’d better all work the same way or we’re gonna be in a heap of trouble.
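To make the “term in a utility function” point concrete, here’s a minimal sketch; the terms, weights, and outcome features are purely hypothetical illustrations, not a claim about anyone’s actual values:

```python
# Minimal sketch: a utility function treats "moral" and mundane
# preferences as interchangeable weighted terms. All names and
# weights below are hypothetical.

def utility(outcome):
    # Each term maps an outcome feature to a real number; nothing in
    # the arithmetic marks one term as "moral" and another as not.
    terms = {
        "ice_cream_flavor": 1.0 if outcome.get("flavor") == "chocolate" else 0.0,
        "owns_good_shoes": 0.5 if outcome.get("shoes") else 0.0,
        "no_one_is_harmed": 10.0 if outcome.get("harm") == 0 else 0.0,
    }
    return sum(terms.values())

# Outcomes differing in both mundane and "moral" features are ranked
# by the same summation; neither kind of term gets special machinery.
a = {"flavor": "chocolate", "shoes": True, "harm": 0}
b = {"flavor": "vanilla", "shoes": False, "harm": 3}
assert utility(a) > utility(b)
```

Whatever extra weight a “moral” term carries shows up as a larger coefficient, not as a different kind of mathematics.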
but the very feeling that makes them seem different at all stems from the core confusion!
I don’t think moral feelings are entirely derivative of conceptual thought. Like other mammals, we have pattern-matching algorithms. Conceptual confusion isn’t what makes my ice cream preferences different from my moral preferences.
Is there a behavioral cluster about “moral”? Sure.
Do moral preferences really have different implications than preferences about shoes and ice cream?
How many people are hated for what ice cream they eat? For their preference in ice cream, even when they don’t eat it? For their tolerance of a preference in ice cream in others?
Not many that I see. So yeah, it’s really different.
I mean, a preference is a preference is a term in a utility function.
And matter is matter, whether alive or dead, whether your shoe or your mom.