First, that’s not a utility function; see the edited version of my last comment. We have a tendency around here to use “utility function” as if it describes fundamental moral impulses, but I’d imagine that’s because we like to talk about AIs, for whom such a function can be written explicitly and for whom consistency between agents is no trouble. Neither of those conditions holds true for our messy meat brains.
Fair enough. What term would you prefer? I’ll use “morality” for now.
Pathology gives the idea a lot of trouble, but even if you ignore that, there’s simply not enough evidence to declare it consistent enough to be defined as a single function describing the foundational moral sentiments of all normal people.
Quite the opposite: we can see that our morality exists unchanged regardless of beliefs from the fact that many people who seem to have different moralities really just hold different factual beliefs. As a vegetarian, I can tell you that a lot of people who believe eating meat is OK do so because they are mistaken about the environment; remove the mistake (by showing them how horrible conditions are in factory farms, for example) and they will see that eating meat is wrong (or at least that factory farming is wrong). If they genuinely didn’t value the pain of animals, say, this would fail. No amount of argument will persuade Clippy that killing people is wrong.
As a vegetarian, I can tell you that a lot of people who believe eating meat is OK do so because they are mistaken about the environment; remove the mistake (by showing them how horrible conditions are in factory farms, for example) and they will see that eating meat is wrong (or at least that factory farming is wrong). If they genuinely didn’t value the pain of animals, say, this would fail.
You wouldn’t happen to have non-anecdotal evidence that this is actually the case, would you?
What, like a study of people shown images of slaughterhouses or something? Nope. To be honest, that’s kind of a terrible example. Racists work much better.
I think I’d agree that most humans share roughly the same set of inputs to that architecture: hit most people on the head, and they’re likely to feel pain; humiliate them, and they’re likely to feel embarrassment. I doubt that the relative weightings of these inputs remain identical between individuals, but if you factor that out, I think we have a human commonality I could get behind.
I suspect we’d differ in our opinion of acculturation’s role in defining certain categories (the pain of animals, for example) as morally significant, though. That strikes me as a level or two above anything I’d be comfortable calling a human universal.
I think I’d agree that most humans share roughly the same set of inputs to that architecture: hit most people on the head, and they’re likely to feel pain; humiliate them, and they’re likely to feel embarrassment.
I note that humans can empathise with pains they do not themselves feel.
I suspect we’d differ in our opinion of acculturation’s role in defining certain categories (the pain of animals, for example) as morally significant, though. That strikes me as a level or two above anything I’d be comfortable calling a human universal.
Well, yeah. It’s not the greatest example, I suppose. How about racism? That’s usually my go-to for this sort of thing. I kill Jews because Jews are parasites that undermine civilization; you kill Nazis because they murder innocent people.
How about “moral architecture”?
Moral architecture sounds good.
EDIT: I’m not actually a Nazi, obviously.