Humans trivially do share a utility function, since they change their beliefs consistently in response to argument.
...no offense, but I don’t think that word means what you think it means.
Non-pathological human ethics may or may not ultimately run off some consistent set of intrinsic affective associations. (Whether or not it does more or less reduces to the question of whether CEV is complete, which as I’ve said is currently unknown.) Even if true, this doesn’t imply a shared utility function within any useful domain.
Utility (in its simplest form) is nothing more or less than a preference ordering over some set of possible states; a utility function is one that maps those states onto that preference ordering for a given agent. Between those states and our hypothetical intrinsic associations there are layers upon layers of bias and acculturation, probably enough to be effectively unique to the individual. I'd be very surprised if we could find two people with exactly the same preferences over fully specified future states, though we'd probably find large chunks that looked quite similar.
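For concreteness, the distinction being drawn here — utility as a preference ordering over states, and a utility function as a representation of that ordering for a given agent — can be sketched in a few lines (the states and orderings below are invented purely for illustration):

```python
from itertools import permutations

# Hypothetical fully specified states, purely for illustration.
states = ["status quo", "more leisure", "more wealth"]

# Agent A's preference ordering, most preferred first.
ordering_a = ["more wealth", "more leisure", "status quo"]

# A utility function representing that ordering: any map to numbers that
# preserves the ranking will do; the numbers themselves carry no meaning.
utility_a = {state: rank for rank, state in enumerate(reversed(ordering_a))}

def prefers(utility, x, y):
    """True if the agent ranks state x above state y."""
    return utility[x] > utility[y]

assert prefers(utility_a, "more wealth", "status quo")

# Two agents "share a utility function" only if they induce the same
# ordering over every pair of states -- much stronger than agreeing
# on large chunks of it.
ordering_b = ["more leisure", "more wealth", "status quo"]
utility_b = {state: rank for rank, state in enumerate(reversed(ordering_b))}

shared = all(
    prefers(utility_a, x, y) == prefers(utility_b, x, y)
    for x, y in permutations(states, 2)
)
print(shared)  # False: A and B agree on some pairs but not all
```

Note that the sketch only checks ordinal agreement; nothing here depends on the numeric values, which is exactly why "shared utility function" is a claim about orderings, not about numbers.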
> Non-pathological human ethics may or may not ultimately run off some consistent set of intrinsic affective associations. (Whether or not it does is a question that more or less reduces to the question of whether CEV is complete, which as I’ve said is currently unknown.) If true, this does not demonstrate a shared utility function within some domain. Utility (in its simplest form) is nothing more or less than a preference ordering over some set of possible states, and between those states and our hypothetical intrinsic associations there’s layers upon layers of bias and acculturation, probably enough to be effectively unique to the individual. I’d be very surprised if we could find two people with exactly the same preferences over fully specified future states, though we’d probably find large chunks that looked quite similar.
...huh?
The fact that morality is acted upon in different ways (due to your “layers” or simply mistaken beliefs about the world) doesn’t change the fact that it is there, underneath, and that this is the standard we work by to declare something “good” or “bad”. We aren’t perfect at it, but we can make a reasonable attempt. Just like, say, mathematics, or predicting the movement of planets.
> The fact that morality is acted upon in different ways (due to your “layers” or simply mistaken beliefs about the world) doesn’t change the fact that it is there, underneath, and that this is the standard we work by to declare something “good” or “bad”.
Now we’re getting somewhere.
First, that’s not a utility function; see the edited version of my last comment. We have a tendency around here to use “utility function” as if it describes fundamental moral impulses, but I’d imagine that’s because we like to talk about AIs, for whom such a function can be written explicitly and for whom consistency between agents is no trouble. Neither of those conditions holds true for our messy meat brains.
That being said, I’m afraid the idea that there’s some uniform set of impulses on which all existing moralities are fundamentally based is more an article of faith than a statement of fact, given the present state of knowledge. There’s clearly enough unity there for some moral concepts to be (e.g.) describable in language, but that’s a relatively weak criterion. Pathology gives the idea of strong consistency a lot of trouble, but even if you ignore that, there’s simply not enough evidence to declare that it’s consistent enough to define as a single function covering all normal people; just off the top of my head, for example, it could easily be that parts of it sum as a polynomial, or something similar, for which the coefficients vary somewhat between people or populations.
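The “sum as a polynomial, with coefficients that vary between people” possibility can be made concrete with a toy weighted-sum sketch (all impulse names and numbers below are invented for illustration, not data):

```python
# Scores for a single situation on a few hypothesized shared moral impulses.
situation = {"harm": -0.5, "fairness": 2.0, "loyalty": -1.0}

def moral_value(coefficients, impulse_scores):
    # Same functional form for everyone: a weighted sum of shared impulses.
    # Only the per-person coefficients differ.
    return sum(coefficients[k] * impulse_scores[k] for k in impulse_scores)

# Two "normal" people with the same impulses but different weightings.
person_1 = {"harm": 1.0, "fairness": 1.0, "loyalty": 0.2}
person_2 = {"harm": 1.0, "fairness": 0.3, "loyalty": 1.5}

v1 = moral_value(person_1, situation)  # -0.5 + 2.0 - 0.2 = 1.3
v2 = moral_value(person_2, situation)  # -0.5 + 0.6 - 1.5 = -1.4

# Identical architecture, yet they disagree about whether this is good:
print(v1 > 0, v2 > 0)  # True False
```

On this picture there is a shared *form* but no single shared *function*: the same situation comes out "good" for one person and "bad" for the other, purely because of the coefficients.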
> First, that’s not a utility function; see the edited version of my last comment. We have a tendency around here to use “utility function” as if it describes fundamental moral impulses, but I’d imagine that’s because we like to talk about AIs, for whom such a function can be written explicitly and for whom consistency between agents is no trouble. Neither of those conditions holds true for our messy meat brains.
Fair enough. What term would you prefer? I’ll use “morality” for now.
> Pathology gives the idea a lot of trouble, but even if you ignore that, there’s simply not enough evidence to declare that it’s consistent enough to define as a single function describing the foundational moral sentiments of all normal people.
Quite the opposite: we can see that our morality exists unchanged regardless of beliefs by the fact that there are people who actually do have different moralities. As a vegetarian, I can tell you that a lot of people who believe eating meat is OK do so because they are mistaken about the environment; remove the mistake (by showing them how horrible conditions are in factory farms, for example) and they will see that eating meat is wrong (or at least that factory farming is wrong). If they genuinely didn’t value the pain of animals, say, this would fail. No amount of argument will persuade Clippy that killing people is wrong.
> As a vegetarian, I can tell you that a lot of people who believe eating meat is OK do so because they are mistaken about the environment; remove the mistake (by showing them how horrible conditions are in factory farms, for example) and they will see that eating meat is wrong (or at least that factory farming is wrong). If they genuinely didn’t value the pain of animals, say, this would fail.
You wouldn’t happen to have non-anecdotal evidence that this is actually the case, would you?
What, like a study of people shown images of slaughterhouses or something? Nope. To be honest, that’s kind of a terrible example. Racists work much better.
How about “moral architecture”?

I think I’d agree that most humans share roughly the same set of inputs to that architecture: hit most people on the head, and they’re likely to feel pain; humiliate them, and they’re likely to feel embarrassment. I doubt that the relative weightings of these traits are likely to remain identical between individuals, but if you factor that out I think we have a human commonality that I could get behind.
I suspect we’d differ in our opinion of acculturation’s role in defining certain categories (the pain of animals, for example) as morally significant, though. That strikes me as a level or two above anything I’d be comfortable calling a human universal.
Moral architecture sounds good.

> I think I’d agree that most humans share roughly the same set of inputs to that architecture: hit most people on the head, and they’re likely to feel pain; humiliate them, and they’re likely to feel embarrassment.
I note that humans can empathise with pains they do not themselves feel.
> I suspect we’d differ in our opinion of acculturation’s role in defining certain categories (the pain of animals, for example) as morally significant, though. That strikes me as a level or two above anything I’d be comfortable calling a human universal.
Well, yeah. It’s not the greatest example, I suppose. How about racism? That’s usually my go-to for this sort of thing. I kill Jews because Jews are parasites that undermine civilization; you kill Nazis because they murder innocent people.

EDIT: I’m not actually a Nazi, obviously.