Does Eliezer claim that murder is wrong for every agent? I find it highly likely that in certain cases, an agent’s murder of some person will best satisfy that agent’s preferences.
Murder is certainly not wrong_x for every agent x—we can think of an agent with a preference for people being murdered, even itself. However, it is almost always wrong_MattSimpson and (hopefully!) almost always wrong_lukeprog. So it depends on which question you are asking. If you’re asking “is murder wrong_human for every agent?” Eliezer would say yes. If you’re asking “is murder wrong_x for every agent x?” Eliezer would say no.
(I realize it was clear to both you and me which of the two you were asking, but for the benefit of confused readers, I made sure everything was clear)
I would be very surprised if EY gave those answers to those questions.
It seems pretty fundamental to his view of morality that asking about “wrong_human” and “wrong_x” is an important mis-step.
Maybe murder isn’t always wrong, but it certainly doesn’t depend (on EY’s view, as I understand it) on the existence of an agent with a preference for people being murdered (or the absence of such an agent).
Maybe murder isn’t always wrong, but it certainly doesn’t depend (on EY’s view, as I understand it) on the existence of an agent with a preference for people being murdered (or the absence of such an agent).
That’s because for EY, “wrong” and “wrong_human” mean the same thing. It’s semantics. When you ask “is X right or wrong?” in the everyday sense of the term, you are actually asking “is X right_human or wrong_human?” But if murder is wrong_human, that doesn’t mean it’s wrong_clippy, for example. In both cases you are just checking a utility function, but different utility functions give different answers.
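To make the “just checking a utility function” point concrete, here is a minimal sketch; the agents, actions, and numbers are invented placeholders, and the only point is that one evaluation procedure applied to different utility functions returns different verdicts:

```python
# Toy illustration: "wrong_x" as a check against agent x's utility function.
# All values here are made up for the example.

def wrong_for(utility, action, baseline="do_nothing"):
    """Crude stand-in: an action is wrong_x iff agent x's utility
    function ranks it below doing nothing."""
    return utility(action) < utility(baseline)

def u_human(action):                       # rough human-ish values
    return {"murder": -100, "make_paperclips": 0, "do_nothing": 0}[action]

def u_clippy(action):                      # Clippy-ish values
    return {"murder": 1, "make_paperclips": 100, "do_nothing": 0}[action]

print(wrong_for(u_human, "murder"))    # True: murder is wrong_human
print(wrong_for(u_clippy, "murder"))   # False: murder is not wrong_clippy
```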
It seems clear from the metaethics posts that if a powerful alien race comes along and converts humanity into paperclip-maximizers, such that making many paperclips comes to be right_human, EY would say that making many paperclips doesn’t therefore become right.
So it seems clear that at least under some circumstances, “wrong” and “wrong_human” don’t mean the same thing for EY, and that at least sometimes EY would say that “is X right or wrong?” doesn’t depend on what humans happen to want that day.
Now, if by “wrong_human” you don’t mean what humans would consider wrong the day you evaluate it, but rather what is considered wrong by humans today, then all of that is irrelevant to your claim.
In that case, yes, maybe you’re right that what you mean by “wrong_human” is also what EY means by “wrong.” But I still wouldn’t expect him to endorse the idea that what’s wrong or right depends in any way on what agents happen to prefer.
It seems clear from the metaethics posts that if a powerful alien race comes along and converts humanity into paperclip-maximizers, such that making many paperclips comes to be right_human
No one can change right_human; it’s a specific utility function. You can change the utility function that humans implement, but you can’t change right_human. That would be like changing e^x or the number 2 to something else. In other words, you’re right about what the metaethics posts say, and that’s what I’m saying too.
edit: or what jimrandomh said (I didn’t see his comment before I posted mine)
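As a rough illustration of that point (with invented placeholder values): aliens can overwrite the utility function a particular human implements without touching right_human itself.

```python
# Toy illustration: right_human as a fixed function, vs. the (mutable)
# utility function a particular human implements. Values are made up.

RIGHT_HUMAN = {"murder": -100, "make_paperclips": 0}   # fixed, like e**x or 2

class Human:
    def __init__(self):
        self.utility = dict(RIGHT_HUMAN)   # initially implements right_human

    def alien_modification(self):
        # Aliens rewrite what this agent values...
        self.utility = {"murder": 0, "make_paperclips": 100}

h = Human()
h.alien_modification()
print(h.utility["make_paperclips"])     # 100: the modified human now values paperclips
print(RIGHT_HUMAN["make_paperclips"])   # 0:   right_human itself is unchanged
```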
What if we use ‘human’ as a rigid designator for unmodified-human? Then if aliens convert people into paperclip-maximizers, they’re no longer human, hence right_human no longer applies to them, but itself remains unchanged.
right_human still applies to them in the sense that they still should do what’s right_human. That’s the definition of should. (Remember, “should” refers to a specific set of terminal values, those that humans happen to have, called right_human.) However, these modified humans, much like Clippy, don’t care about right_human and so won’t be motivated to act based on right_human (except insofar as it helps make paperclips).
I’m not necessarily disagreeing with you, because it’s a little ambiguous how you used the word “applies.” If you mean that the modified humans don’t care about right_human anymore, I agree. If you mean that the modified humans shouldn’t care about right_human, then I disagree.
I’m not sure why it’s necessary to use ‘should’ to mean morally_should; it could just be used to mean decision-theoretic_should. E.g., if you’re asked what a chess-playing computer program should do to win a particular game, you could give a list of moves it should make. And when a human asks what they should do about a moral question, you can first use the right_human function to determine the desired state of the world that they want to achieve, and then ask what you should do (as in decision-theoretic_should, i.e. what moves/steps you need to execute, in analogy to the chess program) to create this state. Thus morality is contained within the right_human function and there’s no confusion over the meaning of ‘should’.
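A minimal sketch of that split, with an invented toy world model and scoring: right_human evaluates outcomes, and decision-theoretic_should is just a search over action sequences, the same way a chess engine searches over moves.

```python
# Toy illustration of the proposed split: right_human scores world states,
# and "decision-theoretic should" is plain means-end search over actions.
# Everything here is a made-up placeholder.

from itertools import product

def right_human_value(state):
    # Stand-in scoring of a world state under right_human.
    return state.get("people_helped", 0) - 10 * state.get("murders", 0)

ACTIONS = {
    "donate":     lambda s: {**s, "people_helped": s.get("people_helped", 0) + 1},
    "do_nothing": lambda s: s,
    "murder":     lambda s: {**s, "murders": s.get("murders", 0) + 1},
}

def decision_theoretic_should(start_state, horizon=2):
    """Return the action sequence whose resulting state scores highest
    under right_human_value: pure means-end reasoning, like move search."""
    best_plan, best_value = None, float("-inf")
    for plan in product(ACTIONS, repeat=horizon):
        state = start_state
        for name in plan:
            state = ACTIONS[name](state)
        if right_human_value(state) > best_value:
            best_plan, best_value = plan, right_human_value(state)
    return best_plan

print(decision_theoretic_should({}))   # ('donate', 'donate')
```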
As long as you can keep the terms straight, sure. EY’s argument was that using “should” in that sense makes it easier to make mistakes related to relativism.
OK. At this point I must admit I’ve lost track of why these various suggestively named utility functions are of any genuine interest, so I should probably leave it there. Thanks for clarifying.
It seems clear from the metaethics posts that if a powerful alien race comes along and converts humanity into paperclip-maximizers, such that making many paperclips comes to be right_human, EY would say that making many paperclips doesn’t therefore become right.
In that case, we would draw a distinction between right_unmodified_human and right_modified_human, and “right” would refer to the former.
Okay, that makes sense.
Murder as I define it seems universally wrong_victim, but I doubt you could literally replace “victim” with any agent’s name.