It seems clear from the metaethics posts that if a powerful alien race comes along and converts humanity into paperclip-maximizers, such that making many paperclips comes to be right_human, EY would say that making many paperclips doesn’t therefore become right.
So it seems clear that, at least under some circumstances, “wrong” and “wrong_human” don’t mean the same thing for EY, and that at least sometimes EY would say that the answer to “is X right or wrong?” doesn’t depend on what humans happen to want that day.
Now, if by “wrong_human” you don’t mean what humans would consider wrong on the day you evaluate it, but rather what humans consider wrong today, then all of that is irrelevant to your claim.
In that case, yes, maybe you’re right that what you mean by “wrong_human” is also what EY means by “wrong.” But I still wouldn’t expect him to endorse the idea that what’s wrong or right depends in any way on what agents happen to prefer.
It seems clear from the metaethics posts that if a powerful alien race comes along and converts humanity into paperclip-maximizers, such that making many paperclips comes to be right_human
No one can change right_human; it’s a specific utility function. You can change the utility function that humans implement, but you can’t change right_human itself. That would be like changing e^x, or the number 2, into something else (a toy sketch below tries to make this concrete). In other words, you’re right about what the metaethics posts say, and that’s what I’m saying too.
edit: or what jimrandomh said (I didn’t see his comment before I posted mine)
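To make the “fixed function” point concrete, here is a minimal Python sketch. Everything in it (the names, the toy utility functions, the world representation) is my own illustrative assumption, not anything from the posts; the only point it encodes is that rewriting the utility function a human implements leaves right_human itself untouched.

```python
# Toy illustration (hypothetical names): right_human is a fixed function,
# like e^x or the constant 2. Aliens can overwrite the utility function a
# human *implements*, but the function itself is unchanged.

def right_human(world):
    """A fixed, hypothetical utility function over world-states."""
    return world.get("flourishing", 0) - world.get("suffering", 0)

def paperclip_utility(world):
    """The utility function the aliens install instead."""
    return world.get("paperclips", 0)

class Human:
    def __init__(self):
        # What this agent actually optimizes; this is what the aliens can change.
        self.implemented_utility = right_human

alice = Human()
alice.implemented_utility = paperclip_utility  # the aliens' modification

world = {"flourishing": 3, "suffering": 1, "paperclips": 100}
print(right_human(world))                # 2   -- unchanged by the modification
print(alice.implemented_utility(world))  # 100 -- what modified-Alice now pursues
```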
What if we use ‘human’ as a rigid designator for unmodified-human? Then if aliens convert people into paperclip-maximizers, they’re no longer human, so human_right no longer applies to them, while human_right itself remains unchanged.
human_right still applies to them in the sense that they still should do what’s human_right. That’s the definition of should. (Remember, should refers to a specific set of terminal values, the ones humans happen to have, called human_right.) However, these modified humans, much like Clippy, don’t care about human_right and so won’t be motivated to act on it (except insofar as doing so helps make paperclips).
I’m not sure whether I’m disagreeing with you, because it’s a little ambiguous how you used the word “applies.” If you mean that the modified humans don’t care about human_right anymore, I agree. If you mean that the modified humans shouldn’t care about human_right, then I disagree.
I’m not sure why it’s necessary to use ‘should’ to mean morally_should; it could just be used to mean decision-theoretic_should. For example, if you’re asked what a chess-playing program should do to win a particular game, you could give a list of moves it should make. And when a human asks what they should do about a moral question, you can first use the human_right function to determine the desired state of the world that they want to achieve, and then ask what you should do (as in decision-theoretic_should, i.e. which moves/steps you need to execute, in analogy to the chess program) to bring about that state. Thus morality is contained within the human_right function, and there’s no confusion over the meaning of ‘should.’
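Here is a minimal Python sketch of that separation, assuming a toy world model and a toy human_right function of my own invention (none of this is from EY’s posts): the planner contains no moral content at all; it just maximizes whatever value function it is handed, the way a chess engine maximizes its evaluation function, and all of the morality lives in human_right.

```python
# Toy illustration (hypothetical names): "should" as plain decision theory.
# The planner is value-neutral; the moral content sits entirely in the
# value function passed to it.

def human_right(world):
    """Hypothetical fixed value function over world-states."""
    return world.get("flourishing", 0) - world.get("suffering", 0)

def outcome(world, action):
    """Toy world model: the state of the world after taking an action."""
    new_world = dict(world)
    if action == "help":
        new_world["flourishing"] = new_world.get("flourishing", 0) + 1
    elif action == "harm":
        new_world["suffering"] = new_world.get("suffering", 0) + 1
    return new_world  # "wait" and unknown actions leave the world as-is

def decision_theoretic_should(world, actions, value_function):
    """Generic planner: pick the action whose outcome the given value
    function rates highest (human_right, a chess evaluation, anything)."""
    return max(actions, key=lambda a: value_function(outcome(world, a)))

world = {"flourishing": 2, "suffering": 1}
print(decision_theoretic_should(world, ["help", "harm", "wait"], human_right))
# -> "help"
```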
As long as you can keep the terms straight, sure. EY’s argument was that using “should” in that sense makes it easier to make mistakes related to relativism.
OK. At this point I must admit I’ve lost track of why these various suggestively named utility functions are of any genuine interest, so I should probably leave it there. Thanks for clarifying.
It seems clear from the metaethics posts that if a powerful alien race comes along and converts humanity into paperclip-maximizers, such that making many paperclips comes to be right_human, EY would say that making many paperclips doesn’t therefore become right.
In that case, we would draw a distinction between right_unmodified_human and right_modified_human, and “right” would refer to the former.