He says

Here we are treating morality as a 1-place function. It does not accept a person as an argument, spit out whatever cognitive algorithm they use to choose between actions, and then apply that algorithm to the situation at hand.
but also says
If I define rightness to include the space of arguments that move me, then when you and I argue about what is right, we are arguing our approximations to what we would come to believe if we knew all empirical facts and had a million years to think about it—and that might be a lot closer than the present and heated argument. Or it might not.
The thrust of his argument is that for any given being, ‘right’ corresponds to some implicit function, which does not depend on who is performing the action. That function, however, may differ for different beings. So Right_human is not guaranteed to be well-defined, but Right_topynate is.
I don’t disagree. I would only add that CEV requires a large degree of agreement among people’s implicit Right_x functions. Hence my saying Right_human.
Well, I might disagree. Right_topynate isn’t guaranteed to be well-defined either, but it’s more likely to be than Right_human.