I’m not sure why it’s necessary to use ‘should’ to mean morally_should; it could just be used to mean decision-theoretic_should. E.g. if you’re asked what a chess-playing computer program should do to win a particular game, you could give a list of moves it should make. And when a human asks what they should do about a moral question, you can first use the human_right function to determine the desired state of the world that they want to achieve, and then ask what you should do (in the decision-theoretic_should sense, i.e. what moves/steps you need to execute, in analogy to the chess program) to bring about that state. Thus morality is contained within the human_right function, and there’s no confusion over the meaning of ‘should’.
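A toy sketch of the decomposition I mean, in Python. Everything here is a hypothetical stand-in I made up for illustration: `human_right` (the function containing all the moral content), `plan` (a generic means-ends planner, playing the role of the chess engine’s move search), and dicts as world states.

```python
def human_right(situation: dict) -> dict:
    """Hypothetical morality function: map a situation to the desired
    state of the world. All moral content lives here (toy preference)."""
    goal = dict(situation)
    goal["hungry_people"] = 0
    return goal

def plan(current: dict, goal: dict) -> list[str]:
    """Decision-theoretic 'should': compute the steps that turn the
    current state into the goal state. No moral content here, just
    means-ends reasoning, like listing the moves needed to win at chess."""
    return [f"change {key}: {current.get(key)} -> {wanted}"
            for key, wanted in goal.items()
            if current.get(key) != wanted]

def what_should_i_do(situation: dict) -> list[str]:
    # 'should' below is only ever decision-theoretic_should;
    # morality is fully contained in human_right.
    return plan(situation, human_right(situation))

print(what_should_i_do({"hungry_people": 3}))
# ['change hungry_people: 3 -> 0']
```

The point of the split is that `what_should_i_do` never needs a moral sense of ‘should’: it just composes a goal-picking function with a planner.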
As long as you can keep the terms straight, sure. EY’s argument was that using “should” in that sense makes it easier to fall into mistakes related to relativism.