As far as I can tell, we don't disagree on any matter of fact. I agree that we can only optimize our own actions. I agree that other agents won't necessarily find our moral arguments persuasive. I just don't agree that the words "moral" and "ought" should be used the way you use them.
To the greater LW community: Is there some way we can come up with standard terminology for this sort of thing? I myself have moved toward using the terminology used by Eliezer, but not everyone has. Are there serious objections to his terminology and, if so, are there other terminologies you think we should adopt as standard?