All I’m talking about is how I compute my utility function. I’m not postulating that my way of assigning utility lines up with any absolute facts, so I don’t see how the fact that our brains were evolved is relevant.
Is there a specific part of my post that you don’t understand or that you disagree with?
I agree with you; I disagree with Yudkowsky (or don’t understand him). From what you wrote, you seem to disagree with him as well.
Could you link me to the post of Eliezer’s that you disagree with on this? I’d like to see it.
This comment, as I wrote here. I don’t understand this post.
I think there may be a failure to communicate going on because I play Rationalist’s Taboo with words like ‘should’ and ‘right’ when I’m not talking about something technical. In my mind, these words assert the existence of an objective morality, so I wouldn’t feel comfortable using them unless everyone’s utility functions converged to the same morality, which seems very unlikely so far.
So instead I talk about world-states that my utility function assigns utility to. What I think Eliezer is trying to get at in No License To Be Human is that you shouldn’t be a moral relativist (for the sake of not rendering your stated utility function inconsistent with your emotions), and that you should pursue your utility function instead of wireheading your brain to make it feel like you’re creating utility.
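To make the wireheading distinction concrete, here is a toy sketch of my own (not anything from Eliezer’s post; the facts and names in it are made up): a utility function over world-states scores how the world actually is, while a wireheaded one scores only how the world seems to the agent.

```python
# Toy illustration (my own sketch, hypothetical values): utility over
# world-states vs. utility over the agent's perception of the world.

# A world-state as a dict of made-up facts.
world = {"people_suffering": 10, "agent_believes_suffering_is_zero": False}

def utility_over_world_states(state):
    """Assigns utility to how the world actually is."""
    return -state["people_suffering"]

def wireheaded_utility(state):
    """Assigns utility to how the world seems to the agent."""
    return 0 if state["agent_believes_suffering_is_zero"] else -state["people_suffering"]

# "Wireheading" changes only the agent's belief, not the world itself.
wireheaded_world = dict(world, agent_believes_suffering_is_zero=True)

print(utility_over_world_states(world), utility_over_world_states(wireheaded_world))  # -10 -10
print(wireheaded_utility(world), wireheaded_utility(wireheaded_world))                # -10 0
```

On the first function, rewiring your own beliefs gains you nothing; on the second, it looks like a win even though the world is unchanged. That is the sense in which I read “pursue your utility function instead of wireheading your brain.”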
I think that I’ve interpreted this correctly, but I’d appreciate Eliezer telling me whether I have or not.