Your objection and its evident support by the community are noted, and I have therefore deleted the post. I will read further on decision theory and its implications, as that seems to be a likely cause of the error.
However, I have read the meta-ethics sequence, and some of Eliezer’s other posts on morality, and found them unsatisfactory: they seemed to me to presume that morality is something you should have regardless of the reasons for it, rather than seriously questioning those reasons.
On the point of complexity of value, I was attempting to use the term ‘utility’ to describe human preferences, which would necessarily take into account complex values. If you could describe why this doesn’t work well, I would appreciate the correction.
That said, I’m not going to argue the point further without doing more research first (and thank you for the links), so this will be my last post on the subject.
they seemed to me to presume that morality is something you should have regardless of the reasons for it, rather than seriously questioning those reasons.
One thing to consider: Why do you need a reason to be moral/altruistic but not a reason to be selfish? (Or, if you do need a reason to be selfish, where does the recursion end, when you need to justify every motive in terms of another?)
On the topic of these decision theories, you might get a lot from the second half of Gary Drescher’s book Good and Real. His take isn’t quite the same thing as TDT or UDT, but it’s on the same spectrum, and the presentation is excellent.