Your article is an excellent one, and makes many of the same points I tried to make here.
Specifically,
...in Dilemma B, an ideal agent will recognize that their decision to pick their favorite ice cream at the expense of another person suggests that others in the same position will do (and have done) likewise, for the same reason.
is the same idea I was trying to express with the ‘cheating student’ example and then generalized in the final part of the post. Likewise, the idea of Parfitian-filtered decision theory seems essentially the same as the concept in my post of ideally rational agents adopting decision theories that make them consciously ignore their goals in order to achieve them better. (In fact, I was planning to include in my next post how this sort of morality, functionally applied, solves problems like Parfit’s Hitchhiker.)
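To sketch what I mean by “functionally applied” (a toy illustration of my own; the payoff numbers are arbitrary assumptions, not anything from either post): an agent whose decision procedure commits to paying the driver does better than one that re-evaluates the act once it is already in town.

```python
# Toy model of Parfit's Hitchhiker; the numbers are illustrative assumptions.
# A perfectly reliable driver rescues only agents whose decision procedure
# would pay the $100 once safely in town.

RESCUED_UTILITY = 1_000   # value of not being left in the desert
PAYMENT_COST    = 100     # cost of paying the driver afterwards

def outcome(would_pay_in_town: bool) -> int:
    """Utility an agent receives, given what its own procedure would do later."""
    driver_predicts_payment = would_pay_in_town   # perfect-predictor assumption
    if not driver_predicts_payment:
        return 0                                  # no ride: left in the desert
    utility = RESCUED_UTILITY
    if would_pay_in_town:
        utility -= PAYMENT_COST
    return utility

# Act-by-act reasoning says that, once in town, paying only loses $100 --
# but an agent whose procedure outputs "don't pay" never gets the ride.
print(outcome(would_pay_in_town=False))  # 0
print(outcome(would_pay_in_town=True))   # 900
```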
Looking back on the replies here (though I have yet to read through all the decision theory posts Vladimir recommended), I realize that I haven’t been convinced that I was wrong, i.e. that there’s a flaw in my theory I haven’t seen; only that the community strongly disapproves. Given that your post and mine share many of the same ideas, and yours is at +21 while mine is at −7, I think the differences are (a) that mine was seen as presumptuous (in the vein of the ‘one great idea’), and (b) that I didn’t communicate clearly enough (partly because I haven’t studied enough of the terminology) or answer enough anticipated objections to overcome the resistance engendered by (a). I also failed to clearly distinguish between this as a normative strategy (one I think ideal game-theoretic agents would follow, and a good reason for consciously deciding to be moral) and as a positive description (the reason actual human beings are moral).
However, I recognize that even though I haven’t yet been convinced of it, there may well be a problem here that I haven’t seen but would see if I knew more about decision theory. If you could explain such a problem to me, I would be genuinely grateful: I want to be correct more than I want my current theory to be right.
Okay, on re-reading your post, I can be more specific. I think you make good points (obviously, given the similarity with my article), and it would probably have been well received had it been submitted here in early ’09. However, there are places where you retread ground that has already been discussed, without referencing the existing discussions and concepts:
The apparent contradiction in this case results from thinking about beliefs and actions as though they were separate. Arriving at a belief is an action in itself, one which can have effects on utility. One cannot, therefore, arrive at a belief about utility without considering the effects on utility that holding that belief would have. If arriving at the belief “actions are justified by their effect on utility” doesn’t maximize utility, then you shouldn’t arrive at that belief.
Here you’re describing what Wei Dai calls “computational/logical consequences” of a decision in his UDT article.
This rule requires you to hold whatever beliefs will (conditional upon them being held) lead to the best results – even when the actions those beliefs produce don’t, in themselves, maximize utility.
Here you’re describing EY’s TDT algorithm.
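(To make the shared point of those two excerpts concrete, here is a toy sketch of my own. The payoff matrix is the standard Prisoner’s Dilemma, and the assumption that the other player runs the same decision procedure is purely illustrative: judged act by act, defection always wins, but judged as a rule held by everyone whose reasoning is correlated with yours, cooperation scores higher.)

```python
# Standard Prisoner's Dilemma payoffs for the row player (textbook values).
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they defect
    ("D", "C"): 5,  # I defect, they cooperate
    ("D", "D"): 1,  # mutual defection
}

def act_by_act_best_response(opponent_move: str) -> str:
    """Holding the opponent's move fixed, defection always pays more."""
    return max(("C", "D"), key=lambda my_move: PAYOFF[(my_move, opponent_move)])

def rule_utility(rule: str) -> int:
    """Illustrative assumption: the other player runs the same procedure,
    so whatever rule I adopt, they adopt too."""
    return PAYOFF[(rule, rule)]

print(act_by_act_best_response("C"), act_by_act_best_response("D"))  # D D
print(rule_utility("D"), rule_utility("C"))                          # 1 3
```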
The applied morality becomes deontological, in the sense that actions are judged not by their effect on utility but by their adherence to the pre-set principle.
The label ‘deontological’ doesn’t quite fit here: you don’t advocate adhering to a set of categorical “don’t do this” rules (as would be justified in a “running on corrupted hardware” case), but rather considering a certain type of impact your decision has on the world, which itself determines which rules to follow.
Finally, I think you should have clarified that the relationship between your decision to (not) cheat and others’ decisions is not a causal one (though it is still sufficient to motivate your decision).
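(A toy numerical version of that distinction, with made-up numbers: holding the other students’ behavior fixed, cheating always looks better, but if their choices are outputs of the same decision procedure as yours, your choice is evidence about theirs without causing them, and not cheating comes out ahead.)

```python
# Illustrative 'cheating student' model; all numbers are my own assumptions.
# Cheating gives a private boost, but widespread cheating devalues the degree.

CHEAT_BOOST = 1.0   # private gain from cheating
DEVALUATION = 3.0   # cost to each student when everyone cheats

def my_utility(i_cheat: bool, fraction_of_others_cheating: float) -> float:
    return (CHEAT_BOOST if i_cheat else 0.0) - DEVALUATION * fraction_of_others_cheating

# Treating the others' behavior as fixed, cheating always looks better:
print(my_utility(True, 0.5) > my_utility(False, 0.5))   # True

# Treating my choice as the output of a procedure the others also run
# (correlation, not causation), "I cheat" implies "they cheat":
print(my_utility(True, 1.0))    # -2.0
print(my_utility(False, 0.0))   # 0.0
```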
I don’t think you deserved −7 (though I didn’t vote you up myself). In particular, I stand by my initial comment that, contra Vladimir, you show sufficient assimilation of the value complexity and meta-ethics sequences. I think a lot of the backlash is just about the presentation: not the format or the writing, but the need to adapt the post to the terminology and insights already presented here. And I agree that you’re justified in not being convinced you’re wrong.
Hope that helps.
EDIT: You might also like this recent discussion about real-world Newcomblike problems, which I intend to come back to more rigorously.
Very much, thank you. Your feedback has been a great help.
Given that others arrived at some of these conclusions before me, I can see why there would be disapproval—though I can hardly feel disappointed to have independently discovered the same answers. I think I’ll research the various models more thoroughly, refine my wording (I agree with you that using the term ‘deontology’ was a mistake), and eventually make a more complete and more sophisticated second attempt at morality as a decision theory problem.
Thanks for the feedback. Unfortunately, the discussion on my article was dominated by a huge tangent on utility functions (a topic I did address, though the tangent was irrelevant to the points I was making). I think the difference was that I plugged my points into the scenarios and literature already discussed here. What bothered me about your article was that it did not carefully define the relationship between your decision theory and the ethic you are arguing for, though I will read it again to give a more precise answer.
Great, glad to hear it! Looking forward to your next submission on this issue.