Okay, on re-reading your post, I can be more specific. I think you make good points (obviously, because of the similarity with my article), and it would probably have been well-received if submitted here in early ’09. However, in places you retread ground that has been discussed before, without reference to the existing discussions and concepts:
The apparent contradiction in this case results from thinking about beliefs and actions as though they were separate. Arriving at a belief is an action in itself, one which can have effects on utility. One cannot, therefore, arrive at a belief about utility without considering the effects on utility that holding that belief would have. If arriving at the belief “actions are justified by their effect on utility” doesn’t maximize utility, then you shouldn’t arrive at that belief.
Here you’re describing what Wei Dai calls “computational/logical consequences” of a decision in his UDT article.
This rule requires you to hold whatever beliefs will (conditional upon them being held) lead to the best results – even when the actions those beliefs produce don’t, in themselves, maximize utility.
Here you’re describing EY’s TDT algorithm.
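To make the contrast concrete, here is a minimal Python sketch of Newcomb’s problem (the payoffs are the standard ones, but the function names and the framing in terms of “dispositions” are my own illustration, not taken from either post):

```python
# Newcomb's problem: an accurate predictor fills box B with $1,000,000
# only if it predicts you will take box B alone; box A always holds $1,000.
# Because the predictor simulates your decision procedure, the belief
# (disposition) you adopt has consequences before you ever act.

def payoff(disposition):
    """Payoff assuming the predictor correctly anticipates `disposition`."""
    box_b = 1_000_000 if disposition == "one-box" else 0
    if disposition == "one-box":
        return box_b            # take box B only
    return 1_000 + box_b        # take both boxes

# A purely causal evaluation holds the boxes' contents fixed and so always
# favors two-boxing; scoring whole dispositions, prediction included,
# favors one-boxing.
best = max(["one-box", "two-box"], key=payoff)
print(best, payoff(best))  # -> one-box 1000000
```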
The applied morality becomes deontological, in the sense that actions are judged not by their effect on utility but by their adherence to the pre-set principle.
The label of deontological doesn’t quite fit here, as you don’t advocate adhering to a set of categorical “don’t do this” rules (as would be justified in a “running on corrupted hardware” case), but rather advocate considering a certain type of impact your decision has on the world, which itself determines which rules to follow.
Finally, I think you should have clarified that the relationship between your decision to (not) cheat and others’ decisions is not a causal one (though it is still sufficient to motivate your decision).
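A toy expected-utility calculation may make the acausal point concrete. All numbers below are illustrative assumptions of mine (the fraction of relevantly similar reasoners, the private gain from cheating, the cost of widespread cheating); the point is only that your choice stands in for the choices of everyone who decides the way you do:

```python
# Your decision to cheat doesn't cause anyone else to cheat, but agents
# running a decision procedure relevantly similar to yours will output
# whatever you output, so choosing is effectively choosing for the class.

SIMILAR_FRACTION = 0.3    # assumed share of agents who decide like you
GAIN_IF_ALONE = 5         # assumed private benefit of cheating
LOSS_IF_WIDESPREAD = 20   # assumed cost to you as cheating becomes common

def expected_utility(choice):
    # If you choose "cheat", so does everyone correlated with you,
    # and the harm scales with how many of them there are.
    correlated_cheaters = SIMILAR_FRACTION if choice == "cheat" else 0.0
    private_gain = GAIN_IF_ALONE if choice == "cheat" else 0
    return private_gain - LOSS_IF_WIDESPREAD * correlated_cheaters

for choice in ("cheat", "abstain"):
    print(choice, expected_utility(choice))
# cheat   -> 5 - 20 * 0.3 = -1.0
# abstain -> 0.0, so abstaining wins despite the lack of causal influence
```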
I don’t think you deserved −7 (though I didn’t vote you up myself). In particular, I stand by my initial comment that, contra Vladimir, you show sufficient assimilation of the value complexity and meta-ethics sequences. I think a lot of the backlash is just from the presentation: not the format or the writing, but the need to adapt it to the terminology and insights already presented here. And I agree that you’re justified in not being convinced you’re wrong.
Hope that helps.
EDIT: You might also like this recent discussion about real-world Newcomblike problems, which I intend to come back to more rigorously.
Very much, thank you. Your feedback has been a great help.
Given that others arrived at some of these conclusions before me, I can see why there would be disapproval, though I can hardly feel disappointed to have independently discovered the same answers. I think I’ll research the various models more thoroughly, refine my wording (I agree with you that using the term ‘deontology’ was a mistake), and eventually make a more complete and more sophisticated second attempt at treating morality as a decision-theory problem.
Great, glad to hear it! Looking forward to your next submission on this issue.