Ultimately, this path leads to such unenlightening answers as: there is a preference (an order) over the programs the agent could execute (with no uncertainty anywhere), so choice of action is just choice among these programs, and a choice is correct if it accords with that preference -- a “moral” decision. UDT surfaces some mistaken assumptions in the usual decision theories, mainly in situations of unusual levels of craziness, but it doesn’t actually answer any of the questions. It just heals the mistakes and passes the buck to an unspecified “preference”.
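(To make the picture I’m criticizing concrete: once everything is pushed into a preference order over programs, “deciding” is nothing more than picking the maximal element under that order. The sketch below is my own illustration, not anything specified by UDT itself, and the names in it are hypothetical.)

```python
# Minimal sketch (assumed illustration): with preference reduced to an order
# over programs, the agent's "decision" is just an argmax under that order.
def choose_program(candidate_programs, preference_rank):
    """Return the program ranked highest by the given preference order.

    `candidate_programs` and `preference_rank` are hypothetical names;
    the preference order itself is exactly the part left unspecified.
    """
    return max(candidate_programs, key=preference_rank)
```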
I don’t quite understand your comment. When you say “this path leads to such unenlightening answers,” what path are you referring to? If you mean the path of considering anthropic reasoning problems in the UDT framework, I don’t see why that must be unenlightening. It seems to me that we can learn something about the nature of both anthropic reasoning and preferences in UDT through such considerations.
For example, if someone has strong intuitions or arguments for or against SIA, that would seem to have certain implications for his preferences in UDT, right?
I think that Wei has a point: it is in principle possible to hold preferences and an epistemology such that, via his link, you are contradicting yourself.
For example, if you believe SIA and think that you should be a utility-maximizer, then you are committed to risking a 50% probability of killing someone to save $1, which many people may find highly counter-intuitive.
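(For concreteness, here is one way that arithmetic can be run; the specific setup and numbers are an assumed illustration on my part, not necessarily the exact argument in Wei’s link. Suppose a fair coin creates N observers if heads and 1 observer if tails, and accepting a certain deal pays you $1 if heads but kills someone if tails. SIA assigns credence N/(N+1) to heads, so a utility-maximizer with gain $\varepsilon = u(\$1)$ and disutility $D$ for a death computes

$$\mathbb{E}_{\mathrm{SIA}}[u(\text{accept})] \;=\; \frac{N}{N+1}\,\varepsilon \;-\; \frac{1}{N+1}\,D \;>\; 0 \quad\text{whenever } N > D/\varepsilon,$$

so for large enough N accepting looks like positive expected utility, even though the objective chance that the deal kills someone is the coin’s 50%.)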
Er, how does that follow?