I don’t quite understand your comment. When you say “this path leads to such unenlightening answers” what path are you referring to? If you mean the path of considering anthropic reasoning problems in the UDT framework, I don’t see why that must be unenlightening. It seems to me that we can learn something about the nature of both anthropic reasoning and preferences in UDT through such considerations.
For example, if someone has strong intuitions or arguments for or against SIA, that would seem to have certain implications about his preferences in UDT, right?
I think Wei has a point: it is in principle possible to hold a combination of preferences and an epistemology such that, by the argument he links to, you are contradicting yourself.
For example, if you accept SIA and also hold that you should be an expected-utility maximizer, then you are committed to accepting a 50% chance of killing someone in order to save $1, which many people would find highly counter-intuitive.
Er, how does that follow?