This basically says that the predictor is a rock and doesn’t depend on the agent’s decision …
True, it doesn’t “depend” on the agent’s decision in the specific sense of “dependency” defined by currently-formulated UDT. The question (as with any proposed DT) is whether that’s in fact the right sense of “dependency” (between action and utility) to use for making decisions. Maybe it is, but the fact that UDT itself says so is insufficient reason to agree.
The arguments behind UDT’s choice of dependence could prove strong enough to resolve this case as well. The fact that we are arguing about UDT’s answer in no way disqualifies UDT’s arguments.
My current position on ASP is that the reasoning used in motivating it exhibits “explicit dependence bias”. I’ll need to (and probably will) write another top-level post on this topic to improve on what I’ve already written here and on the decision theory list.
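To make the setup concrete, here is a minimal toy sketch of the ASP (Agent Simulates Predictor) scenario under discussion. Everything in it (the payoffs, the predictor’s crude heuristic, the use of source strings) is an illustrative assumption, not a formal statement of UDT or of ASP; it only shows why the predictor’s output can look like a fixed constant (“a rock”) to an agent that can simulate it, and why acting on that conclusion loses.

```python
# Illustrative sketch only: toy payoffs and a toy predictor, not a formalization.

def predictor(agent_source: str) -> str:
    """A weak predictor that cannot run the agent, but can make one abstract
    inference: an agent that simulates me will treat my prediction as already
    settled and therefore two-box."""
    return "two-box" if "predictor(" in agent_source else "one-box"

def payoff(action: str, prediction: str) -> int:
    """Newcomb-style payoffs (illustrative numbers)."""
    big = 1_000_000 if prediction == "one-box" else 0
    small = 1_000 if action == "two-box" else 0
    return big + small

# The strong agent's source: it simulates the predictor exactly, concludes the
# prediction is a derived constant, and then two-boxes because that dominates
# once the box contents are treated as already fixed.
asp_agent_source = '''
def agent(agent_source):
    prediction = predictor(agent_source)   # exact simulation
    return "two-box"                       # dominates, given a fixed prediction
'''

# A simpler agent that one-boxes unconditionally and never simulates anyone.
onebox_agent_source = '''
def agent(agent_source):
    return "one-box"
'''

for name, src, action in [("simulating agent", asp_agent_source, "two-box"),
                          ("one-boxing agent", onebox_agent_source, "one-box")]:
    print(name, "->", payoff(action, predictor(src)))
# simulating agent -> 1000      (treated the predictor's output as a rock, and lost)
# one-boxing agent -> 1000000
```

The disagreement above is precisely over whether the first agent’s “the prediction is a settled constant” step is the right notion of dependence to use when deciding.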