I am not sure there is any disagreement in this comment; what you say sounds right to me. I agree that UDT does not really set us up to want to talk about “coherence” in the first place, which makes it weird to have it formalized in terms of expected utility maximization.
This does not make me think intelligent/rational agents will or should converge to having utility functions.
I think coherence of some as-yet-unclear kind is an important principle that needs a place in any decision theory, and it motivates something other than pure updatelessness. I’m not sure how your argument survives this. The expected-utility perspective and the updatelessness perspective both have glaring flaws: unwarranted updatefulness and the lack of a coherence concept, respectively. Neither can argue against the other in its incomplete form. Expected utility is no more a mistake than updatelessness.
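(For concreteness, the sense of “coherence formalized as expected utility maximization” I have in mind is the usual VNM-style statement, nothing UDT-specific: if a preference relation $\succeq$ over lotteries satisfies completeness, transitivity, continuity, and independence, then there is a utility function $u$, unique up to positive affine transformation, such that

$$
L_1 \succeq L_2 \;\Longleftrightarrow\; \mathbb{E}_{L_1}[u] \ge \mathbb{E}_{L_2}[u].
$$

The open question, as I see it, is what the analogue of those axioms should even be once we drop updating.)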