I’m now beginning to think I identified a real flaw in this approach.
The usual formulation of UDT assumes the decision algorithm is known. In reality, the agent doesn’t know its own decision algorithm. This means there is another flavor of uncertainty over which the values of different choices have to be averaged. I call this “introspective uncertainty”. However, introspective uncertainty is not independent: it is strongly correlated with indexical uncertainty. Since introspective uncertainty can’t be absorbed into the utility function, indexical uncertainty can’t be either.
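A minimal toy sketch of what this averaging could look like (my own illustration, not the model described here): the agent holds a joint distribution over which decision algorithm it is running and which world-location it occupies, and the two are correlated, so neither can be marginalized out and folded into the utility function separately. All algorithm names, locations, and payoffs below are hypothetical.

```python
# Joint distribution P(algorithm, location) -- deliberately correlated:
# which algorithm the agent turns out to be running carries information
# about where (indexically) it is.
joint = {
    ("alg_A", "loc_1"): 0.4,
    ("alg_A", "loc_2"): 0.1,
    ("alg_B", "loc_1"): 0.1,
    ("alg_B", "loc_2"): 0.4,
}

# Hypothetical payoff table: utility of an action depends on both the
# algorithm actually running and the agent's location.
PAYOFFS = {
    ("act_x", "alg_A", "loc_1"): 10, ("act_x", "alg_A", "loc_2"): 0,
    ("act_x", "alg_B", "loc_1"): 0,  ("act_x", "alg_B", "loc_2"): 2,
    ("act_y", "alg_A", "loc_1"): 3,  ("act_y", "alg_A", "loc_2"): 3,
    ("act_y", "alg_B", "loc_1"): 3,  ("act_y", "alg_B", "loc_2"): 3,
}

def expected_value(action):
    # Average over BOTH kinds of uncertainty jointly; because the joint
    # distribution does not factor, this cannot be rewritten as a
    # marginal over locations with a modified utility function.
    return sum(p * PAYOFFS[(action, alg, loc)]
               for (alg, loc), p in joint.items())

for action in ("act_x", "act_y"):
    print(action, expected_value(action))
```

The point of the correlated `joint` table is that conditioning on the algorithm shifts the distribution over locations, which is what blocks absorbing the indexical part into the utility function.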
I have a precise model of this kind of UDT in mind. Planning to write about it soon.
I guess I should wait for you to write up your UDT model, but I don’t yet see what purpose introspective uncertainty serves.