I think this is essentially right (UDT might behave in just about any way in a given situation, depending on its prior), and it highlights two facts:
UDT is hugely sensitive to its prior, so the prior matters a whole lot. (It matters much more than it does for something like CDT or EDT, and learning-theoretic properties, such as those of the Solomonoff prior, become much less helpful.)
The question “what does UDT do on a particular decision problem” is therefore very different from the corresponding question for other decision theories.
But I think the standard practice of assuming that UDT faces a specific decision problem, with no contrary effects from anything else in its prior, is basically the most useful way to ask “what does UDT do”, so long as one is aware that, in reality, there may be other influences.
One important point which you might be pointing at here is that a contrary effect from elsewhere in the prior becomes more probable the less probable the decision problem is in the first place.
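To make that dependence concrete, here is a minimal numerical sketch (the two-hypothesis prior, the payoffs, and the “one-box”/“two-box” labels are all hypothetical, chosen only for illustration): UDT scores a whole policy by prior-weighted utility across every hypothesis, so when the intended decision problem holds only a small share of the prior, a contrary hypothesis elsewhere in the prior can flip the choice.

```python
# Hypothetical sketch: prior-weighted policy evaluation, UDT-style.
# Each hypothesis is a (utility_function, prior_mass) pair.

def udt_score(policy, prior):
    """Expected utility of a policy, weighted by prior mass over hypotheses."""
    return sum(mass * utility(policy) for utility, mass in prior)

policies = ["one-box", "two-box"]

def make_prior(p_D):
    """Prior with mass p_D on the intended problem D, and the remaining
    mass on an 'elsewhere' hypothesis that punishes the action D rewards."""
    utility_in_D      = lambda a: 100 if a == "one-box" else 1
    utility_elsewhere = lambda a: 0   if a == "one-box" else 10
    return [(utility_in_D, p_D), (utility_elsewhere, 1 - p_D)]

for p_D in (0.5, 0.05):
    best = max(policies, key=lambda a: udt_score(a, make_prior(p_D)))
    print(f"P(D) = {p_D}: UDT picks {best!r}")
# P(D) = 0.5  -> 'one-box'  (the intended problem dominates the prior)
# P(D) = 0.05 -> 'two-box'  (the contrary hypothesis dominates instead)
```

The numbers are arbitrary; the point is only that as the prior mass on the intended decision problem shrinks, whatever else the prior contains increasingly determines the policy.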
CDT and EDT are also sensitive to their priors. The difference is that there is a more familiar routine for defining their priors: idealize the situation under consideration without going out of scope, which keeps the model close to informal expectations. When building a tractable model of UDT, it similarly makes sense to specify its prior without retracting knowledge of the situation and escaping into consideration of all possible situations (which would turn the prior into a model of all possible situations rather than of just this one).
In the case of CDT and EDT, escaping the bounds of the situation looks like a refrigerator falling from the sky onto the experimental apparatus. In the case of UDT, it looks like a funding agency refusing to fund the experiment because its results wouldn’t be politically acceptable unless massaged to look right, with the agents inside the experiment understanding this (and having no scientific integrity). I think it’s equally unreasonable to include either kind of detail in a model, and equally possible for either to occur in reality.