Nesov, the reason why I regard Dai’s formulation of UDT as such a significant improvement over your own is that it does not require offstage coordination. Offstage coordination requires a base theory and a privileged vantage point and, as you say, magic.
I still don’t understand this emphasis. Here I sketched in what sense I mean the global solution: it is more about the definition of preference than about the actual computations and actions that the agents make (locally). There is an abstract concept of a global strategy that can be characterized as being “offstage”, but there is no offstage computation or offstage coordination, and in general a complete computation of the global strategy isn’t performed even locally, only approximations, often approximations that make it impossible to implement the globally best solution.
In the above comment, by “magic” I referred to the exact mechanism that says in what way and to what extent different agents are running the same algorithm, which is more in the domain of TDT; UDT generally does not talk about separate agents, only about different possible states of the same agent. This is why neither concept solves the bargaining problem: it’s out of UDT’s domain, and TDT takes the relevant pieces of the puzzle as given, in its causal graphs.
For further disambiguation, see for example this comment you made:
We’re taking apart your “mathematical intuition” into something that invents a causal graph (this part is still magic) and a part that updates a causal graph “given that your output is Y” (Pearl says how to do this).
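To make the Pearl step in that quote concrete, here is a minimal sketch of an intervention (“graph surgery”) on a toy causal graph. The node names, structural equations, and payoffs are purely illustrative assumptions, not anything specified by TDT or by the quoted comment: conditioning on “your output is Y” means cutting the edges into the output node, fixing its value to Y, and recomputing only what lies downstream.

```python
# Toy illustration of a Pearl-style intervention ("graph surgery"):
# cut the edges into the decision node, fix its value, and recompute
# everything downstream. Node names and values are purely illustrative.

parents = {
    "disposition": [],          # stand-in for the "mathematical intuition"
    "output": ["disposition"],  # the agent's decision
    "payoff": ["output"],       # downstream consequence of the decision
}

def mechanism(node, values):
    """Placeholder structural equations for the toy graph."""
    if node == "disposition":
        return "cooperative"
    if node == "output":
        return "C" if values["disposition"] == "cooperative" else "D"
    if node == "payoff":
        return 3 if values["output"] == "C" else 1
    raise KeyError(node)

def evaluate(do=None):
    """Compute all node values, optionally under an intervention do={node: value}."""
    do = do or {}
    values = {}
    for node in ["disposition", "output", "payoff"]:  # topological order
        if node in do:
            values[node] = do[node]  # surgery: ignore the node's own mechanism
        else:
            values[node] = mechanism(node, values)
    return values

print(evaluate())                     # factual run
print(evaluate(do={"output": "D"}))   # "given that your output is D"
```

The surgery part is mechanical once the graph exists; the part the quote calls “still magic” is how the graph and its nodes are invented in the first place.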