I haven’t thought about this carefully, but much of the UDT work bothers me because it tries to extend EDT, and thus fails whenever confounding shows up.
This is not true, and I wish you’d stop repeating it. Especially since your definition of EDT seems to involve blindly importing related historical data as if it were a literally true description of any agent’s situation, which is something that UDT doubly does not do.
Alternatively, I invite you to write a world-program for UDT that demonstrates the algorithm arriving at the wrong result due to some sort of “confounding”.
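For concreteness, here is roughly the shape of world-program I have in mind, as a toy Python sketch (the function names and the Newcomb-style payoffs are purely illustrative, not any canonical UDT implementation):

    def world_program(agent_output):
        # Perfect-predictor Newcomb world: box B holds $1,000,000 iff the
        # predictor (which simulates the agent) expects one-boxing.
        box_b = 1_000_000 if agent_output == "one-box" else 0
        box_a = 1_000
        return box_b if agent_output == "one-box" else box_a + box_b

    def udt_choose(world_programs, possible_outputs):
        # Pick the output whose total utility across the world-programs is
        # largest.  Note: nothing here conditions on historical data.
        return max(possible_outputs,
                   key=lambda out: sum(w(out) for w in world_programs))

    print(udt_choose([world_program], ["one-box", "two-box"]))   # -> "one-box"

The point of the exercise: if confounding really trips UDT up, it should be possible to encode the confounded situation as a world-program like this and watch the algorithm pick the wrong output.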
You should read that paper I linked. Chance moves by Nature in:
http://en.wikipedia.org/wiki/Extensive-form_game
can be confounded with each other, and in fact often are (in the survivor-effect example, by the underlying health status). It is very easy to add confounding to anything; it is a fundamental issue you have to deal with. That was one of the points of my talk.
You need causal language to describe “the agent’s situation” (where confounders are, etc.) properly.
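To make the confounding point concrete, here is a toy simulation (the numbers are invented for illustration, not taken from the talk) in which an unobserved health status confounds treatment and survival, so the naive conditional survival rates and the interventional ones disagree:

    import random
    random.seed(0)

    def observational_sample():
        healthy = random.random() < 0.5              # hidden confounder
        # In the historical data, the sick are treated far more often.
        treated = random.random() < (0.2 if healthy else 0.8)
        p_survive = (0.9 if healthy else 0.3) + (0.1 if treated else 0.0)
        return treated, random.random() < p_survive

    def interventional_sample(treated):
        healthy = random.random() < 0.5
        p_survive = (0.9 if healthy else 0.3) + (0.1 if treated else 0.0)
        return random.random() < p_survive

    N = 100_000
    obs = [observational_sample() for _ in range(N)]

    def rate(rows):
        return sum(s for _, s in rows) / len(rows)

    print("P(survive | treated)      ~", rate([r for r in obs if r[0]]))      # ~0.52
    print("P(survive | not treated)  ~", rate([r for r in obs if not r[0]]))  # ~0.78
    print("P(survive | do(treat))    ~",
          sum(interventional_sample(True) for _ in range(N)) / N)             # ~0.70
    print("P(survive | do(no treat)) ~",
          sum(interventional_sample(False) for _ in range(N)) / N)            # ~0.60

Conditioning on treatment makes treatment look harmful (about 0.52 vs. 0.78), while intervening shows it helps (about 0.70 vs. 0.60); that gap is the confounding, and you need causal language to say where it comes from.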
Yes, just hold on while I read a 121-page paper on causal inference that I’m fairly sure has nothing to do with UDT.
I find it hard to believe your claim that UDT fails on confounders “because it’s like EDT”, given that it never even conditions on data.
Again, show me a world-program for UDT where the algorithm gets the wrong answer due to “confounding”, and I’ll shut up.
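To illustrate what I mean by “never conditions on data”: a UDT-style evaluation scores a candidate output by running a world-program with Nature’s move sampled inside it, rather than by filtering historical records. A toy sketch, with made-up numbers matching the simulation above:

    import random
    random.seed(1)

    def medical_world(agent_output):
        # One run of a toy world with a hidden Nature move (health status).
        healthy = random.random() < 0.5
        treated = (agent_output == "treat")
        p_survive = (0.9 if healthy else 0.3) + (0.1 if treated else 0.0)
        return 1.0 if random.random() < p_survive else 0.0

    def udt_value(output, world, samples=100_000):
        # Expected utility of committing to `output`, estimated by simulation;
        # no conditioning on observed outcomes anywhere.
        return sum(world(output) for _ in range(samples)) / samples

    print(udt_value("treat", medical_world))      # ~0.70
    print(udt_value("no-treat", medical_world))   # ~0.60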
Too bad you chose to be snarky; you might have learned something.