can be confounded with each other, and in fact often are (in the survivor-effect example, by the underlying health status). It is very easy to add confounding to anything; it is a fundamental issue you have to deal with. That was one of the points of my talk.
You need causal language to describe “the agent’s situation” (where confounders are, etc.) properly.
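A minimal sketch of the confounding pattern being described, with illustrative probabilities and variable names of my own choosing: underlying health status drives both who ends up treated and who survives, so a naive comparison shows a large "treatment effect" that vanishes once you condition on the confounder.

    import random

    random.seed(0)

    def survivor_effect_demo(n=100_000):
        # Hypothetical numbers, purely for illustration.
        treated_outcomes, control_outcomes = [], []
        by_stratum = {(h, t): [] for h in (0, 1) for t in (0, 1)}
        for _ in range(n):
            healthy = random.random() < 0.5                      # latent confounder
            # Healthier people are more likely to end up "treated"...
            treated = random.random() < (0.8 if healthy else 0.2)
            # ...and more likely to survive, regardless of the treatment itself.
            survived = random.random() < (0.9 if healthy else 0.3)
            (treated_outcomes if treated else control_outcomes).append(survived)
            by_stratum[(int(healthy), int(treated))].append(survived)

        mean = lambda xs: sum(xs) / len(xs)
        # Naive comparison: looks as if the treatment helps a great deal.
        print("naive difference:", mean(treated_outcomes) - mean(control_outcomes))
        # Stratifying on the confounder: the apparent effect disappears.
        for h in (0, 1):
            print(f"difference within health={h}:",
                  mean(by_stratum[(h, 1)]) - mean(by_stratum[(h, 0)]))

    survivor_effect_demo()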
You should read that paper I linked. Chance moves by Nature in:
http://en.wikipedia.org/wiki/Extensive-form_game
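For concreteness, a toy sketch (my own construction, not taken from the linked article or the paper) of what it means for two chance moves by Nature to be confounded: a shared latent state makes the joint distribution over the two moves differ from the product of their marginals.

    import itertools
    import random

    random.seed(1)

    def correlated_nature_moves():
        # A hidden state Nature "rolls" once; both chance nodes depend on it.
        latent = random.random() < 0.5
        move1 = "a" if random.random() < (0.9 if latent else 0.1) else "b"
        move2 = "x" if random.random() < (0.9 if latent else 0.1) else "y"
        return move1, move2

    samples = [correlated_nature_moves() for _ in range(200_000)]
    joint = {pair: samples.count(pair) / len(samples)
             for pair in itertools.product("ab", "xy")}
    marg1 = {m: sum(p for (a, _), p in joint.items() if a == m) for m in "ab"}
    marg2 = {m: sum(p for (_, x), p in joint.items() if x == m) for m in "xy"}

    # If the two moves were independent chance nodes, each joint probability
    # would equal the product of the marginals; here it does not.
    for (a, x), p in joint.items():
        print(f"P({a},{x}) = {p:.3f}  vs  P({a})*P({x}) = {marg1[a] * marg2[x]:.3f}")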
Yes, just hold on while I read a 121-page paper on causal inference that I’m fairly sure has nothing to do with UDT.
I find it hard to believe your claim that UDT fails on confounders “because it’s like EDT”, given that it never even conditions on data.
Again, show me a world-program for UDT where the algorithm gets the wrong answer due to “confounding”, and I’ll shut up.
It is too bad you chose to be snarky; you might have learned something.