EDT chokes on Simpson’s Paradox, specifically on the “Kidney Stone Treatment” example. EDT looks only at the combined data, ignores the confounding variable (the size of the kidney stones), and ends up choosing the worse treatment. Which treatment you get doesn’t change whether your kidney stone is large or small, but EDT makes decisions as though it does.
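To make the paradox concrete, here is a small sketch using the commonly cited figures from the Charig et al. kidney-stone study (the numbers usually quoted with this example; treat them as illustrative). Treatment A beats Treatment B within each stone-size group, yet B looks better in the pooled table:

```python
# (successes, trials) per treatment and stone size, from the
# commonly cited Charig et al. kidney-stone figures.
data = {
    ("A", "small"): (81, 87),
    ("A", "large"): (192, 263),
    ("B", "small"): (234, 270),
    ("B", "large"): (55, 80),
}

def rate(successes, trials):
    return successes / trials

# Per-group comparison: A wins in BOTH subgroups.
for size in ("small", "large"):
    a = rate(*data[("A", size)])
    b = rate(*data[("B", size)])
    print(f"{size}: A={a:.1%}  B={b:.1%}  -> A better: {a > b}")

# Pooled comparison: the ranking flips, because B was mostly given
# to the easier small-stone cases -- stone size is the confounder.
a_all = rate(*(sum(x) for x in zip(data[("A", "small")], data[("A", "large")])))
b_all = rate(*(sum(x) for x in zip(data[("B", "small")], data[("B", "large")])))
print(f"overall: A={a_all:.1%}  B={b_all:.1%}  -> B looks better: {b_all > a_all}")
```

A decision rule that conditions only on the pooled column picks B; conditioning on stone size as well picks A in every case.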
I disagree. A sufficiently powerful EDT reasoner, well before the limit of AIXI, will have no problem choosing the correct action, because it is not limited to making a decision based purely on the data in that table. So no, it will not “only look at the combined data and ignore…”, as its world model will predict everything correctly. You can construct a naive EDT that is as dumb as a rock, but that is a fault of that model, not a fault of EDT as the simple, correct decision rule.