I think with sufficiently sophisticated models, essentially all of the decision theories should collapse to recommending the correct answer.
Agreed, in that I’ve made the argument that EDT (which operates on joint probability distributions) can emulate CDT (which operates on causal graphs) by adopting a particular network structure that (at additional cost) recreates the math of causal graphs. I see the EDT vs. CDT question as basically asking “does it make more sense to use joint probability distributions or causal models?” and the answer is “causal models are a more powerful language that is more closely tuned to the problem of making decisions, so use those.”
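To make the conditioning-vs.-intervening distinction concrete, here is a minimal sketch, not from the original discussion, of a toy “smoking lesion” setup: a hidden lesion causes both smoking and cancer, and smoking itself has no causal effect on cancer. All the variable names and probabilities are illustrative assumptions; the point is only that an EDT-style reasoner conditions the joint distribution on its action, while a CDT-style reasoner severs the edge into the action node before computing outcomes.

```python
# Toy "smoking lesion" example: lesion -> smoke, lesion -> cancer.
# All numbers are made up for illustration.

p_lesion = {1: 0.3, 0: 0.7}                 # P(lesion)
p_smoke_given_lesion = {1: 0.8, 0: 0.2}     # P(smoke=1 | lesion)
p_cancer_given_lesion = {1: 0.6, 0: 0.1}    # P(cancer=1 | lesion)

def joint(lesion, smoke, cancer):
    """Joint probability under the causal graph lesion -> smoke, lesion -> cancer."""
    ps = p_smoke_given_lesion[lesion] if smoke else 1 - p_smoke_given_lesion[lesion]
    pc = p_cancer_given_lesion[lesion] if cancer else 1 - p_cancer_given_lesion[lesion]
    return p_lesion[lesion] * ps * pc

def edt_cancer_given(smoke_value):
    """EDT-style: condition the joint distribution on the chosen action."""
    num = sum(joint(l, smoke_value, 1) for l in (0, 1))
    den = sum(joint(l, smoke_value, c) for l in (0, 1) for c in (0, 1))
    return num / den

def cdt_cancer_given(smoke_value):
    """CDT-style: intervene with do(smoke), severing the lesion -> smoke edge."""
    # After the intervention, cancer depends only on the lesion,
    # so the action's value drops out entirely.
    return sum(p_lesion[l] * p_cancer_given_lesion[l] for l in (0, 1))

for s in (1, 0):
    print(f"smoke={s}: EDT P(cancer)={edt_cancer_given(s):.3f}  "
          f"CDT P(cancer)={cdt_cancer_given(s):.3f}")
```

Running this, the EDT conditional probabilities differ with the action (smoking is evidence of the lesion), while the CDT interventional probabilities are identical for both actions, which is the sense in which the causal graph encodes decision-relevant structure that the bare joint distribution only recovers with extra work.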
Now, perhaps there’s a way of representing the environment that’s better at encoding the decision-relevant information than causal graphs, and using this superior structure requires upgrading to the ‘next decision theory’ instead of painfully encoding that information into causal graphs. I’m fully aware of the possibility that I’m the hapless Blub programmer here, saying “but why would you ever need to do y?”, and if so, I’d like to be convinced that y is actually useful.
But part of convincing me of that, I think, is showing that the environment-belief structure used by whatever ‘next decision theory’ we’re considering is a more powerful language than causal models, and the traditional decision-theory comparison approach of putting forward a situation and asking how reasoners using various theories would handle it is not particularly well suited to showing that.