Computational complexity of RL with traps
I briefly describe a problem that is open to the best of my knowledge, and link to cstheory.stackexchange questions with more details. Readers are invited to try solving those questions.
In the comment section of my research agenda essay, Jessica and I discussed, among other things, whether intelligence should be defined by some sort of diminishing regret condition or just by Bayes-optimality. In particular, the former approach runs into a problem when traps are allowed, whereas the latter is still perfectly well-defined.
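To make the two options concrete, here is one possible way to write them down (the notation is mine, not from the original discussion): a policy π interacting with an environment μ drawn from a prior ζ has diminishing Bayesian regret if its average value approaches that of the best policy for each environment, while Bayes-optimality only asks for maximal expected value under the prior itself:

```latex
% One possible formalization (notation mine, not from the source).
% \pi: policy, \mu: environment, \zeta: prior over environments,
% V^\pi_\mu(T): expected total reward of \pi in \mu over horizon T.

% Diminishing Bayesian regret:
\lim_{T \to \infty} \frac{1}{T}\,\mathbb{E}_{\mu \sim \zeta}\!\left[
  \sup_{\pi^*} V^{\pi^*}_{\mu}(T) - V^{\pi}_{\mu}(T) \right] = 0

% Bayes-optimality:
\pi \in \operatorname*{arg\,max}_{\pi'}
  \mathbb{E}_{\mu \sim \zeta}\!\left[ V^{\pi'}_{\mu}(T) \right]
```

With traps, the first condition can be unsatisfiable: any exploration that distinguishes hypotheses risks an irreversible loss of value in some environment, putting the per-environment supremum out of reach. The argmax in the second condition, by contrast, always exists, which is why Bayes-optimality remains well-defined.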
However, one problem with Bayes-optimality is its computational complexity. Without traps, Bayes-optimality is feasible in the weak sense that there is an asymptotically Bayes-optimal algorithm (e.g. PSRL: Posterior Sampling Reinforcement Learning) that runs in time polynomial in the size of the problem, provided the prior is described by an explicit list of MDP transition kernels and reward functions (so, in practice, it's only feasible for a small number of small MDPs; there are also algorithms that can handle a large number of small MDPs). Hopefully this weak feasibility can be boosted into something stronger using additional assumptions about the prior/environment (e.g. hierarchical structure; presumably, deep learning somehow exploits such assumptions). However, when traps are allowed, it's not clear that even weak feasibility is achievable. In particular, PSRL is no longer asymptotically Bayes-optimal, since it is more or less guaranteed to walk into any trap given enough time.
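For concreteness, here is a minimal sketch of PSRL for a prior given as an explicit list of MDP hypotheses, which is the setting in which the polynomial-time claim above applies. All names are mine, the planner is textbook finite-horizon value iteration rather than anything from the post, and the Bayes update uses only transition likelihoods (reward observations are ignored) to keep the sketch short:

```python
import numpy as np

def plan(P, R, horizon):
    """Textbook finite-horizon value iteration.

    P[s, a, s2] is the transition probability, R[s, a] the expected
    reward; returns policy[t, s] = optimal action at time t, state s.
    """
    n_states = P.shape[0]
    V = np.zeros(n_states)
    policy = np.zeros((horizon, n_states), dtype=int)
    for t in reversed(range(horizon)):
        Q = R + P @ V   # Q[s, a] = R[s, a] + sum_s2 P[s, a, s2] * V[s2]
        policy[t] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy

def psrl(hypotheses, prior, true_idx, n_episodes, horizon, seed=0):
    """PSRL with the prior given as an explicit list of hypotheses.

    `hypotheses` is a list of (P, R) pairs, `prior` a probability
    vector over them, `true_idx` the hypothesis actually simulated.
    """
    rng = np.random.default_rng(seed)
    posterior = np.array(prior, dtype=float)
    P_true, R_true = hypotheses[true_idx]
    n_states = P_true.shape[0]
    total_reward = 0.0
    for _ in range(n_episodes):
        # Sample one hypothesis from the current posterior and plan in it.
        k = rng.choice(len(hypotheses), p=posterior)
        policy = plan(*hypotheses[k], horizon)
        # Execute that policy for one episode in the true environment.
        s = 0
        for t in range(horizon):
            a = policy[t, s]
            s_next = rng.choice(n_states, p=P_true[s, a])
            total_reward += R_true[s, a]
            # Bayes update over the hypothesis list (transition
            # likelihoods only; reward observations are ignored here).
            for i, (P, _) in enumerate(hypotheses):
                posterior[i] *= P[s, a, s_next]
            posterior /= posterior.sum()
            s = s_next
    return total_reward, posterior
```

The failure mode with traps is visible in this loop: as long as some hypothesis with nonzero posterior mass has an optimal policy that enters a trap, the sampling step eventually selects it, and the resulting irreversible action cannot be undone by later posterior updates.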
I condensed the problem into two succinct questions on cstheory.stackexchange (but have received no answers so far). In the stochastic case, there are some computational hardness results in the literature, but they are not strong enough to rule out an asymptotically Bayes-optimal algorithm (they only rule out an exactly Bayes-optimal algorithm). In the deterministic case, there are no hardness results that I know of. So, currently, I would not be very surprised by either a positive or a negative answer to either of those questions.
EDIT: This answer resolves the question, more or less. Even in the deterministic case, the problem is NP-hard, with a bound of at least Ω(1/#{hypotheses}) on the approximation accuracy.