Can we reduce the issue of “we can’t efficiently compute that update” by adding sensors?
What if we could get more data? When facing this kind of difficulty, that’s the question I would ask first.
Yeah, the usual mechanism by which more data reduces computational difficulty is by directly identifying the values of some previously-latent variables. If we know the value of a variable precisely, then that’s easy to represent; the difficult-to-represent distributions are those where there’s a bunch of variables whose uncertainty is large and tightly coupled.
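A toy sketch of that mechanism (my own illustration, not anything from the thread), using Gaussian conditioning in NumPy: observing a subset of variables exactly turns a large, tightly coupled joint into a small, tight conditional. The names (`cov`, `lat`, `obs`, the sizes `n` and `k`) are just placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                   # total variables in the toy model
k = 4                                   # variables a new sensor pins down exactly

# A covariance with strong off-diagonal coupling: the joint is "hard" in the
# sense that no small set of independent marginals summarizes it well.
A = rng.normal(size=(n, n))
cov = A @ A.T + 0.1 * np.eye(n)

lat = np.arange(0, n - k)               # remaining latent variables
obs = np.arange(n - k, n)               # variables the sensor identifies

# Standard Gaussian conditioning: Sigma_lat|obs = S_ll - S_lo S_oo^{-1} S_ol.
S_ll = cov[np.ix_(lat, lat)]
S_lo = cov[np.ix_(lat, obs)]
S_oo = cov[np.ix_(obs, obs)]
cond_cov = S_ll - S_lo @ np.linalg.solve(S_oo, S_lo.T)

# The log-determinant tracks the volume of remaining uncertainty (differential
# entropy up to a constant): observing k variables both drops the dimension
# and tightens what's left.
print("log|cov| of full joint:      ", np.linalg.slogdet(cov)[1])
print("log|cov| after conditioning: ", np.linalg.slogdet(cond_cov)[1])
```

In the discrete case the same point is even starker: a joint table over n tightly coupled binary variables has 2^n entries, and sensors that pin down k of them leave only 2^(n−k).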
No, he’s referring to something like performing a Bayesian update over all computable hypotheses – that’s incomputable (i.e. impossible even in theory, not just in practice). It’s infinitely beyond the capabilities of even a quantum computer the size of the universe.
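For concreteness, the kind of update being pointed at is roughly Solomonoff induction; a sketch in standard notation from that literature (where $U$ is a universal prefix machine and $\ell(p)$ the length of program $p$):

$$M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}, \qquad M(x_{t+1} \mid x_{1:t}) \;=\; \frac{M(x_{1:t}\,x_{t+1})}{M(x_{1:t})}$$

Evaluating the mixture requires knowing which programs halt and output a continuation of $x$, which runs into the halting problem; that’s the sense in which no physically realizable computer, quantum or otherwise, can carry out the update exactly.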
Think of it as a kind of (theoretical) ‘upper bound’ on the problem. None of the actual computable approximations to AIXI (i.e. ones runnable on real-world computers built by humans) are very good in practice.
The AIXI thing was a joke; a Bayesian update on low-level physics with unknown initial conditions would be superexponentially slow, but it certainly isn’t uncomputable. And the distinction does matter—uncomputability usually indicates fundamental barriers even to approximation, whereas superexponential slowness does not (at least in this case).