Another confusion I have is the idea of “input channel”. As I understand it, there aren’t in general any input channels; there are just initial conditions. Certainly some subset of initial conditions will be equivalent to consistently sampling a deterministic universe with persistent agents in it that can work out the sampling rules and whose deliberations are correlated to a significant degree with the sampling result. The measure of all such subsets combined will be, let’s just say, small. The measure of any one of them will be smaller still.
It’s also possible for an environment to mimic the outputs of such agents embedded in an N-bit program when the true rule is an M-bit program with M > N, and for the consequences of the incorrect evaluation to be seriously detrimental. But that situation has such vanishingly small measure that I’d happily describe the environment itself as adversarial in such a scenario.
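A rough way to see how small, as a sketch assuming the measure in question is the standard Solomonoff prior weight $2^{-\ell}$ assigned to a length-$\ell$ program: the mimicking environment governed by the true M-bit rule gets weight
$$2^{-M} = 2^{-N} \cdot 2^{-(M-N)},$$
so it is suppressed relative to the N-bit rule by a factor of $2^{-(M-N)}$, which shrinks exponentially in the number of extra bits needed to specify the mimicry.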
There are far simpler scenarios in which AIXI (or any other agent model) will fail, and they have vastly larger measure, so focusing on this one seems bizarre.