See here for approaches that can deal with the AIXI existence issue:
I can’t read past the abstract, but I’d find this more reassuring if it didn’t require Turing oracles.
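(For readers wondering where the oracle comes in: as best I recall Hutter’s formulation, AIXI chooses actions by an expectimax over a Solomonoff-style mixture, and the inner sum over all programs $q$ for the universal machine $U$ is what you cannot evaluate without a halting oracle:

$$a_t \;=\; \arg\max_{a_t}\sum_{o_t r_t}\;\cdots\;\max_{a_m}\sum_{o_m r_m}\big[\,r_t+\dots+r_m\,\big]\sum_{q\,:\,U(q,\,a_{1:m})\,=\,o_{1:m} r_{1:m}} 2^{-\ell(q)}$$

No finite computation can rule out that some longer program also reproduces the history, which is why approximations or oracles enter the picture.)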
It seems that just picking an ordinary language (e.g., C++) without adding any specific weirdness should be enough to avoid the problem, but we just don’t know at this point.
My understanding is that functional languages have properties that would be useful for this sort of thing. In any case, I agree: my instinct is that while this problem might exist, you would only actually run into it if you used a language specifically designed to create it.
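If it helps to see the language-dependence concretely, here is a toy sketch (my own construction, not anything from the paper): the two “interpreters” below are invented, and I skip the prefix-free coding a real Solomonoff-style construction needs, so the masses are unnormalized.

```python
# Toy illustration of how a length-based prior depends on the reference
# language. "lang_plain" and "lang_skewed" are hypothetical interpreters
# invented for this sketch; the point is only that the same hypothesis
# gets very different prior mass under different encodings.
from itertools import product


def all_programs(max_len):
    """Enumerate every bitstring of length 1..max_len as a program text."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits)


def lang_plain(prog):
    """Interpreter A: a program's first bit is the hypothesis it encodes."""
    return prog[0]


def lang_skewed(prog):
    """Interpreter B: hypothesis "1" is reachable only via a long magic
    prefix, so this language makes "1" look complicated and improbable."""
    return "1" if prog.startswith("11110") else "0"


def prior_mass(interpreter, hypothesis, max_len=12):
    """Sum 2**-len(p) over all programs p that encode `hypothesis`
    (unnormalized: a real construction would use a prefix-free code)."""
    return sum(2.0 ** -len(p)
               for p in all_programs(max_len)
               if interpreter(p) == hypothesis)


# Same hypothesis, wildly different prior mass under the two languages:
for lang in (lang_plain, lang_skewed):
    print(lang.__name__, prior_mass(lang, "1"))
```

The point isn’t the numbers, just the dependence: the prior is only defined relative to a language, so “avoid weird languages” is doing real work in the argument.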
> I can’t read past the abstract, but I’d find this more reassuring if it didn’t require Turing oracles.
That’s the first step.
> my instinct is that while this problem might exist, you would only actually run into it if you used a language specifically designed to create it.
My instincts agree with your instincts, but that’s not a proof… A bit more analysis would be useful.
> my instinct is that while this problem might exist, you would only actually run into it if you used a language specifically designed to create it.
I think it’s actually worse. If I understand correctly, Corollary 14 implies that for any choice of programming language, there exist mixtures of environments that exhibit exactly this problem. This means that if the environment is chosen adversarially, even by a computable adversary, AIXI is screwed.
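To spell out the quantifier order I’m claiming (my paraphrase, not the corollary’s exact statement):

$$\forall\, U \;\; \exists\, \text{computable mixture } \xi \;:\; \text{AIXI}_U \text{ is not asymptotically optimal in } \xi$$

That is, the bad mixture comes after the choice of language, so you can’t dodge it by choosing $U$ carefully.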
> My instincts agree with your instincts, but that’s not a proof… A bit more analysis would be useful.
Rigorous analysis certainly is useful, but I don’t think I’ve studied theoretical compsci at a high enough level to attempt a proof.
> I think it’s actually worse. If I understand correctly, Corollary 14 implies that for any choice of programming language, there exist mixtures of environments that exhibit exactly this problem. This means that if the environment is chosen adversarially, even by a computable adversary, AIXI is screwed.
Hmm, well, you can’t choose the laws of physics adversarially, so I think this would only be a problem in a purely virtual environment.
The laws of physics may allow for adversaries that try to manipulate you.