I don’t see why this is a problem—it seems to me that constantly outputting 0 is the right thing to do in this situation. And surely any intelligence has to have priors and can run into the exact same problem?
The uncomputability problem seems far worse to me.
Uncomputable AIXI can be approximated almost arbitrarily well by computable versions. And the general problem is that “Hell” is possible in any world—take a computable version of AIXI in our world, and give it a prior that causes it to never do anything...
This means that “pick a complexity prior” does not solve the problem of priors for active agents (though it does for passive agents) because which complexity prior we pick matters.
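For concreteness, here is the usual invariance sketch (standard notation, nothing specific to this thread): relative to a reference machine U, the complexity prior weights each environment \nu by 2^{-K_U(\nu)}, giving the mixture

\xi_U(x) \;=\; \sum_{\nu} 2^{-K_U(\nu)}\,\nu(x), \qquad 2^{-K_U(\nu)} \;\ge\; c_{UV}\, 2^{-K_V(\nu)} \ \text{for any other universal machine } V.

For a passive predictor the constant c_{UV} only adds a constant to the cumulative loss bound, so it washes out with data; for an active agent it can tilt the very first decisions (e.g. towards “never act, this might be Hell”), and a policy that never acts never gathers the evidence that would wash it out.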
Provided you have access to unbounded computing power and don’t give half a damn about non-asymptotic tractability, yes.
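For reference (if I’m remembering Hutter’s construction correctly), the usual computable approximation AIXItl restricts attention to policies of length at most l and per-cycle runtime at most t, and its own computation time per cycle is of order

t \cdot 2^{l},

so “almost arbitrarily well” is bought at an exponential price in the length bound.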
I know that the uncomputable AIXI assigns zero probability to its own existence—would a computable version be able to acknowledge its own existence? If not, would this cause problems with self-modification, damage avoidance, negotiation, etc.?
Is this similar to being vulnerable to Pascal’s muggings? Would programming AIXI to ignore probabilities less than, say, 10^-9, help?
See here for approaches that can deal with the AIXI existence issue: http://link.springer.com/chapter/10.1007/978-3-319-21365-1_7
Also, the problem is the prior: a poor choice inflates the likelihood of a particular world. Ignoring low probabilities doesn’t help, because that world will have a weirdly high probability; we need a principled way of choosing the prior.
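To put purely illustrative numbers on it: suppose the chosen language happens to encode the “Hell” world in only 20 bits. The complexity prior then gives it weight

2^{-20} \approx 9.5 \times 10^{-7} \;\gg\; 10^{-9},

so a 10^-9 cutoff never fires on it. The fix has to be in how the weights are assigned, not in where they are truncated.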
It seems that “just pick a random language (e.g. C++), without adding any specific weirdness” should work to avoid the problem—but we just don’t know at this point.
I can’t read past the abstract, but I’d find this more reassuring if it didn’t require Turing oracles.
My understanding is that functional languages have properties that would be useful for this sort of thing. In any case, I agree: my instinct is that while this problem might exist, you would only actually run into it if you were using a language specifically designed to create it.
That’s the first step.
My instincts agree with your instincts, but that’s not a proof… A bit more analysis would be useful.
Rigorous analysis certainly is useful, but I don’t think I’ve studied theoretical compsci at a high enough level to attempt a proof.
I think it’s actually worse. If I understand correctly, Corollary 14 implies that for any choice of programming language, there exist mixtures of environments that exhibit this problem. This means that if the environment is chosen adversarially, even by a computable adversary, AIXI is screwed.
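To spell out the structure of the worry as I read it (a sketch, not the paper’s exact statement): AIXI’s beliefs are a weighted mixture over a class \mathcal{M} of environments,

\xi(x) \;=\; \sum_{\nu \in \mathcal{M}} w_\nu\, \nu(x), \qquad w_\nu > 0,

and the claim is that for any choice of reference language there is a computable assignment of the weights w_\nu that puts so much mass on a punishing environment that the Bayes-optimal policy never explores. An adversary who gets to pick the environment class or the weights can simply pick such an assignment.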
Hmm, well you can’t choose the laws of physics adversarially, so I think this would only be a problem in a pure virtual environment.
The laws of physics may allow for adversaries that try to manipulate you.