What do you mean, precisely, by “SI is in the same position”?
If I understand correctly, SI can be “upgraded” by changing the underlying prior.
So if we have strong reasons to suspect that we, humans, have access to the halting oracle, then we should try to build AI reasoning that approximates SI + a prior defined over a universal Turing machine enhanced with a halting oracle.
If we don’t (as is the case at the moment), then we just build AI that approximates SI + the Universal prior.
Either way, there are no guessing games in which we are, on average, able to beat the AI, as long as the approximation is good enough.
What do you mean, precisely, by “SI is in the same position”?
I mean that the computable part of you is facing the decision problem “which buttons on the oracle black box do I press, and how do I use the answers on the screen when choosing my next action?” and SI is facing the same decision problem. (Unless of course you want to define the problem so that only you can press buttons on the black box, and the SI agent by definition can’t. That would seem unfair to me, though.)
Given the above, in some game setups (though probably not all) I would expect SI to beat you, just like it beats you in the log-score game even when the input is uncomputable and comes from a level-100 oracle or whatever. The relevant fact is not whether the input is computable, but that SI is pretty much a multiplicative weights mixture of all possible computable “experts”, which includes you, and there are many situations where a multiplicative weights mixture provably beats any expert on any input, up to a constant, etc.
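The “mixture beats any expert up to a constant” claim can be sketched numerically. Below is a toy Bayesian (multiplicative-weights) mixture over a few hand-picked predictors; the experts and the input sequence are illustrative assumptions, not anything derived from SI itself. The point it demonstrates: on *any* bit sequence, even an adversarial or uncomputable-looking one, the mixture's cumulative log-loss is within log(1/w_i) of expert i's, where w_i is the prior weight.

```python
import math

def mixture_log_loss(experts, weights, seq):
    """Sequential Bayesian mixture prediction (multiplicative weights).

    experts: list of functions mapping a history (list of bits) to P(next bit = 1)
    weights: prior weights over the experts (must sum to 1)
    seq:     the actual bit sequence; can be arbitrary
    Returns (mixture_loss, per_expert_losses) in cumulative negative log-likelihood.
    """
    w = list(weights)
    mix_loss = 0.0
    exp_loss = [0.0] * len(experts)
    for t, bit in enumerate(seq):
        preds = [e(seq[:t]) for e in experts]
        # Mixture prediction under the current (posterior) weights.
        p_mix = sum(wi * (p if bit == 1 else 1 - p) for wi, p in zip(w, preds))
        mix_loss += -math.log(p_mix)
        # Each expert's own loss, and the multiplicative-weights update.
        for i, p in enumerate(preds):
            like = p if bit == 1 else 1 - p
            exp_loss[i] += -math.log(like)
            w[i] *= like
        total = sum(w)
        w = [wi / total for wi in w]
    return mix_loss, exp_loss

# Illustrative experts (hypothetical, stand-ins for "computable predictors"):
experts = [
    lambda h: 0.5,                           # fair-coin predictor
    lambda h: 0.9,                           # biased-towards-1 predictor
    lambda h: (sum(h) + 1) / (len(h) + 2),   # Laplace rule of succession
]
weights = [1 / 3, 1 / 3, 1 / 3]
seq = [1, 1, 0, 1, 1, 1, 0, 1] * 10          # any sequence works here

mix, per = mixture_log_loss(experts, weights, seq)
# Dominance bound: mixture loss <= best expert's loss + log(1/prior weight).
assert mix <= min(per) + math.log(3) + 1e-9
```

The same bound holds for every sequence you could substitute, which is the sense in which the mixture is never beaten by more than an additive constant; SI is the analogous mixture with a universal prior over all computable predictors.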
I thought it was implied that SI cannot access any boxes or push any buttons, because it’s a mathematical abstraction. But I see that you mean “an AI agent with SI”.