This is the place where an equation could be more convincing than verbal reasoning.
To be honest, I probably wouldn’t understand the equation, so someone else would have to check it. But I feel that this is one of those situations (similar to the group selectionism example) where humans can trust their reasonable-sounding words, but the math could show otherwise.
I am not saying that you are wrong; at this moment I am just confused. Maybe it’s obvious, and it’s my ignorance. I don’t know, and probably won’t spend enough time to find out, so it’s unfair to demand an answer to my question. But I think the advantage of AIXI is that it is a relatively simple (well, relative to other AIs) mathematical model, so claims about what the AI can or cannot do should be accompanied by equations. (And if I am completely wrong and the answer is really obvious, then perhaps the equation shouldn’t be complicated.) Also, sometimes the devil is in the details, and writing the equation could make those details explicit.
Just look at the AIXI equation itself:

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \ldots \max_{a_m} \sum_{o_m r_m} \left[ r_k + \cdots + r_m \right] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

The $o_i$ (observations) and $r_i$ (rewards) are the signals sent from the environment to AIXI, and the $a_i$ (actions) are AIXI’s outputs. Notice that future $a_i$ are predicted by picking the one that would maximize expected reward through timestep $m$, just like AIXI does, and there is no summation over possible ways that the environment could make AIXI output actions computed some other way, like there is for the $o_i$ and $r_i$.
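That max/sum asymmetry can be made concrete in a toy sketch. This is emphatically not the real AIXI (which is incomputable); everything here is invented for illustration: a finite horizon, a two-action agent, and a single hand-made environment (`env_prob`) standing in for the universal mixture. The point it shows is only the structural one above: the agent's own future actions are chosen by a max with no summation, while observation/reward pairs are averaged under the environment's distribution.

```python
# Hedged sketch, not the real (incomputable) AIXI: a finite expectimax
# over a single toy environment standing in for the universal mixture.

ACTIONS = [0, 1]
PERCEPTS = [(o, r) for o in (0, 1) for r in (0.0, 1.0)]  # (obs, reward) pairs

def env_prob(actions, percepts, percept):
    """Toy stand-in for the environment mixture: probability of emitting
    `percept` given the interaction history so far. Reward is 1.0 with
    probability 0.9 when the latest action is 1, else 0.1; the observation
    bit is uniform and the rest of the history is ignored."""
    obs, rew = percept
    p_rew = 0.9 if actions[-1] == 1 else 0.1
    return 0.5 * (p_rew if rew == 1.0 else 1.0 - p_rew)

def expectimax(actions, percepts, horizon):
    """Best achievable expected reward over the next `horizon` steps:
    a max over the agent's actions, an expectation over percepts."""
    if horizon == 0:
        return 0.0
    best = float("-inf")
    for a in ACTIONS:                 # agent's action: a max, no summation
        value = 0.0
        for pc in PERCEPTS:           # environment's reply: a weighted sum
            p = env_prob(actions + [a], percepts, pc)
            value += p * (pc[1] + expectimax(actions + [a],
                                             percepts + [pc],
                                             horizon - 1))
        best = max(best, value)
    return best

def best_action(actions, percepts, horizon):
    """The action this AIXI-style planner would pick at the current step."""
    return max(ACTIONS, key=lambda a: sum(
        env_prob(actions + [a], percepts, pc)
        * (pc[1] + expectimax(actions + [a], percepts + [pc], horizon - 1))
        for pc in PERCEPTS))

print(best_action([], [], 2))  # -> 1 in this toy environment
```

Because the environment here rewards action 1, the planner picks 1; the relevant part is the shape of the recursion, where `max` and `sum` appear exactly where the equation puts them.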