You have all the relevant information, though. I’m pretty sure AIXI could predict the coin toss if it had access to your visual field and proprioception data. You can’t compute the outcome from this, but probability theory shouldn’t change just because you can’t properly compute the update.
When you pretend that you do not know something that you actually know, you systematically get wrong results.
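A minimal Python sketch of this point (the 0.8 coin bias and the log scoring rule are illustrative assumptions, not from the exchange): an agent that predicts with what it knows scores better in expectation than one that pretends not to know.

```python
import math

p_true = 0.8       # actual bias of the coin, known to the informed agent
p_informed = 0.8   # the informed agent predicts using what it knows
p_pretend = 0.5    # the "pretending" agent discards the known bias

def expected_log_score(p_pred: float, p_true: float) -> float:
    """Expected log score of predicting heads with probability p_pred
    when heads actually occur with probability p_true."""
    return p_true * math.log(p_pred) + (1 - p_true) * math.log(1 - p_pred)

print(expected_log_score(p_informed, p_true))  # ~ -0.500
print(expected_log_score(p_pretend, p_true))   # ~ -0.693, strictly worse
```

The gap between the two scores is the KL divergence between the true distribution and the pretended one, here about 0.19 nats per toss, so discarding known information is penalized systematically, not just occasionally.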
Eh, no? Usually I can pretty much sensibly predict what I would think if I didn’t have some piece of information.
You have all the relevant information, though. I’m pretty sure AIXI could predict the coin toss if it had access to your visual field and proprioception data.
Then AIXI has the relevant information, while I do not.
You can’t compute the outcome from this, but probability theory shouldn’t change just because you can’t properly compute the update.
A probabilistic model describes the knowledge state of an observer, and it naturally changes when that knowledge state changes. My ability or inability to extract some information obviously affects which model is appropriate for the problem.
Suppose a coin is tossed and the outcome is written in Japanese on a piece of paper, which is then shown to you. Whether your credence in the state of the coin moves away from the equiprobable prior depends on whether you can read Japanese.
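To make the example concrete, here is a minimal sketch (mine, not the commenter’s; 表 and 裏 are the standard Japanese labels for the heads and tails sides of a coin): the same note updates two observers differently, depending only on whether they can decode it.

```python
def credence_in_heads(note: str, knows_japanese: bool) -> float:
    """Posterior probability of heads after being shown the note."""
    if not knows_japanese:
        # The symbol is not decodable: the likelihood of seeing it is the
        # same under heads and tails, so the equiprobable prior stands.
        return 0.5
    return 1.0 if note == "表" else 0.0

note = "表"  # the actual outcome was heads
print(credence_in_heads(note, knows_japanese=True))   # 1.0
print(credence_in_heads(note, knows_japanese=False))  # 0.5
```

Nothing about the coin differs between the two calls; only the observer’s ability to extract information from the evidence does, which is exactly what the model describes.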
Usually I can pretty much sensibly predict what I would think if I didn’t have some piece of information.
Of course you can. But that way of thinking would be sub-optimal in a situation where you actually have extra information.