[Question] Is the human brain a valid choice for the Universal Turing Machine in Solomonoff Induction?
I’ve recently been thinking about Solomonoff induction, and in particular the free choice of Universal Turing Machine.
One option that seems like a potential choice here is a human brain (my brain, for example). It’s obviously a bit of a weird choice, but I don’t see any reason to disprefer it over a Python interpreter, given that the whole point of Solomonoff induction is to define a prior, so my knowledge of physics or atoms shouldn’t really come into play when choosing a UTM.
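To spell out what I mean by “define a prior”: in its standard form, the prior probability Solomonoff induction assigns to an observation string $x$ is

$$M_U(x) = \sum_{p \,:\, U(p) \text{ starts with } x} 2^{-|p|},$$

where the sum runs over (prefix-free) programs $p$ for whichever reference machine $U$ we picked, and $|p|$ is the length of $p$ in bits. The machine only enters through $U$, i.e. through which partial function from programs to outputs we plug in.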
Concretely, the UTM would be a human simulated in an empty room with an infinitely large notebook standing in for the tape. Its output would be an encoding of sensory data, and its input would be a string of instructions in English.
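To make the comparison with a Python interpreter explicit, here is a minimal sketch (the class names and docstrings are my own illustration, not anything standard) of the only interface Solomonoff induction actually cares about: a partial function from programs to output strings, with no runtime bound.

```python
from typing import Optional, Protocol


class ReferenceMachine(Protocol):
    """All Solomonoff induction needs from a "UTM": a partial function
    from programs (encoded as bit strings) to output strings."""

    def run(self, program: str) -> Optional[str]:
        """Return the machine's output, or None if it never produces one.
        No bound on runtime is assumed."""
        ...


class PythonInterpreterMachine:
    """The conventional choice: the program encodes Python source code."""

    def run(self, program: str) -> Optional[str]:
        # Decode the bit string into Python source, execute it, and return
        # whatever it prints as the predicted sensory-data encoding.
        raise NotImplementedError  # details omitted, but we know how to build this


class SimulatedHumanMachine:
    """The proposal above: the program encodes English instructions handed to
    a simulated human in an empty room with an unbounded notebook as the tape."""

    def run(self, program: str) -> Optional[str]:
        # Decode the bit string into English, let the simulated human take as
        # long as they like, and return the encoded sensory data they write down.
        raise NotImplementedError  # nobody currently knows how to build this
```

Formally, both are just candidate values of $U$ in the prior above.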
With that as our reference machine, the sentence “the woman at the end of the street is a witch, she did it” suddenly becomes one of the simplest hypotheses available to us. Since the English sentence is so short, we basically just need to add an encoding of who that woman is and what the action in question is, which is probably also a lot shorter in human language than in machine language (since our UTM already understands basic physics, society, etc.). Our simulated human (who, since Solomonoff induction has no runtime constraints, can take as much time as they want) should then be able to produce a good prediction of the historical sensory input from relatively little input.
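As a back-of-the-envelope illustration (the program lengths below are invented placeholders, not measurements of anything), here is how drastically the $2^{-|p|}$ weight in the prior shifts when the same hypothesis is a short English sentence for the brain-machine versus a long explicit simulation written for the Python-machine:

```python
def log2_prior_weight(program_length_bits: int) -> float:
    """log2 of the 2^-|p| contribution a single program of this length makes
    to the Solomonoff prior (working in logs avoids floating-point underflow)."""
    return -float(program_length_bits)


# Hypothetical description lengths for the *same* "the witch did it" hypothesis.
# The English sentence is costed at 8 bits per character plus some slack for
# pointing at the specific woman and event; the Python figure is a made-up
# stand-in for an explicit physics-and-society simulation written from scratch.
english_sentence = "the woman at the end of the street is a witch, she did it"
brain_machine_bits = 8 * len(english_sentence) + 200   # 656 bits
python_machine_bits = 1_000_000                        # invented placeholder

print(log2_prior_weight(brain_machine_bits))    # -656.0, i.e. prior weight 2^-656
print(log2_prior_weight(python_machine_bits))   # -1000000.0, astronomically smaller
```

The exact numbers are meaningless, of course; the point is only that the description-length gap, and hence the gap in prior weight, is enormous.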
I feel like I must be missing something in my understanding of Solomonoff induction. I have a lot more thoughts, but maybe someone else has already thought about this and can help me understand it. Some thoughts that come to mind:
- I don’t know how to build a human brain, but I do know how to build a machine that runs a Python interpreter. In that sense I understand a Python interpreter a lot better than I do a human brain, and using it as the basis of Solomonoff induction is more enlightening.
- There is a weird circularity about choosing a human brain (or your own brain in particular) as the UTM in Solomonoff induction that I can’t quite put my finger on.
- Maybe I am misunderstanding the Solomonoff induction formalism, so that this whole construction doesn’t make any sense.