Uhh, I don’t follow this. Could you explain or link to an explanation please?
Intuitive explanation: Say it takes X bits to specify a human, and that the human knows how to correctly predict whatever sequence we’re applying SI to. SI has to find the human among the roughly 2^X programs of length X. Say SI is trying to predict the next bit. Some fraction of those 2^X programs predict the next bit will be 0, and some fraction predict 1. These fractions define SI’s probabilities for what the next bit will be. Imagine the next bit will be 0. Then SI is predicting badly if more than half of those programs predict a 1. But then, all of those wrong programs will be eliminated in the update phase, so the pool of surviving programs at least halves each time, while the human program always survives. So this can happen at most X times before most of the weight of SI is on the human hypothesis (or a hypothesis that’s just as good at predicting the sequence in question).
The above is a sketch, not quite how SI really works. Rigorous bounds can be found here, in particular at the bottom of page 979 (“we observe that Theorem 2 implies the number of errors of the universal predictor is finite if the number of errors of the informed prior is finite...”). In the case where the number of errors is not finite, the universal and informed priors still have the same asymptotic rate of growth of error (the error of the universal prior is in the big-O class of the error of the informed prior).
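To make the halving argument concrete, here is a toy simulation. It is not real Solomonoff induction: for illustration I assume a finite class of 2^X deterministic programs, one of which (index 0, the “human”) always predicts the true bit, and I use hard elimination rather than Bayesian downweighting.

```python
# Toy sketch of the halving argument above -- NOT real Solomonoff induction.
# Assumptions for illustration: a finite class of 2^X deterministic programs,
# one of which (the "human", index 0) always predicts the true bit.
import random

X = 10                      # bits needed to "specify the human"
N = 2 ** X                  # size of the hypothesis class
SEQ_LEN = 100

truth = [random.randint(0, 1) for _ in range(SEQ_LEN)]

def predict(h, t):
    """Deterministic prediction of program h for bit t."""
    if h == 0:
        return truth[t]                    # the "human" is always right
    return random.Random(h * 100003 + t).randint(0, 1)

weights = [1.0 / N] * N     # uniform prior over the 2^X programs
mistakes = 0

for t in range(SEQ_LEN):
    total = sum(weights)
    weight_on_1 = sum(w for h, w in enumerate(weights) if predict(h, t) == 1)
    guess = 1 if weight_on_1 > total / 2 else 0
    if guess != truth[t]:
        mistakes += 1
    # Update phase: eliminate every program that predicted the wrong bit.
    weights = [w if predict(h, t) == truth[t] else 0.0
               for h, w in enumerate(weights)]

# Each mistake removes at least half of the surviving weight, and the human's
# 2^-X share is never removed, so mistakes <= X.
print(f"mistakes = {mistakes}, bound = {X}")
```

The cited theorems replace the hard elimination here with Bayesian downweighting of probabilistic predictors, which is where the rigorous bounds come from.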
I don’t think this is true. I do agree some conclusions would be converged on by both systems (SI and humans), but I don’t think simplicity needs to be one of them.
When I say the ‘sense of simplicity of SI’, I use ‘simple program’ to mean the programs that SI gives the highest weight to in its predictions (these will, by definition, be the shortest programs that haven’t been ruled out by the data). The above results imply that, if humans use their own sense of simplicity to predict things, and their predictions do well at a given task, SI will be able to learn their sense of simplicity after a bounded number of errors.
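To illustrate what “shortest programs that haven’t been ruled out” means, here is a toy example. The “interpreter” is a stand-in I made up (a program is literally the pattern it repeats), and the 2^-length prior is a simplification of the universal prior; real SI runs programs on a universal machine.

```python
# Toy illustration (not real SI): bit-string "programs" get prior weight
# 2^(-length); after conditioning on observed data, the highest-weight
# surviving programs are exactly the shortest consistent ones.
from itertools import product

def consistent(program_bits, data):
    # Stand-in "interpreter": a program is just the pattern it repeats.
    reps = len(data) // len(program_bits) + 1
    prediction = (program_bits * reps)[:len(data)]
    return prediction == data

data = (0, 1, 1, 0, 1, 1)   # observed bits so far

# Prior weight 2^(-length); keep only programs consistent with the data.
posterior = {}
for length in range(1, len(data) + 1):
    for prog in product((0, 1), repeat=length):
        if consistent(prog, data):
            posterior[prog] = 2.0 ** (-length)

total = sum(posterior.values())
for prog, w in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(prog, round(w / total, 3))
# The shortest surviving program, (0, 1, 1), carries ~0.89 of the weight.
```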
How would you ask multiple questions? Practically, you’d save the state and load that state in a new SI machine (or whatever). This means the data is part of the program.
I think you can input multiple questions by just feeding in a sequence of question/answer pairs. Actually getting SI to act like a question-answering oracle is going to involve various implementation details. The above arguments are just meant to establish that SI won’t do much worse than humans at sequence prediction (of any type) -- so, to the extent that we use simplicity to attempt to predict things, SI will “learn” that sense after at most a finite number of mistakes (in particular, it won’t do any *worse* than ‘human-SI’: hypotheses ranked by the shortness of their English description, then fed to a human predictor).
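For concreteness, here is one hypothetical way to frame “feeding a sequence of question/answer pairs” to a sequence predictor. The serialisation and the predictor interface below are made up for illustration, not a claim about how SI would actually be implemented.

```python
def serialise(pairs):
    """Flatten [(question, answer), ...] into one token stream."""
    stream = []
    for question, answer in pairs:
        stream.extend(list(question) + ["?"] + list(answer) + ["."])
    return stream

history = serialise([("2+2", "4"), ("3+3", "6")])
prompt = history + list("2+3") + ["?"]
# A sequence predictor is then scored only on the tokens after each "?".
# predictor.predict_next(prompt)  # hypothetical interface; by the error
#                                 # bounds above it eventually answers as
#                                 # well as the informed (human) predictor.
```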