A simple version of this is the sequence 1, 2, 4, 8, 16, … Each term is 1 greater than what you would predict by fitting a polynomial to the preceding terms.
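A minimal toy sketch of that construction in Python, using finite differences (which, for n points, give the same extrapolation as an exact degree-(n-1) polynomial fit):

```python
def poly_extrapolate(terms):
    """Next value of the degree-(len(terms)-1) polynomial through the terms,
    computed exactly via a finite-difference table."""
    rows = [list(terms)]
    while len(rows[-1]) > 1:
        r = rows[-1]
        rows.append([b - a for a, b in zip(r, r[1:])])
    nxt = rows[-1][-1]              # the top row is constant, so it just repeats
    for r in reversed(rows[:-1]):
        nxt += r[-1]                # fold the extension back down the table
    return nxt

seq = [1]
for _ in range(7):
    seq.append(poly_extrapolate(seq) + 1)   # always 1 more than the polynomial fit
print(seq)   # [1, 2, 4, 8, 16, 32, 64, 128] -- it reproduces the powers of two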
The polynomial restriction is an interesting twist. Are you asserting that the space of considered algorithms is ALWAYS smaller than the space of actual generative algorithms one might encounter? I think that’s a restatement of the uncomputability of the “standard solution”, but I have to think about it more.
The non-restricted proposed method (take all algorithms, inversely weighted by program length, and remove the ones that get disproven by the observed sequence) likely gives 32 as the next prediction (2^n), but considers 31 (the number of regions of a circle created by the chords of n points on the circumference) as only slightly less likely, and others as less likely still.
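A toy illustration of that method, with a hand-picked finite hypothesis list and made-up “program lengths” (the real mixture runs over all programs and is uncomputable):

```python
from math import comb, factorial

# Hand-picked stand-ins for "all algorithms"; the lengths are invented for
# illustration, not actual program lengths.
HYPOTHESES = [
    ("2^(n-1)",                        10, lambda n: 2 ** (n - 1)),
    ("circle regions C(n,4)+C(n,2)+1", 13, lambda n: comb(n, 4) + comb(n, 2) + 1),
    ("n!",                             11, lambda n: factorial(n)),
]

def predict_next(observed):
    """Drop hypotheses disproven by the observed prefix, then spread the
    2^(-length) weights of the survivors over the values they predict next."""
    n = len(observed)
    votes = {}
    for _name, length, f in HYPOTHESES:
        if all(f(i + 1) == x for i, x in enumerate(observed)):
            value = f(n + 1)
            votes[value] = votes.get(value, 0.0) + 2.0 ** (-length)
    total = sum(votes.values())
    return {value: weight / total for value, weight in votes.items()}

print(predict_next([1, 2, 4, 8, 16]))
# n! was disproven at the third term; 2^(n-1) and the circle-regions formula
# both survive, so 32 and 31 split the posterior, with 32 ahead because of
# its (made-up) shorter program.
```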
Yes, it seems that the generated sequence will always be outside the space of considered predictors. When the predictors are polynomial, the sequence can be exponential. When the predictors are algorithms in general, the sequence is uncomputable.
Would it be possible to make a space of all possible predictors? No, because the way the sequence is generated is… kinda like a probabilistic version of Cantor’s diagonal argument: only instead of contradicting the n-th prediction in the n-th step, it tries to contradict the greatest mass of remaining predictions at each step. But that still means that each individual prediction will be contradicted at some moment, because if its prior probability is p, then at each step either this specific prediction is contradicted, or some other set of predictions with total probability at least p is contradicted; so we arrive at this specific prediction after at most 1/p steps.
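A toy run of that diagonalization over a binary alphabet, with a small made-up predictor family and made-up priors (purely illustrative; the argument above needs countably many predictors, so a finite family just dies out quickly):

```python
# Each predictor maps the step index to its predicted bit; priors are made up.
PREDICTORS = {
    "all zeros":       (0.40, lambda n: 0),
    "all ones":        (0.25, lambda n: 1),
    "alternating":     (0.20, lambda n: n % 2),
    "one every third": (0.15, lambda n: 1 if n % 3 == 2 else 0),
}

def diagonal_sequence(steps):
    alive = dict(PREDICTORS)
    history = []
    for n in range(steps):
        if not alive:
            break
        # Total surviving mass behind each possible next bit.
        mass = {0: 0.0, 1: 0.0}
        for prior, f in alive.values():
            mass[f(n)] += prior
        # Contradict the greatest mass: output the other bit.
        actual = 1 - max(mass, key=mass.get)
        history.append(actual)
        # Everyone who guessed wrong is now disproven.
        alive = {k: (p, f) for k, (p, f) in alive.items() if f(n) == actual}
        print(f"step {n}: output {actual}, survivors {sorted(alive)}")
    return history

diagonal_sequence(5)
```

With only finitely many predictors the whole mass is exhausted after a couple of steps; with a countable family there is always mass left to whittle away, and a predictor with prior p survives at most about 1/p such contradictions, as argued above.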
Also, if we want to assign a nonzero prior probability to each prediction, we can only have a countable number of predictions. But there is an uncountable number of possible sequences. Therefore some of them (actually most of them) are outside the set of considered predictions.
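The countability step can be spelled out with the standard argument: if every considered prediction h gets a prior p(h) > 0 and the priors sum to at most 1, then

$$\{h : p(h) > 0\} \;=\; \bigcup_{k=1}^{\infty} \{h : p(h) > 1/k\}, \qquad \bigl|\{h : p(h) > 1/k\}\bigr| \le k,$$

a countable union of finite sets, hence countable; whereas already the binary sequences form an uncountable set by Cantor’s diagonal argument.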