Then you use it to predict the program of fewer than N bits (with N sufficiently big, of course) that maximizes a utility function measuring how accurate the output of that program is as a program predictor, given that it generates this output in fewer than T steps (where T is a reasonable number for the hardware you have access to).
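For concreteness, here is a brute-force sketch of the search being described (hypothetical: `run_program` is an assumed step-bounded interpreter and `utility` an assumed scoring function; in the actual proposal you would ask the existing predictor for the answer rather than enumerate):

```python
from itertools import product

def best_predictor_program(N, T, run_program, utility):
    """Brute-force version of the search described above.

    Assumed helpers (not defined in the original discussion):
      run_program(bits, max_steps) -> program output, or None if it
                                      fails to halt within max_steps
      utility(output)              -> how good that output is as a
                                      program predictor
    """
    best_bits, best_score = None, float("-inf")
    for length in range(1, N):                       # programs of fewer than N bits
        for bits in product((0, 1), repeat=length):
            output = run_program(bits, max_steps=T)  # bounded to T steps
            if output is None:                       # didn't halt in time
                continue
            score = utility(output)
            if score > best_score:
                best_bits, best_score = bits, score
    return best_bits
```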
How do you check how “accurate” a program predictor is—if you don’t already have access to a high-quality program predictor?
You can do it in a very calculation-intensive manner: take all programs of fewer than K bits (with K sufficiently big), calculate their answers (to avoid the halting problem, wait for each answer only a finite but truly enormous number of steps, for example 3^^^3 steps), and compare them to the answers given by the program predictor. Of course you can’t do that in any reasonable amount of time, which is why you’re using the “good enough to be improved on” program predictor to predict the result of the calculation.
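A sketch of that brute-force check, with `run_program` (a step-bounded interpreter) and `predictor` (the candidate being tested) as assumed helpers:

```python
from itertools import product

def predictor_accuracy(K, step_bound, run_program, predictor):
    """Fraction of all programs of fewer than K bits whose (step-bounded)
    output the candidate predictor gets right.

    Assumed helpers: run_program executes a program for at most
    step_bound steps (returning None if it doesn't halt in time);
    predictor(bits) is the candidate's guess for that output.
    """
    hits, total = 0, 0
    for length in range(1, K):
        for bits in product((0, 1), repeat=length):
            # Waiting only a finite (but enormous, e.g. 3^^^3) number of
            # steps sidesteps the halting problem.
            actual = run_program(bits, max_steps=step_bound)
            hits += (predictor(bits) == actual)
            total += 1
    return hits / total
```

As the comment notes, this is far too expensive to run directly, so the existing “good enough to be improved on” predictor would instead be asked to estimate this score.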
It sounds as though you are proposing using your approximate program predictor as a metric of how accurate a new candidate program predictor is. However, that is not going to result in any incentive to improve on the original approximate program predictor’s faults.
In fact, I’m “asking” the program predictor to find the program which generates the best program predictor. It should be noted that the program predictor does not necessarily “consider” itself perfect: if you ask it to predict how many of the programs of fewer than M bits it will predict correctly, it won’t necessarily say “all of them” (in fact it shouldn’t say that if it’s good enough to be improved on).
You are making this harder than it needs to be. General forecasting is equivalent to general stream compression. That insight offers a simple and effective quality-testing procedure—you compress the output of randomly-configured FSMs.
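A minimal sketch of that test, using `zlib` merely as a stand-in for whatever forecaster-derived compressor is being scored:

```python
import random
import zlib

def random_fsm_stream(n_states=8, n_symbols=4, length=10_000, seed=0):
    """Output of a randomly-configured finite-state machine driven by
    random input symbols: random transition table, one emitted byte
    per state."""
    rng = random.Random(seed)
    transition = [[rng.randrange(n_states) for _ in range(n_symbols)]
                  for _ in range(n_states)]
    emit = [rng.randrange(256) for _ in range(n_states)]
    state, out = 0, bytearray()
    for _ in range(length):
        out.append(emit[state])
        state = transition[state][rng.randrange(n_symbols)]
    return bytes(out)

def compression_score(compress, stream):
    """Compressed size divided by original size; a better general
    forecaster, used as a compressor, should score lower."""
    return len(compress(stream)) / len(stream)

# zlib here is only a placeholder for the candidate compressor under test.
print(compression_score(zlib.compress, random_fsm_stream()))
```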
There’s a big existing literature about how to create compressors—it is a standard computer-science problem.
I’m not sure what you’re accusing me of making harder than it needs to be.
Could you clarify?