Well, we can observe what answer it gives for the next case we run it on, and the next, and the next. So there is still the question of whether we expect, given that the box has passed every case we were able to test, that it will continue to give the right answer for future cases.
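Note the asymmetry in what that case-by-case checking can establish. A minimal sketch in Python, assuming a hypothetical `box.halts(i)` for the alien box's verdict and a hypothetical `simulate(i, steps)` that runs Turing machine number `i` for a bounded number of steps:

```python
def check_box_on(box, simulate, tm_index, step_budget=10**6):
    """Cross-check the box's verdict on one Turing machine.

    `box.halts(i)` and `simulate(i, steps)` are hypothetical helpers;
    `simulate` returns True iff TM `i` halts within `steps` steps.
    """
    claim = box.halts(tm_index)
    halted = simulate(tm_index, step_budget)
    if halted:
        # The simulation settles this case outright.
        return "confirmed" if claim else "refuted"
    if claim:
        # Box says it halts, just not within our step budget yet.
        return "unresolved: raise step_budget and retry"
    # Box says it never halts. Consistency can accumulate forever,
    # but no finite simulation can ever verify this verdict.
    return "consistent so far"
```

A "halts" verdict can eventually be confirmed or refuted outright, but a "never halts" verdict can only ever be refuted, which is why the box's track record never closes the question.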
Right—and the answers Solomonoff induction would give for such questions look pretty reasonable to me.
Do you mean remaining forever certain that the box can't really be a halting oracle, and that its successes thus far have been essentially luck, no matter how many successes are accumulated? If so, you're the first human I've seen express that view. Or do you have a different interpretation of how to apply Solomonoff induction to this case?
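For reference, and as a sketch of the textbook definition rather than anything specific to this thread: Solomonoff induction predicts with the universal prior over all programs for a universal prefix machine U.

```latex
% Universal (Solomonoff) prior on a finite observation sequence x:
% the sum ranges over all programs p that make the universal prefix
% machine U produce output beginning with x.
M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}
```

Every hypothesis in that mixture is a computable program, so "the box is a genuine halting oracle" is never in the hypothesis class; however well the box performs, all posterior weight stays on computable explanations of its behavior.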
For any finite subset of Turing machines, there exists a program that will act as a halting oracle on that subset. For example, the alien box might be a Giant Look-Up Table that has the right answer for every Turing machine up to some really, really big number. (Would we be able to tell the difference between a true halting oracle and one that has an upper bound on the size of a Turing machine that it can analyze accurately?)
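To make the GLUT point concrete, here is a minimal sketch. Everything in it is hypothetical, and nothing here computes the table's contents, since producing them is exactly the uncomputable part; the claim is only that such a table exists for any finite bound.

```python
class BoundedHaltingGLUT:
    """A finite lookup table posing as a halting oracle.

    `table` maps a Turing machine's index to True (halts) or False
    (runs forever) and is assumed correct for every key it contains.
    """

    def __init__(self, table):
        self.table = table
        self.bound = max(table, default=-1)

    def halts(self, tm_index):
        if tm_index in self.table:
            return self.table[tm_index]
        # Past the bound the impostor has no real answer. A sneakier
        # version would guess here instead of giving itself away.
        raise ValueError(f"TM {tm_index} exceeds the table's bound {self.bound}")
```

As long as every machine we feed it has an index at or below the bound, no sequence of tests distinguishes this box from a true oracle, which is exactly the parenthetical question.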
Luck?!? A system that can apparently quickly and reliably tell if TMs halt would not be relying on luck.