ETA: Ag, just before posting this I realized Hal Finney had already basically raised this same point on the original thread! Still, I think this expands on it a little...
You know, if Wei Dai’s alien black box halting decider scenario were to occur, I’m not sure there is any level of black-box evidence that could convince me they were telling the truth. (Note: To make later things make sense, I’m going to assume the claim is that if the program halts, it actually tells you the output, not just that it halts.)
It’s not so much that I’m committed to the Turing notion of computability—presumably opening the box would, if they’re telling us the truth, let us learn this new Turing-uncomputable physics. The problem is that, without hypercomputers ourselves, we don’t really have any way of verifying their claim in the first place. Of course the set of halting programs is semicomputable, so we can certainly verify its “yes” answers (if not quickly); but its “no” answers can only be verified in the cases where we ourselves have precomputed the answer by proving it for that particular case (or family of cases). In short, we can verify that it’s correct on the easy cases, but it’s not clear why we should believe it’s correct on the hard cases we actually care about. In other words, we can only verify it by checking it against a precomputed list of our own, and ISTM that if we could precompute it, they could have done the same.
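To make the asymmetry concrete, here is a toy model (all names my own invention) with programs represented as Python generators that yield once per step and return their output on halting. A “halts with this output” claim can be checked by brute simulation, but only if the program halts within our step budget; a “runs forever” claim can never be confirmed this way at any budget:

```python
def check_yes_answer(program, claimed_output, step_budget):
    """Try to verify a 'halts, with this output' claim by brute-force
    simulation. Programs are generators: one yield per step, with the
    output delivered as the generator's return value."""
    it = program()
    try:
        for _ in range(step_budget):
            next(it)
    except StopIteration as stop:
        return stop.value == claimed_output  # claim confirmed or refuted
    return None  # budget exhausted: we learned nothing either way
```

Note the three-valued result: a `None` is not evidence against the box, it just means the case was too hard for us to check.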
If you’re not being careful, you could just justify the claim with “induction”, but even setting aside the precomputed-list idea, induction equally supports the hypothesis that the box simply runs programs for a fixed but really long time and then reports whether they’ve halted, which requires nothing uncomputable and so is more probable a priori. (The fact that Wei Dai said nothing about the computation time makes this a bit trickier, but presumably their computers may be far faster than ours.)
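The bounded-budget cheat is trivial to write down; in the same toy generator model as above (names mine), it is just:

```python
def fake_halting_oracle(program, budget=10**6):
    """A computable impostor for a halting oracle: step the program (a
    generator) for a fixed but huge budget, then report. It matches a
    true halting oracle on every program that halts within the budget,
    and silently lies ('runs forever') exactly on the slow halters,
    which are the cases too hard for us to check ourselves."""
    it = program()
    try:
        for _ in range(budget):
            next(it)
    except StopIteration as stop:
        return ('halts', stop.value)
    return ('runs forever', None)
```

Every black-box observation we can afford to verify is consistent with this impostor for a large enough `budget`, and the impostor has a short computable description.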
Now if it instead claimed, say, that it was only guaranteed correct for programs using polynomial space, then we’d be in better shape, thanks to IP = PSPACE: even if we couldn’t replicate its results quickly, we could at least verify them quickly. We could actually give it hard cases that we can’t do (quickly) ourselves, and then verify that it got them right. (Except I’m glossing over some problems here—IIRC it’s undecidable whether a program runs in polynomial space in the first place, so while the box presumably doesn’t have a precomputed list of answers for each input, it might well have a precomputed list of which programs below a certain length are polynomial-space, and then just run those much faster than we can. We could instead make it an oracle for a single PSPACE-complete problem, but then of course nothing uncomputable is going on, so there’s no real puzzle in the first place; it could just be a really fast brute-force solver. This would let us quickly verify that they have much more advanced computers than us, ones that solve PSPACE-complete problems in an instant, but that’s not nearly as surprising. Not sure if there’s any way to make this example really work as intended.)
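For concreteness, here is a toy version of the kind of interactive verification that IP = PSPACE rests on: the sum-check protocol, which lets a weak verifier check a claimed sum of a low-degree polynomial over the Boolean cube while evaluating the polynomial itself only once. This is a minimal sketch (all identifiers mine) with the prover simulated honestly in-process; a real prover would be the black box:

```python
import random

P = 2**61 - 1  # prime modulus; all arithmetic is over the field F_P

def cube_sum(g, n, prefix):
    """Sum g over all 0/1 settings of the variables not fixed by prefix."""
    if len(prefix) == n:
        return g(prefix) % P
    return (cube_sum(g, n, prefix + [0]) + cube_sum(g, n, prefix + [1])) % P

def interp_eval(ys, x):
    """Lagrange-evaluate at x the polynomial of degree < len(ys) that
    takes value ys[j] at the point j."""
    total = 0
    for j, yj in enumerate(ys):
        num, den = 1, 1
        for m in range(len(ys)):
            if m != j:
                num = num * (x - m) % P
                den = den * (j - m) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P
    return total

def sumcheck(g, n, d, claimed_sum):
    """Verify the claim sum_{x in {0,1}^n} g(x) = claimed_sum, where d
    bounds g's degree in each variable. The honest prover's messages are
    computed here by cube_sum; the verifier's own work per round is tiny."""
    rs, claim = [], claimed_sum % P
    for _ in range(n):
        # Prover's round message: the univariate round polynomial,
        # given by its values at the points 0..d.
        ys = [cube_sum(g, n, rs + [t]) for t in range(d + 1)]
        if (ys[0] + ys[1]) % P != claim:  # consistency with current claim
            return False
        r = random.randrange(P)           # verifier's random challenge
        claim = interp_eval(ys, r)
        rs.append(r)
    return g(rs) % P == claim             # one evaluation we do ourselves
```

With an honest prover, a false claim fails the very first consistency check; a cheating prover can only survive a round by guessing the verifier’s random field element, which succeeds with probability about d/P per round.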
When we test our own programs, we have some idea of what’s in the black box—we have some idea how they work, and we are just verifying that we haven’t screwed them up. Since we have some idea of the algorithm, we can construct tricky test cases for the parts we are most likely to have gotten wrong. And even when verifying a program from someone untrustworthy, SFAICT we do this by inferring how the program probably works, what approaches look right but fail on hard cases, what key steps it probably relies on, or how it might cheat, and then writing test cases for those. Of course you can’t rule out arbitrarily advanced cheats this way, but we have other evidence against those: they’d take up too much space, or they’d be even harder to pull off than doing it correctly. In the case of a halting oracle, the problem is that there is no point at which such a ridiculous cheat would seem even harder than doing it correctly.
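In the bounded-budget case, the natural “tricky test case” is a program engineered to halt just past the suspected budget. A sketch in the same toy generator model (names mine) shows both the test and why it never settles the matter:

```python
def capped_oracle(program, budget):
    """The suspected cheat: simulate for `budget` steps and guess."""
    it = program()
    try:
        for _ in range(budget):
            next(it)
    except StopIteration:
        return 'halts'
    return 'runs forever'

def just_past_budget(suspected_budget):
    """Adversarial test case: a program that halts, but only after
    suspected_budget + 1 steps. A genuine oracle answers 'halts';
    a cheat capped at suspected_budget steps answers 'runs forever'."""
    def program():
        for _ in range(suspected_budget + 1):
            yield
        return 'done'
    return program
```

The catch: to know the cheat’s answer is wrong, we must run past the budget ourselves, so each such test only pushes the hypothesized budget upward; it never rules the cheat out.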
So until the black box is opened, I’m not sure that this is a good argument against the universal prior, though I suppose it does still argue that the universal prior + Bayes doesn’t quite capture our intuitive notion of induction.
I’m not sure there is any level of black-box evidence that could convince me they were telling the truth.
I’m afraid I can’t do much better at this point than to cite Harvey Friedman’s position on this. (He posted his alien crystal ball scenario before I posted my alien black box, and obviously knows a lot more about this stuff than I do.)
I now come to the various deep questions—conceptually and technically—that arise when attempting to make “proofs” that the Crystal Ball from the hyperaliens is “the real thing” and that the information gleaned from its use is “proved”.

I believe very strongly that rather subtle probabilistic arguments combined with rather clever TMs, will show, in various “rigorous” senses, that the Crystal Ball is the real thing (provided it is in fact the real thing).
Here are the relevant discussion threads on the Foundations of Mathematics mailing list:
http://cs.nyu.edu/pipermail/fom/2004-February/subject.html#7934
http://cs.nyu.edu/pipermail/fom/2004-March/subject.html#8003
In the case of a halting oracle, the problem is that there is no point at which such a ridiculous cheat would seem even harder than doing it correctly.
Assuming the laws of physics actually do allow a halting oracle to be implemented, then at some point it would be easier to just implement one than to pull off these ridiculous cheats, right? As we rule out various possible cheats, that intuitively raises our credence that a halting oracle can be physically implemented, which contradicts the universal prior.
...having actually read those now, those threads didn’t seem very helpful. :-/
Assuming the laws of physics actually do allow a halting oracle to be implemented, then at some point it would be easier to just implement one than to pull off these ridiculous cheats, right? As we rule out various possible cheats, that intuitively raises our credence that a halting oracle can be physically implemented, which contradicts the universal prior.
Hm, indeed. Actually, it occurred to me after writing this that one thing to look at might be the size of the device, since there are, as far as we know, limits on how small you can make your computational units. No idea how you’d put that into action, though.
Huh?
(15 minutes later)
Now I know what I’ll be reading for the rest of the week. Thanks!