You’ve discovered for yourself that hypercomputation is a cottage industry of dubious and contrived articles that often seem to dress conceptual misunderstandings as computational models. Computability is not by itself a diseased discipline (to use a convenient phrase from the LW stock), but hypercomputation, I feel, is a diseased little subfield within it.
Your characterization of critiques of hypercomputation as too specific is, I think, a little unfair. Most (I hesitate to say all—haven’t thought about this in many years) HC proposals I’ve seen fall into one of these two camps:
1. Miraculous access to infinite precision.
2. Miraculous access to infinite time.
Martin Davis’s article, to which you link, seems to me to dispose of the whole first category quite well.
You ask if there’s a general critique of hypercomputation that works against all proposals, along the lines, for example, of showing that it’d require infinite-precision measurements. I don’t think that can work, because, after all, it’s not inconceivable that the Church-Turing Thesis is false; I don’t believe that there’s agreement that it stands proved or even provable (some would disagree). Perhaps there’s a plain-old-boring way to hypercompute without whimsical GR models or miraculous oracles, but just by doing something with normal and finite resources that the Turing machine model somehow overlooked; that would be hypercomputation, and I don’t know how to rule that out.
Is there a general critique of most or all published “fancy” hypercomputation proposals, from both 1. and 2. above? I don’t know of anything published (but again, it’s been many years since I actively thought about and sought out this material). I have some thoughts that may persuade you partially, or maybe not. I’m sure they wouldn’t persuade most HC enthusiasts. I wanted once to write them up for a short essay or article, but never did (at least so far). Here’s a very short summary.
The idea is that we’re seduced by HC ideas because we fail to define clearly three things:
what is computation as a mathematical model
what is computation as a physical process
why do we trust the results of computation as a physical process
As a thought experiment, consider the Pentium CPU with the floating-point division bug (google the Pentium FDIV bug for details if you’re not familiar with it). The bug was discovered by a mathematician puzzled by weird-looking division results. Normally we trust without question that our computers perform division accurately. Most of us treat our computers and calculators as black boxes that somehow manage to compute the right results (almost nobody knows the actual division algorithm implemented inside the CPU). Yet when the mathematician found a discrepancy between the CPU’s division and, say, a verification by hand, and confirmed the discrepancy with a few repeats, they must have immediately understood that the CPU was at fault, rather than the familiar long division algorithm. The CPU normally divides much faster and much more accurately than we can by hand; why didn’t it even occur to the mathematician that maybe the bug was in the long division algorithm rather than in the CPU? The reason is that the mathematician doesn’t actually trust the CPU; they trust the algorithm for dividing two numbers, and they trust, to a lesser degree, that the CPU accurately implements that algorithm, and they trust, to a lesser degree still, that there are no physical faults, cosmic rays, etc. screwing up the implementation.
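To make that trust ordering concrete, here’s a minimal sketch (Python, purely for illustration) of what the mathematician implicitly did: check the physical device’s answer against the mental model, i.e. exact arithmetic we could in principle verify by hand. The specific number pair is one commonly cited as triggering the FDIV bug; on a correct machine the check passes.

```python
# A sketch of the trust ordering from the Pentium story: cross-check the
# physical device (hardware float division) against the mental model
# (exact rational arithmetic, which we could verify by hand).
from fractions import Fraction

def check_division(a: int, b: int, tol: float = 1e-9) -> bool:
    """Compare the hardware's float division of a/b against exact arithmetic."""
    hardware = a / b            # what the CPU's divider actually returns
    exact = Fraction(a, b)      # the "mental model": exact division
    return abs(Fraction(hardware) - exact) < tol

# 4195835 / 3145727 is one of the commonly cited FDIV trigger pairs; on a
# flawed Pentium this check would fail, and we'd blame the chip, not the
# long division algorithm.
print(check_division(4195835, 3145727))
```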
This and similar thought experiments may convince you that when a physical device does something, and we then interpret its output (e.g. photons coming from the monitor’s surface to your eyes) as the result of a computation, there are two different kinds of trust we invest the process with: call them mathematical trust and physical trust. For mathematical trust to exist, there must be a mathematical model, an algorithm, which we can comprehend and make sense of entirely mentally; physical trust is then the degree to which we believe the physical device accurately implements that mathematical model. I propose that without the existence of this wholly mental mathematical model (an existence we often leave implicit and take on faith), we’re not actually justified in calling the whole thing “computation”, and the bulk of the trust that we put in its results evaporates.
There seem to be some limitations on how this mental mathematical model can work; for example, some notion of discreteness and of finite computational time seems required for us to comprehend the model at all. These may be limitations of our minds; but our notion of computation, I claim, is intrinsically linked with what our minds are capable of. If we can make explicit these bounds on what our mental mathematical models may look like, that, to me, seems our best hope of “proving” the Church-Turing Thesis.
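As an illustration of what I mean by a wholly mental, discrete, finitely specified model (a toy sketch, not part of the argument itself): a complete Turing machine is nothing but a finite transition table plus a step rule, all of which fits comfortably in the head.

```python
# A toy Turing machine simulator: the entire "mental model" is a finite
# transition table (state, symbol) -> (new_state, write, move) plus a step rule.
def run_tm(transitions, tape, state="q0", max_steps=1000):
    """Simulate a Turing machine; '_' is the blank symbol."""
    cells, head = dict(enumerate(tape)), 0
    for _ in range(max_steps):
        symbol = cells.get(head, "_")
        if (state, symbol) not in transitions:   # no rule applies: the machine halts
            return "".join(cells[i] for i in sorted(cells))
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return None                                  # step budget exhausted

# A machine that flips bits until it reads a blank; the whole model is two table rows.
flip = {("q0", "0"): ("q0", "1", "R"),
        ("q0", "1"): ("q0", "0", "R")}
print(run_tm(flip, "0110"))                      # prints "1001"
```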
All the hypercomputation proposals, from both category 1. and category 2. above, break this correspondence between a comprehensible (in principle) mental mathematical model and a physical realization, and instead depend in some crucial way on a physical thing as a non-abstractable step in the process. Just as with the infinite-precision proposals you wouldn’t actually know where to get those arbitrary-precision digits of a noncomputable real number in your mind, with the infinite-time proposals you wouldn’t know how to perform an infinite Turing machine run in your mind. Therefore, I claim, though on the surface these hypothetical devices look just like CPUs and calculators, albeit very exotic ones, in reality they are very different: they should not be called computational devices, and they do not deserve the trust we put in computational devices.

This lack of trust can usually also be demonstrated by thought experiments involving verification or validation. Say you have an exotic GR model that lets you run an infinite TM run somewhere and have it send a signal back to you during the run if the machine stops. Say you didn’t get a signal. How do you know that the machine never stopped, as opposed to something going physically wrong with the machine, the signal getting lost on the way, etc.? You may retort: but how do you know there wasn’t a cosmic ray in the CPU when it divided? There’s a difference, though. With the CPU, there’s a mental model that gets the bulk of your trust, and you trust the atoms of matter only insofar as they correspond to it. If the atoms should fail you, never mind; you’ll find other atoms that work better, and the model is intact. With the GR hypercomputation, you have to trust the signal ray itself, trust the atoms rather than the bits in your mind. I don’t think we’d be justified in doing that.
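Here’s a small sketch (hypothetical code, just to make the asymmetry vivid) of why the missing signal carries so little evidential weight: halting is the kind of fact you can positively confirm within a finite budget, whereas “no signal so far” never distinguishes “never halts” from “hasn’t halted yet”, let alone from a lost signal.

```python
# The verification asymmetry in the GR thought experiment: within any finite
# budget we can positively confirm "it halted", but "no signal yet" is
# consistent both with non-halting and with halting later (or a lost signal).
def observe(steps_to_halt, budget):
    """steps_to_halt: steps until the simulated machine halts, or None if it never does."""
    for step in range(budget):
        if steps_to_halt is not None and step >= steps_to_halt:
            return "halted"          # positive, finitely checkable evidence
    return "no signal yet"           # tells us nothing definite

print(observe(steps_to_halt=50, budget=100))      # "halted"
print(observe(steps_to_halt=None, budget=100))    # "no signal yet"
print(observe(steps_to_halt=10**6, budget=100))   # also "no signal yet"
```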
(Quantum computation is wholly exempt from this reasoning; from computability’s point of view, it’s just a nifty implementation trick to construct a physical device that executes some particular very well-defined mental algorithms in a massively parallel way, achieving a vast speedup).