I didn’t mean to suggest that the possibility of hypercomputers should be taken seriously as a physical hypothesis, or at least, any more seriously than time machines, perpetual motion machines, faster-than-light travel, etc. And I think it’s similarly irrelevant to the study of intelligence, machine or human. But in my thought experiment, the way I imagined it working was that, whenever the device’s universal-Turing-machine emulator halted, you could then examine its internal state as thoroughly as you liked, to make sure everything was consistent with the hypothesis that it worked as specified (and the non-halting case could be ascertained by the presence of pixie dust 🙂). But since its memory contents upon halting could be arbitrarily large, in practice you wouldn’t be able to examine it fully even for individual computations of sufficient complexity. Still, if you did enough consistency checks on enough different kinds of computations, and the cleverest scientists couldn’t come up with a test that the machine didn’t pass, I think believing that the machine was a true halting-problem oracle would be empirically justified.
It’s true that a black box oracle could output a nonstandard “counterfeit” halting function which claimed that some actually non-halting TMs do halt, but only for TMs that can’t be proved not to halt within ZFC or any other plausible axiomatic foundation humans ever come up with, in which case we would never know that it was lying to us. It would be trickier for the device I described to pull off such a deception, because it would have to actually halt and show us its output in such cases. For example, if it claimed that some actually non-halting TM M halted, we could feed it a program that emulated M and output the number of steps M took to halt. That program would also have to halt, and output some specific number n. In principle, we could then try emulating M for n steps on a regular computer, observe that M hadn’t reached a halting state, and conclude that the device was lying to us. If n were large enough, that wouldn’t be feasible, but it’s a decisive test that a normal computer could execute in principle. I suppose my magical device could instead do something like leave an infinite output string in memory, which a normal computer would never know was infinite, because it could only ever examine finitely much of it. But finite resource bounds already prevent us from completely ruling out far-fetched hypotheses about even normal computers. We’ll never be able to test, e.g., an arbitrary-precision integer comparison function on all inputs that could feasibly be written down. Can we be sure it always returns a Boolean value, and never returns the Warner Brothers dancing frog?
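Here is a minimal sketch of that cross-check (the `step`/`is_halted` representation of M and the toy countdown machine are illustrative assumptions, not part of the thought experiment):

```python
def check_halting_claim(step, is_halted, initial_state, claimed_n):
    """Emulate M for claimed_n steps on a normal computer, testing the
    device's claim that M halts after exactly claimed_n steps.

    Returns True if the claim survives, False if the device was caught lying.
    """
    state = initial_state
    for _ in range(claimed_n):
        if is_halted(state):        # M halted earlier than claimed
            return False
        state = step(state)
    return is_halted(state)         # M must be halted now, or the device lied

# Toy "machine": count down from 3 and halt at 0.
step = lambda s: s - 1
is_halted = lambda s: s == 0
print(check_halting_claim(step, is_halted, 3, 3))  # True: claim checks out
print(check_halting_claim(step, is_halted, 3, 5))  # False: caught lying
```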
Actually, hypothesizing that my device “computed” a nonstandard version of the halting function would already be sort of self-defeating from a standpoint of skepticism about hypercomputation, because all nonstandard models of Peano arithmetic are known to be uncomputable. A better skeptical hypothesis would be that the device passed off some actually halting TMs as non-halting, but only in cases where the shortest proof that any of those TMs eventually halts was too long for humans to have discovered yet. I don’t know enough about Solomonoff induction to say whether it would unduly privilege such hypotheses over the hypothesis that the device was a true hypercomputer (if it could even entertain such a hypothesis). Intuitively, though, it seems to me that, if you went long enough without finding proof that the device wasn’t a true hypercomputer, continuing to insist that such a proof would be found at some future time would start to sound like a God-of-the-gaps argument. I think this reasoning is valid even in a hypothetical universe in which human brains couldn’t do anything Turing machines can’t do, but other physical systems could. I admit that’s a nontrivial, contestable conclusion. I’m just going on intuition here.
Nearly everything you said here was already addressed in my previous comment. Perhaps I didn’t explain myself clearly?
> It would be trickier for the device I described to pull off such a deception, because it would have to actually halt and show us its output in such cases.
I wrote before that “I wonder how would you tell whether it is the hypercomputer you imagine it to be, versus the realization of the same hypercomputer in some non-standard model of ZFC?”
So, the realization of a particular hypercomputer in a non-standard model of ZFC would pass all of your tests. You could examine its internal state or its output any way you like (i.e. ask any question that can be formulated in the language of ZFC) and everything you see would be consistent with ZFC. The number of steps for a machine that doesn’t actually halt would be a non-standard number, so it would not fit in any finite storage. You could examine some finite subset of its digits (either from the end or from the beginning), for example, but that would not tell you the number is non-standard. For any question of the form “is n larger than some known number n0?” the answer would always be “yes”.
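To illustrate the point, here is a toy Python stand-in (my own illustration, not anything from the thread): an object can answer every finite probe exactly as a non-standard number would, and every answer remains consistent with some enormous standard number.

```python
class NonstandardLike:
    """Answers every finite probe the way a non-standard step count would."""

    def __gt__(self, n0):
        # "Is n larger than some known number n0?" -- always yes.
        return True

    def low_order_digit(self, i):
        # Fixed answers for finitely many digit queries are consistent too.
        return 9

n = NonstandardLike()
print(n > 10**100)      # True
print(n > 10**(10**6))  # True; each answer is also consistent with some
                        # enormous *standard* number, so finite probing
                        # never reveals non-standardness
```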
> But finite resource bounds already prevent us from completely ruling out far-fetched hypotheses about even normal computers. We’ll never be able to test, e.g., an arbitrary-precision integer comparison function on all inputs that could feasibly be written down. Can we be sure it always returns a Boolean value, and never returns the Warner Brothers dancing frog?
Once again, there is a difference of principle. I wrote before that: “...given an uncomputable function h and a system under test f, there is no sequence of computable tests that will allow you to form some credence about the hypothesis f=h s.t. this credence will converge to 1 when the hypothesis is true and 0 when the hypothesis is false. (This can be made an actual theorem.) This is different from the situation with normal computers (i.e. computable h) when you can devise such a sequence of tests.”
So, with normal computers you can become increasingly certain your hypothesis regarding the computer is true (even if you never become literally 100% certain, except in the limit), whereas with a hypercomputer you cannot.
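The computable case is easy to exhibit concretely. A minimal sketch, assuming h is handed to us as an ordinary Python function (all names here are illustrative):

```python
def credence_after_n_tests(f, h, n):
    """Credence in the hypothesis f == h after comparing them on inputs 0..n:
    constantly 1 while no mismatch is seen, permanently 0 afterwards.

    Converges to 1 when f == h and to 0 when f != h. No analogous computable
    scheme exists when h is uncomputable, which is the asymmetry at issue.
    """
    return 0.0 if any(f(k) != h(k) for k in range(n + 1)) else 1.0

h = lambda k: k % 2                        # some computable reference function
f_same = lambda k: k % 2
f_diff = lambda k: 0 if k == 7 else k % 2
print(credence_after_n_tests(f_same, h, 100))  # 1.0, and stays 1.0 forever
print(credence_after_n_tests(f_diff, h, 100))  # 0.0 once the mismatch at k=7 is seen
```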
> Actually, hypothesizing that my device “computed” a nonstandard version of the halting function would already be sort of self-defeating from a standpoint of skepticism about hypercomputation, because all nonstandard models of Peano arithmetic are known to be uncomputable.
Yes, I already wrote that: “Although you can in principle have a class of uncomputable hypotheses s.t. you can asymptotically verify f is in the class, for example the class of all functions h s.t. it is consistent with ZFC that h is the halting function. But the verification would be extremely slow and relatively parsimonious competing hypotheses would remain plausible for an extremely (uncomputably) long time. In any case, notice that the class itself has, in some strong sense, a computable description: specifically, the computable verification procedure itself.”
So, yes, you could theoretically become certain the device is a hypercomputer (although reaching high certainty would take a very long time), without knowing precisely which hypercomputer it is, but that doesn’t mean you need to add non-computable hypotheses to your “prior”, since that knowledge would still be expressible as a computable property of the world.
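One way to make the quoted class precise, as a hedged formalization of my own (the intended statement may differ):

$$\mathcal{H} \;=\; \bigl\{\, h : \mathbb{N} \to \{0,1\} \;\bigm|\; \mathrm{ZFC} \cup \{\,\text{“TM } M \text{ halts} \leftrightarrow h(M) = 1\text{”} : M \in \mathbb{N} \,\} \text{ is consistent} \,\bigr\}$$

By compactness, h falls outside this class exactly when some ZFC proof refutes finitely many of its values, so a procedure that enumerates ZFC proofs and rejects h upon finding such a refutation would serve as the computable verification procedure mentioned in the quote.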
> I don’t know enough about Solomonoff induction to say whether it would unduly privilege such hypotheses over the hypothesis that the device was a true hypercomputer (if it could even entertain such a hypothesis).
Literal Solomonoff induction (or even bounded versions of Solomonoff induction) is probably not the ultimate “true” model of induction; I was just using it as a simple example before. The true model will allow expressing hypotheses such as “all the even-numbered bits in the sequence are 1”, which involve computable properties of the environment that do not specify it completely. Making this idea precise is somewhat technical.
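For instance, a toy sketch of that example hypothesis (assuming 0-based bit positions; the function name is mine):

```python
def satisfies_hypothesis(bits):
    """True iff every even-numbered bit is 1 (0-based positions assumed)."""
    return all(b == 1 for i, b in enumerate(bits) if i % 2 == 0)

print(satisfies_hypothesis([1, 0, 1, 1, 1]))  # True: odd positions are unconstrained
print(satisfies_hypothesis([1, 0, 0, 1]))     # False: position 2 is 0
```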