First of all, I swapped “induction” to “deduction”, because I’m a moron.
And the role of the tape in a Turing machine is equivalent to the input, output, and working memory of a computer; the Turing machine’s “program” is the finite state machine.
There’s no real semantic distinction between the original contents of the One Tape, or the finite contents of the Input Tape, or an arbitrarily complicated state-machine program, actually. You can build tape data for a Universal Turing Machine to simulate any other Turing machine.
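The interchangeability of program and tape data can be sketched with a tiny interpreter: the transition table is plain data, so one generic loop can run any machine description, which is the idea behind a Universal Turing Machine. This is a minimal illustration with an invented encoding, not a full UTM construction:

```python
# A minimal Turing-machine simulator: the machine's "program" (its
# transition table) is plain data, so one generic interpreter can run
# any machine description. Encoding and names are invented for
# illustration; a real TM has no step bound.

def run(delta, state, tape, accept, max_steps=10_000):
    """delta maps (state, symbol) -> (new_state, new_symbol, move)."""
    cells = dict(enumerate(tape))  # sparse tape; "_" means blank
    head = 0
    for _ in range(max_steps):     # bounded here; real TMs may run forever
        if state == accept:
            return state, cells
        sym = cells.get(head, "_")
        if (state, sym) not in delta:
            return state, cells    # no applicable rule: halt
        state, cells[head], move = delta[(state, sym)]
        head += 1 if move == "R" else -1
    raise RuntimeError("step budget exhausted (machine may not halt)")

# Example machine-as-data: flip bits until a blank, then accept.
flip = {
    ("s", "0"): ("s", "1", "R"),
    ("s", "1"): ("s", "0", "R"),
    ("s", "_"): ("acc", "_", "R"),
}
```

Feeding `run` a different table runs a different machine, with no change to the interpreter.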
It seems to me that the above suggests that humans are capable of cognitive tasks that Turing machines are not. But if this is true, it follows that human-level AGI is not achievable via a computer, since computers are thought to be equivalent in power to Turing machines.
No. You’re just making the analogy between Turing machines and real, physical computing devices overly strict. A simple Turing machine takes a finite input, has a finite program, and either processes for a finite time before accepting (possibly leaving a result on an output tape), or runs forever.
Real, physical computing devices, both biological and silicon, run coinductive (infinite-loop-until-it-isn’t) programs all the time. Every operating system kernel, or message loop, or game loop is a coinductive program: its chief job is to iterate forever, taking a finite time to process I/O in each step. Each step performs some definite amount of semantic work, but there is an indefinite number of steps (generally: until a special “STOP NOW” input is given). To reiterate: because these programs both loop infinitely and perform I/O (with the I/O data not being computable from anything the program has when it begins running, not being “predictable” in any sense by the program), they are physically-realizable programs that simply don’t match the neat analogy of normal Turing machines.
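The shape of such a coinductive program can be sketched in a few lines: it is built to iterate indefinitely, doing a finite, definite amount of work per step, and stops only on a special input that is not computable from its initial state. The names and sentinel here are illustrative:

```python
# Sketch of a "coinductive" event loop: iterate indefinitely, doing a
# finite amount of work per step, until a special shutdown input
# arrives. The input stream comes from outside and is not predictable
# from anything the program holds at start-up.

def event_loop(inputs, handle):
    """Process messages until the sentinel; bounded work per iteration."""
    log = []
    for msg in inputs:           # stand-in for blocking I/O reads
        if msg == "STOP NOW":    # the special shutdown input
            break
        log.append(handle(msg))  # finite, definite work per step
    return log
```

An OS kernel, message pump, or game loop has this same skeleton, just with real I/O in place of the list.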
Likewise, human beings loop infinitely and perform I/O. Which more-or-less gives away half my mathematical sketch of how we solve the problem!
They match the neat analogy of Turing machines that start with possibly infinitely many non-blank squares on their tapes. All the obvious things you might like to know are still undecidable. Is this machine guaranteed to eventually read every item of its input? Is there any input that will drive the machine into a certain one of its states? Will the machine ever write a given finite string? Will it eventually have written every finite prefix of a given infinite string? Will it ever halt? These are all straightforwardly reducible to the standard halting problem.
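The reduction pattern behind these claims can be sketched: if you could decide, say, “does this machine ever write a given string?”, you could decide the standard halting problem, so the question is undecidable too. The decider `ever_writes` below is hypothetical; everything else is ordinary code:

```python
# Sketch: reducing the halting problem to "does the machine ever write
# the string s?". `ever_writes(p, s)` is a hypothetical decider; if it
# existed, `halts` below would decide halting, a contradiction -- so no
# such decider exists. Names are invented for illustration.

def make_wrapper(machine, inp):
    """A program that runs `machine` on `inp`, then writes a sentinel."""
    def wrapper(out):
        machine(inp)             # may loop forever
        out.append("HALTED")     # written only if `machine` halts
    return wrapper

def halts(machine, inp, ever_writes):
    # The wrapper writes "HALTED" iff `machine` halts on `inp`, so the
    # hypothetical writes-decider would answer the halting question.
    return ever_writes(make_wrapper(machine, inp), "HALTED")
```

The other questions in the list reduce the same way, by instrumenting the machine so the property of interest fires exactly when the original machine halts.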
Or perhaps you would add to the model an environment that responds to what the machine does. In that case, represent the environment by an oracle (not necessarily a computable one), and define some turn-taking mechanism for the machine to ask questions and receive answers. All of the interesting questions remain undecidable.
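The turn-taking model described above can be sketched as follows, with the environment represented by a plain callable standing in for a (possibly uncomputable) oracle. The interface is invented for illustration:

```python
# Sketch of the turn-taking model: the machine alternates between a
# finite computation step and a query to an environment "oracle", which
# need not be computable. Here both are modeled as callables, and the
# interaction is truncated to a finite number of rounds for demonstration.

def interact(program, oracle, rounds):
    """Alternate: program emits a query, the environment answers, repeat."""
    answer = None
    transcript = []
    for _ in range(rounds):        # in the real model: forever
        query = program(answer)    # finite computation per turn
        answer = oracle(query)     # environment's reply, fed back in
        transcript.append((query, answer))
    return transcript
```

Nothing in this setup makes halting-style questions easier: the machine side is still an ordinary program, and the undecidability arguments go through relative to the oracle.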
I agree with this:
There’s no real semantic distinction between the original contents of the One Tape, or the finite contents of the Input Tape, or an arbitrarily complicated state-machine program, actually. You can build tape data for a Universal Turing Machine to simulate any other Turing machine.
and with this:
Real, physical computing devices, both biological and silicon, run coinductive (infinite-loop-until-it-isn’t) programs all the time. Every operating system kernel, or message loop, or game loop is a coinductive program: its chief job is to iterate forever, taking a finite time to process I/O in each step. Each step performs some definite amount of semantic work, but there is an indefinite number of steps (generally: until a special “STOP NOW” input is given).
However, I don’t see how you are going to be able to use these facts to solve the halting problem. I’m guessing that, rather than statically examining the Turing machine, you will execute it for some period of time and study the execution, and after some finite amount of time, you expect to be able to correctly state whether or not the machine will halt. Is that the basic idea?
Basically, yes. “Halting Empiricism”, you could call it. The issue is precisely that you can’t do empirical reasoning via deduction from a fixed axiom set (i.e., a fixed, finite program halts x : program -> boolean). You need to do it by inductive reasoning, instead.
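The reason no fixed, finite `halts` program can exist is the classic diagonal argument, which can be sketched directly in code. The decider passed in is hypothetical; the point is that whatever answer it gives about `diagonal` is wrong:

```python
# The diagonal argument against a total halting decider, sketched.
# `halts(p)` is a hypothetical decider for zero-argument programs; the
# program below does the opposite of whatever `halts` predicts about it,
# so no correct total `halts` can exist.

def make_diagonal(halts):
    def diagonal():
        if halts(diagonal):   # ask the decider about *this* program
            while True:       # ...then defeat it: loop forever
                pass
        # otherwise: halt immediately, again defeating the decider
    return diagonal
```

If `halts(diagonal)` says “halts”, `diagonal` loops; if it says “loops”, `diagonal` halts. Either way the fixed decider is wrong about at least one program.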
What is inductive reasoning? Bayesian updating? Or something else?
My reply to you is the same as my reply to g_pepper: it’s easier for me to just do my background research, double-check everything, and eventually publish the full formalism than it is to explain all the details in a blog comment.
You are also correct to note that whatever combination of machine, person, input tape, and empirical data I provide, the X Machine can never solve the Halting Problem for the X Machine. The real math in my thinking here involves demonstrating the existence of an ordering: there should exist a sense in which some machines are “smaller” than others, such that A can solve B’s halting problem when A is “strictly larger” than B, possibly strictly larger by some necessary amount.
(Of course, this already holds if A has an nth-level Turing Oracle, B has an mth-level Turing Oracle, and n > m, but that’s trivial from the definition of an oracle. I’m thinking of something that actually concerns physically realizable machines.)
Like I said: trying to go into extensive detail via blog comment will do nothing but confusingly unpack my intuitions about the problem, increasing net confusion. The proper thing to do is formalize, and that’s going to take a bunch of time.
This is interesting. Do you have an outline of how a person would go about determining whether a Turing machine will halt? If so, I would be interested in seeing it. Alternatively, if you have a more detailed argument as to why a person will be able to determine whether an arbitrary Turing machine will halt, even if that argument does not contain the details of how the person would proceed, I would be interested in seeing that.
Or, are you just making the argument that an intelligent person ought to be able in every case to use some combination of inductive reasoning, creativity, mathematical intuition, etc., to correctly determine whether or not a Turing machine will halt?
My reply to you is the same as my reply to Richard Kennaway: it’s easier for me to just do my background research, double-check everything, and eventually publish the full formalism than it is to explain all the details in a blog comment.
Fair enough. I am looking forward to seeing the formalism!