A thing already known to computer scientists, but still useful to remember: as per Kleene’s normal form theorem, a universal Turing machine is a primitive recursive function.
Meaning that if an angel gives you the encoding of a program you only need recursion, and not unbounded search, to run it.
The claim as stated is false. The standard notion of a UTM takes a representation of a program, and interprets it. That’s not primitive recursive, because the interpreter has an unbounded loop in it. The thing that is primitive recursive is a function that takes a program and a number of steps to run it for (this corresponds to the U and T in the normal form theorem), but that’s not quite what is usually meant by a universal machine.
I think the fact that you just need one loop is interesting, but it doesn’t go as far as you claim; if an angel gives you a program, you still don’t know how many steps to run it for, so you still need that one unbounded loop.
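The “one unbounded loop” point can be sketched in code. Everything below is a toy illustration, not the real construction: a “program” is just a step function on states, “halting” means reaching a fixed point, `run_for` uses only a bounded loop (the analogue of primitive recursion), and the single unbounded `while` in `interpret` plays the role of the μ-operator.

```python
def run_for(program, state, steps):
    """Run `program` for exactly `steps` steps -- a bounded loop,
    the analogue of primitive recursion."""
    for _ in range(steps):
        state = program(state)  # one transition step
    return state

def halted(program, state, steps):
    """Check whether the machine has halted within `steps` steps.
    In this toy model, 'halted' means the state is a fixed point."""
    final = run_for(program, state, steps)
    return program(final) == final

def interpret(program, state):
    """The universal interpreter: one unbounded search (mu-operator)
    over step counts, wrapped around bounded execution."""
    steps = 0
    while not halted(program, state, steps):  # the one unbounded loop
        steps += 1
    return run_for(program, state, steps)

# Toy program: count down to zero, then stay there (a fixed point).
countdown = lambda n: n - 1 if n > 0 else 0
print(interpret(countdown, 5))  # -> 0
```

The search over step counts is the only place where non-termination can enter; every other loop in the sketch is bounded in advance.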
The standard notion of a UTM takes a representation of a program, and interprets it
Nope. The standard notion of a UTM takes the representation of a program and an input, and interprets it. With the caveat that the computation terminates!
What you say, that the number given to the UTM is the number of steps for which the machine must run, is not what is asserted by Kleene’s theorem, which is about functions of natural numbers: the T relation checks, primitive recursively, the encoding of a program and of an input, which is then fed to the universal interpreter.
You do not tell a Turing machine for how many steps it needs to run, because once a function is defined on an input, it will run and then stop. The fact that some partial recursive function is undefined for some inputs is accounted for by the unbounded search, but this term is not part of the U or the T function.
The Kleene equivalence needs, as you say, unbounded search, but if the T check passes, it means that x is the encoding of e and n (a program and its input), and that the function will terminate on that input. No need to say for how many steps to run the function.
Indeed, this is true of and evident in any programming language: you give to the interpreter the program and the input, not the number of steps.
See Wikipedia. The point is that T does not just take the input n to the program to be run; it takes an argument x which encodes the entire list of steps the program e would execute on that input. In particular, the length of the list x is the number of steps. That’s why T can be primitive recursive.
From the page you link:
The T predicate is primitive recursive in the sense that there is a primitive recursive function that, given inputs for the predicate, correctly determines the truth value of the predicate on those inputs.
Also from the same page:
This states there exists a primitive recursive function U such that a function f of one integer argument is computable if and only if there is a number e such that for all n one has f(n) ≃ U(μx T1(e, n, x)).
A counterexample to your claim: Ackermann(m,m) is a computable function, hence computable by a universal Turing machine. Yet it is designed not to be primitive recursive.
And indeed Kleene’s normal form theorem requires one application of the μ-Operator. Which introduces unbounded search.
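For concreteness, the usual counterexample transcribed into Python (a toy sketch; Python permits general recursion freely, which is exactly what primitive recursion forbids): the Ackermann function is computable, but its recursion on m restarts with an unboundedly larger n, which no fixed nesting of bounded loops can express.

```python
import sys
sys.setrecursionlimit(100_000)  # the nesting gets deep quickly

def ackermann(m, n):
    """Computable but not primitive recursive: the recursion on `m`
    restarts with a new, unboundedly large `n`."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # -> 9
print(ackermann(3, 3))  # -> 61
```

Already ackermann(4, 2) has about 19,000 decimal digits, so a bounded-in-advance loop count is hopeless.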
Yes, but the U() and the T() are primitive recursive. Unbounded search is necessary to get the encoding of the program, but not to execute it, that’s why I said “if an angel gives you the encoding”.
The normal form theorem indeed says that any partial recursive function is equivalent to two primitive recursive functions / relations, namely U and T, and one application of unbounded search.
Quoting https://en.wikipedia.org/wiki/Kleene%27s_T_predicate:
The ternary relation T1(e,i,x) takes three natural numbers as arguments. The triples of numbers (e,i,x) that belong to the relation (the ones for which T1(e,i,x) is true) are defined to be exactly the triples in which x encodes a computation history of the computable function with index e when run with input i, and the program halts as the last step of this computation history.
In other words: If someone gives you an encoding of a program, an encoding of its input and a trace of its run, you can check with a primitive recursive function whether you have been lied to.
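A toy model of that check, with the encodings simplified away (a “program” is a step function, a “trace” is an explicit list of states, and halting means reaching a fixed point — all assumptions of this sketch, not Kleene’s actual encodings): the verification loop is bounded by the length of the given trace, which is why it can stay primitive recursive.

```python
def T(program, inp, trace):
    """Toy Kleene T: True iff `trace` is a genuine halting computation
    history of `program` on `inp`. Every loop is bounded by the length
    of the trace the caller hands us."""
    if not trace or trace[0] != inp:
        return False
    for i in range(len(trace) - 1):           # bounded by len(trace)
        if program(trace[i]) != trace[i + 1]:
            return False                      # the trace lied about a step
    return program(trace[-1]) == trace[-1]    # halted: last state is fixed

def U(trace):
    """Toy Kleene U: just read the output off the end of the trace."""
    return trace[-1]

countdown = lambda n: n - 1 if n > 0 else 0
good = [3, 2, 1, 0]
print(T(countdown, 3, good), U(good))  # -> True 0
print(T(countdown, 3, [3, 2, 0]))      # -> False (a forged trace)
```

Only the later unbounded search over candidate traces can fail to terminate; checking any single candidate always finishes.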
Oh! This point had evaded me: I thought x encoded just the program and the input, not the entire computation history.
So U, instead of executing anything, just locates the last thing written on the tape according to x and repeats it.
Well, I’m disappointed… at U and at myself.
Why is this useful to remember?
Because primitive recursion is quite easy, and so it is quite easy to get a universal Turing machine. Filling that machine with a useful program is another thing entirely, but that’s why we have evolution and programmers...
Something that also makes this point is AIXI. All the complexity of human-level AGI or beyond can be accomplished in a few short lines of code… if you had the luxury of running with infinite compute resources and allowed some handwavery around defining utility functions. The real challenge isn’t solving the problem in principle, but defining the problem in the first place and then reducing the solution to practice / conforming to the constraints of the real world.
“A few short lines of code...”
AIXI is not computable.
If we had a computer that could execute any finite number of lines of code instantaneously, and an infinite amount of memory, we would not know how to make it behave intelligently.
This is incorrect. AIXI is “not computable” only in the sense that it will not halt on the sorts of problems we care about on a real computer of realistically finite capabilities in a finite amount of time. That’s not what is generally meant by ‘computable’. But in any case if you assume these restrictions away as you did (infinite clock speed, infinite memory) then it absolutely is computable in the sense that you can define a Turing machine to perform the computation, and the computation will terminate in a finite amount of time, under the specified assumptions.
Simple reinforcement learning coupled with Solomonoff induction and an Occam prior (aka AIXI) results in intelligent behavior on arbitrary problem sets. It just also has computational requirements that are impossible to meet in practice. But that’s very different from uncomputability.
Sorry, you are simply mistaken here. Go and read more about it before you say anything else.
Okay random person on the internet.
If you can’t use Google, see here. They even explain exactly why you are mistaken—because Solomonoff induction is not computable in the first place, so nothing using it can be computable.
Taboo the word computable. (If that’s not enough of a hint, notice that Solomonoff is “incomputable” only for finite computers, whereas this thread is assuming infinite computational resources.)
Again, you are mistaken. I assumed that you could execute any finite number of instructions in an instant. Computing Solomonoff probabilities requires executing an infinite number of instructions, since it implies assigning probabilities to all possible hypotheses that result in the appearances.
In other words, if you assume the ability to execute an infinite number of instructions (as opposed to simply the instantaneous execution of any finite number), you will indeed be able to “compute” the incomputable. But you will also be able to solve the halting problem, by running a program for an infinite number of steps and checking whether it halts during that process or not. As you said earlier, this is not what is typically meant by computable.
(If that is not clear enough for you, consider the fact that a Turing machine is allowed an infinite amount of “memory” by definition, and the amount of time it takes to execute a program is no part of the formalism. So “computable” and “incomputable” in standard terminology do indeed apply to computers with infinite resources in the sense that I specified.)
Solomonoff induction is not in fact infinite due to the Occam prior, because a minimax branch pruning algorithm eventually trims high-complexity possibilities.
Ok, let’s go back and review this conversation.
You started out by saying, in essence, that general AI is just a matter of having good enough hardware.
You were wrong. Dead wrong. The opposite is true: it is purely a matter of software, and sufficiently good hardware. We have no idea how good the hardware needs to be. It is possible that a general AI could be programmed on the PC I am currently using, for all we know. Since we simply do not know how to program an AI, we do not know whether it could run on this computer or not.
You supported your mistake with the false claim that AIXI and Solomonoff induction are computable, in the usual, technical sense. You spoke of this as though it were a simple fact that any well educated person knows. The truth was the opposite: neither one is computable, in the usual, technical sense. And the usual technical sense of incomputable implies that the thing is incomputable even without a limitation on memory or clock speed, as long as you are allowed to execute only a finite number of instructions, even instantaneously.
You respond now by saying, “Solomonoff induction is not in fact infinite...” Then you are not talking about Solomonoff induction, but some approximation of it. But in that case, conclusions that follow from the technical sense of Solomonoff induction do not follow. So you have no reason to assume that some particular program will result in intelligent behavior, even removing limitations of memory and clock speed. And until someone finds that program, and proves that it will result in intelligent behavior, no one knows how to program general AI, even without hardware limitations. That is our present situation.
Ok this is where the misunderstanding happened. What I said was “if you had the luxury of running with infinite compute resources and allow some handwavery around defining utility functions.” Truly infinite compute resources will never exist. So that’s not a claim about “we just need better hardware” but rather “if we had magic oracle pixie dust, it’d be easy.”
The rest I am uninterested in debating further.
That’s fine. As far as I can see you have corrected your mistaken view, even though you have the usual human desire not to admit that you have done so; such a correction is a good thing, not a bad thing. Your statement would be true if you meant by infinite resources the ability to execute an infinite number of statements, and complete that infinite process. In the same way it would be true that we could solve the halting problem, and resolve the truth or falsehood of every mathematical claim. But in fact you meant that you have unlimited resources in a more practical sense: unlimited memory and computing speed (it is evident that you meant this, since when I stipulated this you persisted in your mistaken assertion). And this is not enough, without the software knowledge that we do not have.
Sorry, no, you seem to have completely missed the minimax aspect of the problem—an infinite integral with a weight that limits to zero has finitely bounded solutions. But it is not worth my time to debate this. Good day, sir.
I did not miss the fact that you are talking about an approximation. There is no guarantee that any particular approximation will result in intelligent behavior. Claiming that there is, is claiming to know more than all the AI experts in the world.
Also, at this point you are retracting your correction and adopting your original absurd view, which is unfortunate.