Something that also makes this point is AIXI. All the complexity of human-level AGI or beyond can be accomplished in a few short lines of code… if you had the luxury of running with infinite compute resources and allowed some handwavery around defining utility functions. The real challenge isn’t solving the problem in principle, but defining the problem in the first place and then reducing the solution to practice, i.e. conforming to the constraints of the real world.
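(For concreteness, here is a toy Python sketch of what those “few short lines” look like: a one-step expectimax over an Occam-weighted hypothesis class. Everything in it is a hypothetical stand-in; `environments` and `simulate` replace the real construction, which enumerates all programs for a universal Turing machine and plans over entire future action sequences, and that is exactly where the infinite compute comes in.)

```python
from itertools import product

def environments(max_len):
    """Enumerate toy 'environment programs' as bitstrings up to max_len bits."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def simulate(program, history, action):
    """Hypothetical stand-in for running an environment program:
    reward 1 if the action matches the program's bit at this step."""
    step = len(history) % len(program)
    return 1.0 if program[step] == str(action) else 0.0

def aixi_action(history, actions=(0, 1), max_len=10):
    """Pick the action with the highest expected reward under an
    Occam prior weighting each program by 2 ** -length."""
    def value(action):
        return sum(2.0 ** -len(p) * simulate(p, history, action)
                   for p in environments(max_len))
    return max(actions, key=value)

print(aixi_action(history=[0, 1, 0]))
```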
“A few short lines of code...”
AIXI is not computable.
If we had a computer that could execute any finite number of lines of code instantaneously, and an infinite amount of memory, we would not know how to make it behave intelligently.
This is incorrect. AIXI is “not computable” only in the sense that it will not halt on the sorts of problems we care about on a real computer of realistically finite capabilities in a finite amount of time. That’s not what is generally meant by ‘computable’. But in any case if you assume these restrictions away as you did (infinite clock speed, infinite memory) then it absolutely is computable in the sense that you can define a Turing machine to perform the computation, and the computation will terminate in a finite amount of time, under the specified assumptions.
Simple reinforcement learning coupled with Solomonoff induction and an Occam prior (aka AIXI) results in intelligent behavior on arbitrary problem sets. It just also imposes computational requirements that are impossible to meet in practice. But that’s very different from uncomputability.
Sorry, you are simply mistaken here. Go and read more about it before you say anything else.
Okay random person on the internet.
If you can’t use Google, see here. They even explain exactly why you are mistaken—because Solomonoff induction is not computable in the first place, so nothing using it can be computable.
Taboo the word computable. (If that’s not enough of a hint, notice that Solomonoff is “incomputable” only for finite computers, whereas this thread is assuming infinite computational resources.)
Again, you are mistaken. I assumed that you could execute any finite number of instructions in an instant. Computing Solomonoff probabilities requires executing an infinite number of instructions, since it implies assigning probabilities to all possible hypotheses that result in the appearances.
In other words, if you assume the ability to execute an infinite number of instructions (as opposed to simply the instantaneous execution of any finite number), you will indeed be able to “compute” the incomputable. But you will also be able to solve the halting problem, by running a program for an infinite number of steps and checking whether it halts during that process or not. As you said earlier, this is not what is typically meant by computable.
(If that is not clear enough for you, consider the fact that a Turing machine is allowed an infinite amount of “memory” by definition, and the amount of time it takes to execute a program is no part of the formalism. So “computable” and “incomputable” in standard terminology do indeed apply to computers with infinite resources in the sense that I specified.)
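(To spell out the two infinities at issue, a sketch in Python under toy assumptions; `universal_machine` is a hypothetical stand-in for the reference machine, whose programs are prefix-free in the real construction. The outer loop never terminates, and no choice of `max_steps` can distinguish a slow program from a non-halting one, which is the halting problem again.)

```python
from itertools import count, product

def universal_machine(program, max_steps):
    """Hypothetical stand-in: run `program` on a reference universal
    Turing machine for at most max_steps, returning its output, or
    None if it has not halted yet."""
    ...

def solomonoff_probability(x):
    """M(x) = sum of 2 ** -len(p) over all programs p that output x.
    Two infinities block any real computer: the loop over program
    lengths never ends, and no finite max_steps can tell 'still
    running' apart from 'never halts'."""
    total = 0.0
    for n in count(1):                      # runs forever: every program length
        for bits in product("01", repeat=n):
            p = "".join(bits)
            if universal_machine(p, max_steps=10 ** 6) == x:
                total += 2.0 ** -n
    return total                            # never reached
```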
Solomonoff induction is not in fact infinite, due to the Occam prior: a minimax branch-pruning algorithm eventually trims the high-complexity possibilities.
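(For what it’s worth, here is what such a pruned version would look like, sketched in Python under toy assumptions; `matches` is a hypothetical, decidable stand-in for “program p accounts for observation x”. The pruning is what makes the loop terminate, and it does so precisely by replacing the Solomonoff sum with a truncation of it, which is the point taken up in the reply below.)

```python
from itertools import product

def truncated_prior(x, matches, max_len):
    """Sum 2 ** -len(p) over programs up to max_len bits satisfying
    the (assumed decidable) predicate matches(p, x). Under the
    prefix-free encoding the real construction uses, the dropped tail
    carries vanishing prior weight, but the result is an approximation
    to the Solomonoff prior, not the prior itself."""
    total = 0.0
    for n in range(1, max_len + 1):     # terminates: lengths are bounded
        for bits in product("01", repeat=n):
            p = "".join(bits)
            if matches(p, x):
                total += 2.0 ** -n
    return total
```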
Ok, let’s go back and review this conversation.
You started out by saying, in essence, that general AI is just a matter of having good enough hardware.
You were wrong. Dead wrong. The opposite is true: it is first and foremost a matter of software, given sufficiently good hardware. We have no idea how good the hardware needs to be. It is possible that a general AI could be programmed on the PC I am currently using, for all we know. Since we simply do not know how to program an AI, we do not know whether it could run on this computer or not.
You supported your mistake with the false claim that AIXI and Solomonoff induction are computable, in the usual, technical sense. You spoke of this as though it were a simple fact that any well educated person knows. The truth was the opposite: neither one is computable, in the usual, technical sense. And the usual technical sense of incomputable implies that the thing is incomputable even without any limitation on memory or clock speed, as long as you are only allowed to execute a finite number of instructions, even if each one executes instantaneously.
You respond now by saying, “Solomonoff induction is not in fact infinite...” Then you are not talking about Solomonoff induction, but some approximation of it. But in that case, conclusions that follow from the technical sense of Solomonoff induction do not follow. So you have no reason to assume that some particular program will result in intelligent behavior, even removing limitations of memory and clock speed. And until someone finds that program, and proves that it will result in intelligent behavior, no one knows how to program general AI, even without hardware limitations. That is our present situation.
Ok this is where the misunderstanding happened. What I said was “if you had the luxury of running with infinite compute resources and allow some handwavery around defining utility functions.” Truly infinite compute resources will never exist. So that’s not a claim about “we just need better hardware” but rather “if we had magic oracle pixie dust, it’d be easy.”
The rest I am uninterested in debating further.
That’s fine. As far as I can see you have corrected your mistaken view, even if you have the usual human desire not to admit that you have done so; such a correction is a good thing, not a bad thing. Your statement would be true if you meant by infinite resources the ability to execute an infinite number of statements, and to complete that infinite process. In the same way it would be true that we could solve the halting problem, and resolve the truth or falsehood of every mathematical claim. But in fact you meant unlimited resources in a more practical sense: unlimited memory and computing speed (it is evident that you meant this, since when I stipulated exactly that, you persisted in your mistaken assertion). And this is not enough, without the software knowledge that we do not have.
Sorry, no, you seem to have completely missed the minimax aspect of the problem: an infinite integral with a weight that tends to zero has finitely bounded solutions. But it is not worth my time to debate this. Good day, sir.
I did not miss the fact that you are talking about an approximation. There is no guarantee that any particular approximation will result in intelligent behavior. Claiming that there is, is claiming to know more than all the AI experts in the world.
Also, at this point you are retracting your correction and adopting your original absurd view, which is unfortunate.