There is no adult master pianist whose ability to learn new pieces is orders of magnitude better than that of a 12-year-old prodigy (i.e., the same master pianist when they were 12 years old). The primary difference between them is not their ability to learn, but how much they have learned—e.g., pianistic technique, non-pianistic skills related to general musicianship, musical interpretation and style, etc.
Recursive self-improvement isn’t completely well defined, and I was only making the point that the learning process for humans involves some element of recursive self-improvement. The piano example is no longer entirely useful at this point, because we are each picking whatever more specific version of it suits our own position. For example, I could reply that you are wrong to contrast the child prodigy with the master pianist, because that confuses the intended comparison, which was between a pianist and a non-pianist. The point of the example is that any experienced pianist can learn new pieces far, far faster than a noob. Since learning new pieces amounts to more knowledge, more experience, more technique, more poise, and so on, the process equates to self-improvement. Thus the experienced pianist has definitely achieved a level of meta-improvement: he has improved his ability to improve. However, you could reply that the experienced pianist no longer continues this meta-learning process (as compared to the prodigy), and that the sense of recursive self-improvement has therefore been irreparably weakened and no longer carries the significance we are trying to attach to the term. In other words, you might claim that humans’ recursive self-improvement lacks the required longevity. In any case, let’s return to the main point.
The main point is that humans do recursively self-improve, on some level, in some fashion. Why should we expect a formal computer that recursively self-improves to reach greater heights?
I realize that there is a problem with my original question: it is too large in scope, perhaps too fundamental, for this kind of short, bullet-point Q&A. Still, it would be nice if people could give references or something more substantial to educate me.
The main point is that humans do recursively self-improve, on some level, in some fashion. Why should we expect a formal computer that recursively self-improves to reach greater heights?
There are many reasons, but here are a few that should be sufficient. First, it’s much, much easier for a computer program to change its own program than it is for a human being to change theirs: the program, having been artificially designed, would be far more modular and self-comprehensible than the human brain and genome (quite apart from how much easier it is to change bits in memory than synapses in a brain), whereas the human’s is embedded in a brain that takes decades to mature and is a horrible mess of poorly understood, interdependent spaghetti code. Second, a computer program can safely and easily make perfect copies of itself for experimentation, and can try out different ideas on these copies. Third, a computer program can trivially scale up by adding more hardware (assuming it was designed to be parallelizable, which it would be).
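To make the second of those points concrete, here is a toy sketch in Python. Everything in it is invented for illustration (the “program” being improved is just a parameter vector, and the improvement rule is plain hill-climbing), so treat it as a cartoon of the copy-and-experiment loop, not a claim about how a real system would work:

# A toy sketch, not a real self-improving system: the "self" is a
# parameter vector, a "copy" is a perturbed clone, and "improvement"
# is hill-climbing. All names here are invented for illustration.
import random

def fitness(params):
    # Stand-in objective: how well this copy performs its task.
    return -sum((p - 3.0) ** 2 for p in params)

def spawn_copy(params, noise=0.1):
    # Copying a program that is pure data is a one-line operation.
    return [p + random.gauss(0, noise) for p in params]

current = [0.0, 0.0, 0.0]
for generation in range(200):
    copies = [spawn_copy(current) for _ in range(20)]
    best = max(copies, key=fitness)
    if fitness(best) > fitness(current):
        current = best  # the "self" is replaced by its improved copy

print(current)  # drifts toward the optimum at [3.0, 3.0, 3.0]

The algorithm itself is nothing special; the point is that the whole copy-evaluate-replace cycle, which is awkward and dangerous to run on a brain, is a few lines for a program.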
First of all, it’s pure conjecture that a programmed system of near-human intelligence would be any simpler than a human brain. A highly complicated program such as a modern OS is practically incomprehensible to a single individual.
Second of all, there is no direct correlation between speed and intelligence. Just because a computer can scale up for more processing power doesn’t mean that it’s any smarter. Hence it can’t suddenly use this technique to “foom” via RSI.
Third, making copies of itself is a non-trivial activity: it amounts to self-simulation, which amounts to an exponential reduction in the processing power available. I don’t see the AGI being able to make copies of itself much more easily than, say, two humans …reproducing… and waiting nine months for a baby.
It’s conjecture, yes, but not pure conjecture. Natural selection doesn’t optimize, it satisfices, and its slow process of accreting new features and repurposing existing systems for alternative uses ensures that there’s lots of redundancy, with lots of room for simplification and improvement. When has the artificial solution ever been as complex as the naturally evolved alternative it replaced, and why should the human brain be any different?
Intelligence tests are timed for a reason: speed is one aspect of intelligence. If the program is smart enough (which it is, by hypothesis) to eventually come across the right theory, consider the right hypothesis, develop the appropriate mathematics, and so on (just as we might argue the smartest human beings do), then more processing power makes that happen much faster, since the many dead ends can be reached and abandoned sooner, and the alternatives explored more quickly.
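As a deliberately simple sketch of that point, assuming only that the candidate tests are independent (test_hypothesis and the magic answer 42 below are invented placeholders, not any real search algorithm): when almost every candidate is a dead end, ten workers rule them out in roughly a tenth of the wall-clock time one worker would take.

# A minimal sketch: more processing power means the dead ends are
# eliminated sooner. The test function and "42" are placeholders.
from multiprocessing import Pool
import time

def test_hypothesis(h):
    time.sleep(0.1)      # pretend each test takes real work
    return h, h == 42    # almost every candidate is a dead end

if __name__ == "__main__":
    candidates = range(100)
    with Pool(processes=10) as pool:  # 10x the processing power...
        results = pool.map(test_hypothesis, candidates)
    print([h for h, ok in results if ok])  # ...finds 42 ~10x sooner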
Making a copy of itself requires a handful of machine instructions, and sending that copy to a new processing node with instructions on what hypotheses to investigate is a few more instructions. I feel like I’m being trolled here, with the suggestion that copying a big number in computer memory from one location to another can’t be done any more easily than creating a human baby (and don’t forget educating it for 20 years).
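For the record, here is roughly what “a handful of machine instructions” means in practice. This is a minimal sketch, Unix-only, with placeholder hypothesis strings: a single fork() call duplicates the entire running process, state and all.

# A minimal sketch, Unix-only: os.fork() copies the whole running
# process in one call. The hypothesis strings are placeholders.
import os

hypotheses = ["theory-A", "theory-B", "theory-C"]

for h in hypotheses:
    pid = os.fork()    # one call: a complete copy of this process
    if pid == 0:       # the child copy goes off to investigate
        print(f"copy {os.getpid()} investigating {h}")
        os._exit(0)    # the copy exits when its work is done

for _ in hypotheses:   # the original waits for all of its copies
    os.wait()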
A highly complicated program such as a modern OS is practically incomprehensible to a single individual.
And yet its source code is much more comprehensible (and, crucially, much more maintainable) than the DNA of even a very simple single-celled organism.
Re: Why should we expect a formal computer that recursively self-improves to reach greater heights?
Google has already self-improved to much greater heights. Evolution apparently favours intelligence—and Google has substantial resources, isn’t constrained by the human birth canal, and can be easily engineered.
Learning things can itself help improve your ability to learn new things. The classic example of this is language—but much the same applies to musical skills.
What do “orders of magnitude” have to do with the issue? Surely the relevant concept is “self-improvement by orders of magnitude” instead.
Also, on what scale are you measuring? An adult master pianist can probably learn in days pieces that a 12-year-old would take literally years to develop the skills to perform, so I am sceptical about the “orders of magnitude” claim.
The measure I had in mind was how long it takes to learn a new piece from scratch so that you can perform it to the best of your current abilities. It’s true that those abilities continue to increase past age 12, and for the moment that may rule out pieces beyond the current ability level, but the point is that the rate at which the 12-year-old learns everything they already have the technique for is not radically different from the adult’s. There are no quantum leaps in the rate of learning, as there would be if we were dealing with recursive self-improvement that had iterated many times.
Humans certainly have their limits. However, computers can learn music in microseconds—and their ability to learn is improving ever faster.
I think that, to argue there is not yet recursive self-improvement going on, you have to at the very least stick to those things that the “machine” part of the man-machine symbiosis can’t yet contribute towards.
Of course, that does NOT include important things like designing computers, making CPUs, or computer programming.