Like I said before—a human is indistinguishable (at some level) from a GAI, and yet its intelligence stops increasing somewhere; specifically, somewhere squarely within the human range. Induction from this case suggests that a true (computerized) GAI would face a similar barrier to its own self-improvement.
Humans don’t recursively self-improve and they don’t have access to their source code (yet).
Well, there is a human self-improvement industry—and humans do have access to some of their mental programming—the parts that can be accessed by education.
Companies have much greater self-improvement potential than humans, though.
I disagree. Humans don’t recursively self-improve? What about a master pianist? Do they just start out a master pianist or do they gradually improve their technique until they reach their mastery? Humans show extreme capability in the action of learning, and this is precisely analogous to what you call “recursive self-improvement”.
Of course the source code isn’t known now and I left that open in my statement. But even if the source code were completely understood, as I said before, this does not imply we would understand (or could even figure out) how to make it fundamentally better. For example—given the Windows OS source code, could you fundamentally improve it? (I’m sure this is a terrible example, so if you hate Windows, substitute your favorite OS instead.) It’s not clear that you could, even given the code.
To be more precise, recursive self-improvement in humans (like learning how to learn more effectively) is limited to small improvements and few recursions. It is of a fundamentally different nature than recursive self-improvement would be for an agent that had access to and understood its source code and was able to recursively self-improve at the source code level for a large number of recursions in relatively quick succession. The analogous kind of recursive self-improvement in humans would be if it were possible to improve your intelligence to a significant degree (say 1 standard deviation) and then use that intelligence to find even better ways of increasing your intelligence (e.g., if you were already a scientist studying means of increasing human intelligence), and if this recursion happened quickly enough that you could recurse many times.
Your pianist example fails for the following reasons. Pianists spend most of their time “improving their piano (and general musicianship) skills”. They don’t spend most of their time improving “their ability to improve their piano skills”, and there is no tight feedback loop of “improved piano skills” leading to even greater ability to “improve piano skills” leading to still greater ability to “improve piano skills”. The outcome of practicing and improving piano skills is almost entirely “improved piano skills”, and not very much “improved ability to improve piano skills”. In the early days of study, they might actually focus on meta-techniques related to effective piano study, but in that case, note again that it’s not genuinely recursive because “studying meta-techniques related to learning the piano” doesn’t improve their ability to “study meta-techniques related to learning the piano”.
You make a number of assumptions here and you also ignore my previous comments regarding the following point: you assume that knowledge of one’s source code permits a fundamentally more powerful kind of recursive self-improvement. This is a crucial assumption on which your argument rests… if this assumption is false (and it is certainly unsubstantiated) then we have no reason to believe that a GAI can do any more than what a human can do, given full knowledge of the brain. And as we know, there are some serious limitations on what we can do with the brain. Thus the concept of recursive self-improvement leading to super-human intelligence reduces (essentially) to the problem of drug and surgical treatment and enhancement, which rightly has a limited-sounding ring to it.
Furthermore, your assumptions include (for example) the idea that such a thing as the agent you describe can possibly exist. It is all well and good in theory to talk abstractly about a system (e.g., a human) improving its intelligence in order to improve its intelligence, but you seem to draw a kind of arbitrary distinction between this process and the more common processes involved in human activities, like piano playing.
In the piano example, you are just incorrect to claim there is no recursive self-improvement (henceforth RSI) going on there. Just consider the following ideas.
Specifically, you point out that pianists don’t actively seek to recursively self-improve, which is true, as it would be hopelessly convoluted and they would never learn how to actually play anything. However, you neglect to consider the passive recursive self-improvement that takes place in the process. It is clear from the simple observation that an experienced pianist can learn (i.e., sight-read a new piece and play it) much better than a beginner pianist. Since learning pieces is exactly what makes you a better pianist, this is empirical evidence of recursive self-improvement. It is beside the point that this may not be of the same degree as your idealized RSI, which is an arbitrary, impractical, and undemonstrated mode, as pointed out above. It is also beside the point that the pianist doesn’t actively seek out this technique. (Even if a GAI didn’t actively seek to RSI its “intelligence” but still did so, we would achieve the same end results.)
And as we know, there are some serious limitations on what we can do with the brain.
Yes, but imagine not only that we have complete access to the brain’s source code, but that the brain is digitally implemented and any component can be changed at whim. What could we achieve? At the very least, some very helpful things, if not superintelligence:
We already have examples of drugs and diseases that boost cognitive performance. Personally, I’ve been hyperthyroid before. The cognitive boost at the peak was very pronounced. This can’t be sustained in wetware (at the moment) for various reasons. None of those reasons, as far as I’ve seen, would matter in silicon. Sustainable hyperthyroidism via alterations to my ‘source code’ alone would make me 5 times more productive.
Once a mechanism of action is understood, it’s likely its effect can be increased, at least a little. For instance, nootropics (such as piracetam, huperzine A, modafinil) work via chemical pathways. It seems reasonable to expect that bypassing the chemical aspect and directly tweaking the code should provide better results. If nothing else, the quality and quantity of the dose can be regularized and optimized much more efficiently. This isn’t even mentioning all the drugs that can’t cross the blood-brain barrier but which could be directly ‘injected’ into individual neurons in a simulation, and even those are a tiny subset of all the ‘drugs’ that could be tried by directly changing the way the neuron works. Many nootropics, too, either diminish in effect over time (for chemical reasons) or tax the body in unsupportable ways, as with hyperthyroidism: neither of these would pose a problem for a silicon brain, which could be permanently pumped up on a whole cocktail of crazy drugs and mood modifiers without worrying about the damage being done to the endocrine system or any other fragile wetware.
In short, an uploaded human with access to its source code and an understanding of neurology and biochemistry, while probably falling short of superintelligence, would have a hell of an advantage over meatspace humans, even without hardware acceleration.
you assume that knowledge of one’s source code permits a fundamentally more powerful kind of recursive self-improvement.
It’s not really a difference in kind so much as a radical difference in efficiency.
If asked to improve a C program, do you think a C programmer would rather have a memory dump of the running program or the memory dump and the source code for the program? The source code is a huge help in understanding and improving the program, and this translates into an ability to make improvements at a rate that is orders of magnitude greater with the source code than without. There’s no reason to expect the case to be different for programs that are AGIs than for other kinds of programs, and no reason to expect it to be different for programmers that are AGIs than for human programmers. On the contrary, I think the advantage of having and understanding the source code increases as programs get larger and more complex, and is greater for programs that were artificially designed and are modular and highly compressed versus naturally evolved programs that have lots of redundancy and are non-modular.
Several posts in this thread seem to be confusing recursive self-improvement with merely iterative self-improvement.
If a human pianist practices to get better, and then practices some more to get even better, and then practices some more to get even better than that, then that is ISI: essentially linear growth.
RSI in humans would have to involve things like rationality training and “learning how to learn”: getting better at getting better.
ISI does not go foom. RSI can. (A human RSI foom would involve neurosurgery and transhumanism.)
Thanks for trying to clear that up, but again, you’re not understanding the piano example. I’m not going to repeat it again, as that would just be redundant, but if you read the example carefully, you will see that there is empirical evidence of recursive self-improvement. This isn’t a matter of confusion.
The pianist may seem like they are just practicing to get better, then practicing some more to get even better, as you say. However, if you look at the final product (the highly experienced pianist), he isn’t just better at playing—he is also much better at learning to play better. This is RSI, even though his intentions (as you correctly say) may not be explicitly set up to achieve RSI.
I reread it (here, right?) and I don’t see anything about recursion.
Yes, a master pianist can learn a new piece faster than a novice can, but this is merely… let’s call it concentric self-improvement. The master is (0) good at playing piano, (1) good at learning to do 0, (2) good at learning to do 1, etc., for finitely many levels in a strict, non-tangled hierarchy.
This is fundamentally different-in-kind from being (0) good at playing piano, and (1) good at learning to do 0 and 1. ISI grows linearly, CSI grows polynomially (of potentially very large degree), and RSI grows superexponentially.
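To make those growth claims concrete, here is a toy numerical sketch (my own illustration, with arbitrary constants, not anyone’s formal model): ISI adds a fixed increment per step, CSI runs a fixed hierarchy of meta-levels, and RSI feeds current ability back into the size of the improvement step itself.

```python
# Toy model of the three growth modes; "ability" is an abstract number and the
# constants are arbitrary, so only the shapes of the curves matter.

def isi(steps, gain=1.0):
    """Iterative self-improvement: a fixed gain per step -> linear growth."""
    a = 1.0
    for _ in range(steps):
        a += gain
    return a

def csi(steps, levels=3, gain=1.0):
    """Concentric self-improvement: a fixed, non-tangled hierarchy in which
    level k speeds up level k-1 -> roughly polynomial growth of degree `levels`."""
    rates = [gain] * levels   # rates[0] improves ability; rates[k] improves rates[k-1]
    a = 1.0
    for _ in range(steps):
        for k in range(levels - 1, 0, -1):
            rates[k - 1] += rates[k]
        a += rates[0]
    return a

def rsi(steps, gain=0.1):
    """Recursive self-improvement: the improvement step scales with ability
    applied to itself (a += gain * a * a); the continuous limit da/dt = c*a^2
    blows up in finite time, i.e. superexponential growth."""
    a = 1.0
    for _ in range(steps):
        a += gain * a * a
    return a

for n in (10, 50, 200):
    print(n, isi(n), csi(n), rsi(n))   # rsi overflows to inf once it "fooms"
```

Of course this says nothing about whether the RSI regime is actually reachable (that is exactly what is under dispute); it only illustrates why the distinction between the modes matters.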
We self-improve. Recursive self-improvement means improving our means of self-improvement. Reading about tactics for efficient self-improvement would be recursive self-improvement. Altering your neurology to gain eidetic memory so you can remember all the answers to the test, while using your neurology to figure out how to do it, would be recursive self-improvement (of the kind we don’t have). Then, with the comprehension problems that eidetics get, you keep on altering your neurology.
It’s like eating your own dog food, in computer parlance.
As you say, humans can certainly improve their means of self-improvement. They do that by things like learning languages, learning thinking tools, and inventing new thinking tools that are then passed down the generations.
IMO, those who want “recursive self improvement” to refer to something that doesn’t yet exist must tie themselves in knots with counter-intuitive and non-literal conceptions of what that phrase means.
I agree with you and it’s a good distinction to make. But I think that it is not trivial to just divide out how much of what we do is self-improvement and how much is recursive self-improvement. As you say, it is definitely possible for us to do recursive self-improvement (meta-learning, or whatever you want to call it). That’s really all that needs to be said to reiterate that humans stand as a sort of natural GAI.
I do think, as with the case of a master pianist, or all sorts of trades, that our ability to learn increases with our actual understanding. So the master pianist has not just self-improved; he has also gained the ability to play highly technical pieces on sight—which represents the action of recursive self-improvement. He has learned how to learn pieces much more easily, faster, and better.
There is no adult master pianist whose ability to learn new pieces is orders of magnitude better than that of a 12-year old prodigy (i.e., the same master pianist when they were 12 years old). The primary difference between them is not their ability to learn, but how much they have learned—i.e., pianistic technique, non-pianistic skills related to general musicianship, musical interpretation and style, etc.
Recursive self-improvement isn’t completely well defined, and I was only making the point that the learning process for humans involves some element of recursive self-improvement. The piano example at this point is no longer entirely useful, because we are just picking and choosing any kind of more specific example to suit our personal opinions. For example, I could reply that you are wrong to contrast the child prodigy with the master pianist, because that confuses the intended comparison between a pianist and a non-pianist. The point of the example is that any experienced pianist can learn new pieces far, far faster than a noob. Since learning new pieces amounts to more knowledge and more experience, more technique, poise, and so on, this process equates to self-improvement. Thus, the experienced pianist has definitely achieved a level of meta-improvement, or improving his ability to improve. However, you could reply that the experienced pianist no longer continues his meta-learning process (as compared to the prodigy), and therefore the sense of recursive self-improvement has been irreparably weakened and no longer retains the same level of significance as we are trying to attach to the term. In other words, you might claim that humans don’t have the required longevity to their recursive self-improvement. In any case, let’s return to the main point.
The main point is that humans do recursively self-improve, on some level, in some fashion. Why should we expect a formal computer that recursively self-improves to reach some greater heights?
I realize that there is somewhat of a problem with my original question in that it is too large in scope, perhaps too fundamental for this kind of small, bullet point type of Q&A. Still, it would be nice if people could maybe give more references or something more serious in order to educate me.
There are many reasons, but here are just a few that should be sufficient: it’s much, much easier for a computer program to change its own program (which, having been artificially designed, would be far more modular and self-comprehensible than the human brain and genome, independently of how much easier it is to change bits in memory than synapses in a brain) than it is for a human being to change their own program (which is embedded in a brain that takes decades to mature and is a horrible mess of poorly understood, interdependent spaghetti code); a computer program can safely and easily make perfect copies of itself for experimentation and can try out different ideas on these copies; and a computer program can trivially scale up by adding more hardware (assuming it was designed to be parallelizable, which it would be).
First of all, it’s purely conjecture that a programmed system of near human intelligence would be any simpler than a human brain. A highly complicated program such as a modern OS is practically incomprehensible to a single individual.
Second of all, there is no direct correlation between speed and intelligence. Just because a computer can scale up for more processing power doesn’t mean that it’s any smarter. Hence it can’t all of a sudden use this technique to RSI “foom”.
Third, making copies of itself is a non-trivial activity which amounts to simulating itself, which amounts to an exponential reduction in the processing power available to it. I don’t see the GAI being able to make copies of itself much more easily than, say, two humans …reproducing… and waiting 9 months to get a baby.
it’s conjecture, yes, but not pure conjecture. Natural selection doesn’t optimize, it satisfices, and the slow process of accreting new features and repurposing existing systems for alternative uses ensures that there’s lots of redundancy, with lots of room for simplification and improvement. When has the artificial solution ever been as complex as the naturally evolved alternative it replaced, and why should the human brain be any different?
Intelligence tests are timed for a reason, and that’s because speed is one aspect of intelligence. If the program is smart enough (which it is, by hypothesis) that it will eventually come across the right theory, consider the right hypothesis, develop the appropriate mathematics, etc. (just as we might argue the smartest human beings would), then more processing power results in that happening much faster, since the many dead ends can be reached faster and the alternatives explored more quickly.
Making a copy of itself requires a handful of machine instructions, and sending that copy to a new processing node with instructions on what hypotheses to investigate is a few more instructions. I feel like I’m being trolled here, with the suggestion that copying a big number in computer memory from one location to another can’t be done any more easily than creating a human baby (and don’t forget educating it for 20 years).
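For what it’s worth, here is a minimal sketch of the “cheap copies” point in Python rather than machine code; `investigate` is a hypothetical placeholder for whatever search the agent would actually run, and everything else is the standard library.

```python
from multiprocessing import Pool

def investigate(hypothesis):
    # Placeholder: a real agent would run its full reasoning here.
    return hypothesis, f"result of exploring {hypothesis!r}"

if __name__ == "__main__":
    hypotheses = ["theory A", "theory B", "theory C", "theory D"]
    # Each worker starts life as a copy of this process (a fork on Unix),
    # then diverges to work on the hypothesis it was handed.
    with Pool(processes=len(hypotheses)) as pool:
        for hyp, result in pool.map(investigate, hypotheses):
            print(hyp, "->", result)
```

The point isn’t that a GAI would literally use `multiprocessing`; it’s that duplicating a running program and handing each copy a different line of investigation is a routine, nearly free operation, whereas duplicating a human researcher is not.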
A highly complicated program such as a modern OS is practically incomprehensible to a single individual.
And yet its source code is much more comprehensible (and, crucially, much more maintainable) than the DNA of even a very simple single-celled organism.
Re: Why should we expect a formal computer that recursively self-improves to reach some greater heights?
Google has already self-improved to much greater heights. Evolution apparently favours intelligence—and Google has substantial resources, isn’t constrained by the human birth canal, and can be easily engineered.
Learning things can itself help improve your ability to learn new things. The classic example of this is language—but much the same applies to musical skills.
What do “orders of magnitude” have to do with the issue? Surely that’s the concept of “self-improvement by orders of magnitude” instead.
Also, on what scale are you measuring? Adult master pianists can probably learn, in days, pieces for which a 12-year-old would literally take years to master the necessary skills—so I am sceptical about the “orders of magnitude” claim.
The measure I had in mind was how long it takes to learn a new piece from scratch so that you can perform it to the absolute best of your current abilities. It’s true that the abilities themselves continue to increase past age 12, which for the moment may preclude certain things that are beyond the current ability level, but the point is that the rate of learning of everything the 12-year-old already has the technique for is not radically different from that of the adult. There are no quantum leaps in rate of learning, as would be expected if we were dealing with recursive self-improvement that iterated many times.
Humans certainly have their limits. However, computers can learn music in microseconds—and their abilities to learn rapidly are growing ever faster.
I think that to argue there is not yet recursive self-improvement going on, you have to, at the very least, stick to those things that the “machine” part of the man-machine symbiosis can’t yet contribute towards.
Of course, that does NOT include important things like designing computers, making CPUs, or computer programming.
Evolution hasn’t stopped.
Irrelevant. Timescale of human evolution is far, far longer than the projections for AI development. For all intensive purposes, it has stopped.
Furthermore, even though evolution may not have stopped, I think it is obvious that it (or rather the selection pressures) has changed so much that its modern implications are unclear.
“Intents and purposes”.
It is irrelevant to us. It is highly relevant to your claims in the previous post.
It appears you are trying to shift the emphasis from the argument itself to the particular semantics of how things are being said. This is undesirable. I am speaking about the irrelevancy of his argument, not the irrelevancy of his statement. His statement is clearly relevant. To rehash: despite the fact that evolution is still going, on a suitably local (say, 100,000-year) timescale humans have reached a major plateau in intelligence.
It appears you are trying to shift the emphasis from the argument itself to the particular semantics of how things are being said.
Quite the reverse.
humans have reached a major plateau in intelligence.
No they haven’t. Whatever effect the currently volatile evolutionary pressures may have on human intelligence, a ‘major plateau’ would be incredible.
Please elaborate as I fail to understand your reasoning.
The reason we care about whether evolution has stopped is that we care how significant the current level of human intelligence is. So yes, the current level is significant in the sense that it can be predicted given only what century it is; that doesn’t mean it’s significant in the sense that self-improving artificial intelligence is likely to hit a snag there.