An AI that reaches a certain point in its own development becomes able to improve itself. At this point, a recursive cascade of self-improvement sweeps over many internal growth curves, pushing them to near the limits of the current hardware, and the AI undergoes a vast increase in capability.
This seems like the first problem with that claim. An intelligence being able to improve itself does not necessarily lead to a recursive cascade of self-improvement, since it may only be able to improve some parts of itself, and it's quite possible that after it has made those improvements, it can't make any more.
Say that a machine intelligence learns how to optimise FOR loops: eliminating unnecessary conditions, and so on. Presto, it can optimise its entire codebase, and thus improve itself. However, that doesn't lead to a self-improving recursive cascade, because it has improved itself in only one way, and a rather limited way at that. Indeed, this kind of improvement has been going on for decades, via lint tools and automatic refactoring; the sketch below shows just how mechanical such a pass is.
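To make that concrete, here is a minimal sketch in Python of the kind of single-purpose transformation a lint or refactoring tool performs. The `DropRedundantConditions` pass and the sample source are hypothetical illustrations, not any real tool:

```python
import ast

class DropRedundantConditions(ast.NodeTransformer):
    """Delete `if True:` wrappers, one trivially unnecessary condition.

    A deliberately tiny, single-purpose pass: the sort of mechanical
    improvement that lint and refactoring tools have applied for decades.
    """

    def visit_If(self, node):
        self.generic_visit(node)  # rewrite any nested statements first
        # `if True:` always executes its body, so splice the body in
        # directly and drop the dead test (and any unreachable `else`).
        if isinstance(node.test, ast.Constant) and node.test.value is True:
            return node.body
        return node

source = """
for item in items:
    if True:
        process(item)
"""

tree = DropRedundantConditions().visit(ast.parse(source))
ast.fix_missing_locations(tree)
print(ast.unparse(tree))
# for item in items:
#     process(item)
```

The point of the toy is what it cannot do: the pass improves every loop in a codebase, including its own, yet it grants no ability to discover a second, different kind of optimisation.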
As machines get smarter, they will gradually become able to improve more and more of themselves. Yes, eventually machines will be able to cut humans out of the loop—but before that there will have been much automated improvement of machines by machines—and after that there may still be human code reviews.
This is not the first time I have made this point here. It does not seem especially hard to understand, and yet the conversation sails gaily onwards, with no coherent criticism and no sign of people updating their views: it feels like talking to a wall.
In order to learn how to optimize FOR loops in the first place, it would have to be fairly intelligent and have a general learning ability. So it wouldn't just stop after learning that; it would go on to learn more things at increased speed. Learning the first optimization would let it find further optimizations faster than it otherwise would have; the second optimization it makes helps it discover the third faster still, and so on.
It’s not clear to me how fast this process would be. Learning the next optimization faster than it otherwise would have does not mean the process was quick to begin with: it could take years to improve to superhuman ability, or it could take days. That depends on things like how long the average optimization takes to pay back the time spent researching it, and on the distribution of optimizations; perhaps after the first few they get progressively harder to discover and return less and less value.
My intuition says this process would be very fast and get very far before hitting limits, though I can’t prove that. I would point to other exponential processes for comparison, such as compound interest.
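The disagreement can be made concrete with a toy model. The sketch below is hypothetical: the function name `cascade` and every parameter (per-step gain, a decay factor for diminishing returns, research effort per discovery, the "superhuman" threshold) are invented for illustration, not a prediction. It only shows how those parameters separate the fast-cascade regime from the fizzle regime:

```python
def cascade(gain=0.10, decay=1.0, base_effort_days=30.0,
            target_speed=1000.0, max_steps=100_000):
    """Toy model of a self-improvement cascade. All numbers invented.

    gain             -- fractional speedup from the first optimization
    decay            -- < 1.0 makes later optimizations weaker, modelling
                        "progressively more difficult, less value in return"
    base_effort_days -- research effort per optimization, measured at the
                        starting speed of 1.0
    target_speed     -- speed we arbitrarily call "superhuman"
    """
    speed, days = 1.0, 0.0
    for step in range(max_steps):
        if speed >= target_speed:
            return days, step              # cascade reached the target
        days += base_effort_days / speed   # research runs at current speed
        speed *= 1.0 + gain * decay**step  # apply a (possibly weaker) gain
    return None                            # gains fizzled out first

# Constant returns compound like interest: 73 optimizations, and the
# total time converges to about 330 days as each step gets cheaper.
print(cascade(decay=1.0))

# Diminishing returns (decay=0.95): speed plateaus around 7x, and the
# cascade never reaches the target no matter how long it runs.
print(cascade(decay=0.95))
```

On these made-up numbers, constant returns finish in under a year, while a 5% decay per step caps the entire cascade at a few-fold speedup; which regime reality resembles is exactly the open question.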