there’s a significant difference between the capability of a program you can write in one year versus two years
The program that the AI is writing is itself, so the second half of those two years takes less than one year, with the speedup set by the factor of “significant difference”. And if there’s a significant difference from 1 to 2, there ought to be a significant difference from 2 to 4 as well, no? But the time taken to get from 2 to 4 is not two years; it is 2 years divided by the square of whatever integer you care to represent “significant difference” with (call it sigdif). And then the time from 4 to 8 is (4 years / sigdif^3). You see where this goes?
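To see where it goes, here is a toy calculation. The pattern it uses (the k-th doubling needing 2^(k-1) years of original-speed work, carried out by a designer sigdif^k times faster) is just one reading of the numbers above, and sigdif = 3 is an arbitrary value picked purely for illustration:

```python
# Toy illustration of the shrinking doubling times described above.
# The pattern and the value of sigdif are illustrative assumptions.

sigdif = 3  # how much better "significantly better" is, as a speed multiplier

times = [1 + 1 / sigdif]  # first doubling: one year, then a faster second half
for k in range(2, 10):
    work = 2 ** (k - 1)    # years of work at the original speed
    speedup = sigdif ** k  # the designer carrying out that work
    times.append(work / speedup)

total = 0.0
for k, t in enumerate(times, start=1):
    total += t
    print(f"doubling {k}: {t:.3f} years (cumulative {total:.3f})")

# With sigdif > 2 each term shrinks geometrically, so the cumulative time
# converges: unboundedly many doublings fit into a finite span.
```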
EDIT: I see that you addressed recursion in posts further down. Compound interest does not follow your curve of capability; compound interest is an example of strong recursion: money begets money. Weak recursion: money buys hardware to run software that aids a human in designing new hardware, which earns money to buy more hardware. Strong recursion: software designs new software, which designs newer software; money begets money, which begets more money. Think of the foom as compound interest on intelligence.
Compound interest is a fine analogy: it delivers a smooth exponential growth curve, and we have seen technological progress do likewise.
What I am arguing against is the “AI foom” claim that you can get faster growth than this, e.g. each successive doubling taking half the time. The reason this doesn’t work is that each successive doubling is exponentially harder.
What if the output feeds back into the input, so the system as a whole is developing itself? Then you get the curve of capability: a straight line on a log-log graph of capability against input, i.e. a power law, which, with the input itself growing exponentially over time, again manifests as smooth exponential growth.
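A minimal sketch of the contrast, with made-up numbers (the 10x input cost per doubling and the 26% yearly compounding of input are purely illustrative, not estimates):

```python
# Two growth regimes contrasted above; all constants are illustrative.
import math

# Regime 1: the "foom" claim. Each doubling of capability takes half as long
# as the one before, so infinitely many doublings fit into a finite time.
t, dt, capability = 0.0, 1.0, 1.0
for _ in range(30):
    t += dt
    capability *= 2
    dt /= 2
print(f"foom: capability {capability:.3g} reached by t = {t:.3f} "
      f"(every further doubling still fits before t = 2)")

# Regime 2: the curve of capability. Each successive doubling needs
# cost_ratio times more input than the last (exponentially harder), while
# the input itself compounds at a steady exponential rate. The result is a
# constant doubling time, i.e. smooth exponential growth.
cost_ratio = 10.0    # input multiplier per doubling of capability
input_growth = 1.26  # input compounds by 26% per year
doubling_time = math.log(cost_ratio) / math.log(input_growth)
print(f"curve of capability: every doubling takes {doubling_time:.1f} years, indefinitely")
```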
Strong recursion: Software designs new software to design newer software; money begets money begets more money. Think of the foom as compound interest on intelligence.
Suppose A designs B, which then designs C. Why does it follow that C is more capable than B (logically, disregarding any hardware advances made between B and C)? Alternatively, why couldn’t A have designed C initially?
It does not necessarily follow; but the FOOM contention is that once A can design a B more capable than itself, B’s increased capability will include the capability to design C, which would have been impossible for A. C can then design D, which would have been impossible for B, let alone A.
Currently, each round of technology aids in developing the next, but the feedback isn’t quite this strong.
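To make the contention concrete, here is a bare-bones sketch. The constant uplift factor is an assumption for illustration only; the whole argument turns on whether such a factor stays above 1 as capability grows:

```python
# A toy version of the A-designs-B-designs-C chain. The rule "a designer of
# capability c builds a successor of capability uplift * c" is an assumed
# stand-in for "more capable than its designer", not a model of real AI work.

def design_chain(start, uplift, generations):
    """Yield the capability of each successive self-designed generation."""
    c = start
    for name in "ABCDEFG"[:generations]:
        yield name, c
        c *= uplift  # the next generation is built by, and exceeds, this one

for name, capability in design_chain(start=1.0, uplift=1.5, generations=5):
    print(f"{name}: capability {capability:.2f}")

# With uplift > 1 the chain climbs without bound; with uplift <= 1 it stalls,
# which is the case where A could have designed C (or anything better) itself.
```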
As per khafra’s post, though I would add that it looks likely: after all, the fact that we humans are capable of building any kind of AI at all is proof that designing intelligent agents is something intelligent agents can do. It would be surprising if there were some hard cap on how intelligent an agent you can make, such as one that topped out at exactly your own level or below.