But, as EY points out, there may be some upcoming aspects of AI technology evolution which can have the same dramatic effects. Not self-modifying code, but maybe high bandwidth networks or ultra-fine-grained parallel processing. Eliezer hasn’t convinced me that a FOOM is inevitable, but you have come nowhere near convincing me that another one is very unlikely.
High-bandwidth networks and parallel processing have fit perfectly well within the curve of capability thus far.
If you aren’t convinced yet that another one is very unlikely, okay, what would convince you? Formal proof of a negative isn’t possible outside pure mathematics.
I’m open to the usual kinds of Bayesian evidence. Let’s see: H is “there will be no more FOOMs”. What do you have in mind as a good E? How will the world be observably different if you are right, from how it will look if you are wrong?
Point out such an E, and then observe it, and you may sway me to your side.
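For what it’s worth, the update being asked for here can be made concrete. This is only a sketch of the odds-form arithmetic; the hypothesis label comes from the comment above, but every probability in it is a made-up placeholder:

```python
# Odds-form Bayesian update: posterior odds = prior odds * likelihood ratio.
# Every number below is an illustrative placeholder, not an estimate
# anyone in the thread actually proposed.

def posterior_odds(prior_odds, p_e_given_h, p_e_given_not_h):
    """Update the odds on H after observing evidence E."""
    return prior_odds * (p_e_given_h / p_e_given_not_h)

# H = "there will be no more FOOMs". Suppose E = "closed-loop optimization
# stays on its exponential trend for another decade", and we judge E three
# times likelier under H than under not-H.
odds = posterior_odds(1.0, 0.9, 0.3)  # prior odds 1:1, likelihood ratio 3
prob = odds / (1 + odds)
print(round(odds, 2), round(prob, 2))  # 3.0 0.75
```

A single such E is rarely decisive; the point is just that “what would convince you?” has a standard quantitative form.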
Removing my tongue from my cheek, I will make an observation. I’m sure that you have heard the statement “Extraordinary claims require extraordinary evidence.” Well, there is another kind of claim that requires extraordinary evidence. Claims of the form “We don’t have to worry about that, anymore.”
IWICUTT.
(I Wish I Could Upvote This Twice.)
If I’m wrong, then wherever we can make use of some degree of recursive self-improvement—to the extent that we can close the loop, feed the output of an optimization process into the process itself, as in e.g. programming tools, chip design and Eurisko—we should be able to break the curve of capability and demonstrate sustained faster-than-exponential improvement.
If I’m right, then the curve of capability should hold in all cases, even when some degree of recursive self-improvement is in operation, and steady exponential improvement should remain the best we can get.
All the evidence we have thus far supports the latter case, but I’m open to—and would very much like—demonstrations of the contrary.
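The dichotomy in the two paragraphs above can be put in a toy model (my own illustration, not a model anyone in the thread proposed): let capability C feed back into its own growth rate as dC/dt = k·C^p. With p = 1 you get steady exponential growth and a constant doubling time, i.e. the curve of capability holding; with any p > 1 the doubling times shrink and growth diverges in finite time, i.e. a FOOM. A numerical sketch:

```python
# Toy recursive-self-improvement model: dC/dt = k * C**p.
# p = 1  -> plain exponential growth: doubling time is constant.
# p > 1  -> superexponential growth: each doubling arrives faster than
#           the last, diverging in finite time. Parameters are arbitrary.

def simulate(p, k=1.0, c0=1.0, dt=1e-4, t_max=2.0):
    """Forward-Euler integration; returns (time, capability) samples."""
    t, c, samples = 0.0, c0, []
    while t < t_max and c < 1e9:
        samples.append((t, c))
        c += k * c ** p * dt
        t += dt
    return samples

def doubling_times(samples):
    """Times at which capability first reaches 2x, 4x, 8x, ... its start."""
    times, target = [], 2.0 * samples[0][1]
    for t, c in samples:
        if c >= target:
            times.append(t)
            target *= 2.0
    return times

exp_times = doubling_times(simulate(p=1.0, t_max=3.0))  # roughly evenly spaced
foom_times = doubling_times(simulate(p=1.5))            # gaps keep shrinking
```

With p = 1 the gaps between successive doublings stay near ln 2 / k; with p = 1.5 (and k = c0 = 1) they shrink toward a finite-time singularity at t = 2. The empirical question in the thread is whether any real closed loop behaves like p > 1.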
I address that position here and here.
Then maybe I misunderstood your claim, because I thought you had claimed that there are no kinds of recursive self-improvement that break your curve. Or at least no kinds of recursive self-improvement that are relevant to a FOOM.
To be honest, my intuition is that recursive self-improvement opportunities generating several orders of magnitude of improvement must be very rare. And where they do exist, there probably must be significant “overhang” already in place to make them FOOM-capable. So a FOOM strikes me as unlikely. But your posting here hasn’t led me to consider it any less likely than I had before.
Your “curve of capability” strikes me as a rediscovery of something economists have known about for years—the “law of diminishing returns”. Since my economics education took place more than 40 years ago, “diminishing returns” is burnt deep into my intuitions. The trouble is that “diminishing returns” is not really a law. It is, like your capability curve, more of a rough empirical observation—though admittedly one with lots of examples to support it.
What I hear is that, since I got my degree and formed my intuitions, economists have been exploring the possibility of “increasing returns”. And they find examples of it practically everywhere that people are climbing up a new-technology learning curve. In places like electronics and biotech. They are seeing the phenomenon in almost every new technology. Even without invoking recursive self-improvement. But so far, not in AI. That seems to be the one new industry that is still stumbling around in the dark. Kind of makes you wonder what will happen when you guys finally find the light switch.
That is what I’m claiming, so if you can demonstrate one, you’ll have falsified my theory.
I don’t think so, I think that’s a different thing. In fact...
… I would’ve liked to use the law of increasing returns as a positive example, but I couldn’t find a citation. The version I remember reading about (in a paper book, back in the 90s) said that every doubling of the number of widgets you make lets you improve the process/cut costs/whatever by a certain amount; and that this was remarkably consistent across industries—so once again we have the same pattern: double the optimization effort and you get a certain degree of improvement.
I think I read that, too, and the claimed improvement was 20% with each doubling.
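If memory serves both commenters correctly, this is the classic “experience curve”, usually credited to Bruce Henderson at BCG: a fixed percentage cost reduction per doubling of cumulative output, which is exactly a power law. The arithmetic, with the 20% figure (an 80% “progress ratio”) plugged in for illustration:

```python
import math

# Experience-curve arithmetic: if each doubling of cumulative output cuts
# unit cost by 20%, cost is a power law in cumulative volume:
#   cost(n) = cost(n0) * (n / n0) ** log2(0.8)

def unit_cost(n, n0=1.0, c0=100.0, progress_ratio=0.8):
    """Unit cost after cumulative output n, given cost c0 at output n0."""
    return c0 * (n / n0) ** math.log2(progress_ratio)

print(round(unit_cost(2), 2))     # 80.0  (one doubling: 20% cheaper)
print(round(unit_cost(4), 2))     # 64.0  (two doublings)
print(round(unit_cost(1024), 2))  # 10.74 (ten doublings: 0.8**10 of c0)
```

On a log-log plot this is a straight line of slope log2(0.8) ≈ -0.32.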
That would look linear on a log-log graph. A power-law response.
I understood rwallace to be hawking a “curve of capability” which looks linear on a semi-log graph. A logarithmic response.
Of course, one of the problems with rwallace’s hypothesis is that it becomes vague when you try to quantify it. “Capability increases by the same amount with each doubling of resources” can be interpreted in two ways. “Same amount” meaning “same percentage”, or meaning literally “same amount”.
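The two readings diverge quickly. A sketch with made-up numbers (capability 100 at unit resources, and “+10% per doubling” vs “+10 units per doubling”, chosen only for illustration):

```python
import math

# Two readings of "capability increases by the same amount per doubling
# of resources", starting from capability 100 at resources r = 1:
#   (a) same *percentage* per doubling  -> power law, linear on log-log
#   (b) same *absolute amount* per doubling -> logarithmic, linear on semi-log

def capability_power_law(r, c0=100.0, gain=0.10):
    """(a) +10% per doubling: c = c0 * (1 + gain) ** log2(r)."""
    return c0 * (1.0 + gain) ** math.log2(r)

def capability_logarithmic(r, c0=100.0, step=10.0):
    """(b) +10 units per doubling: c = c0 + step * log2(r)."""
    return c0 + step * math.log2(r)

for doublings in (1, 10, 20):
    r = 2.0 ** doublings
    print(doublings, round(capability_power_law(r), 1),
          round(capability_logarithmic(r), 1))
# After 20 doublings: about 672.7 under reading (a) vs exactly 300.0 under (b).
```

So the ambiguity matters: the two interpretations differ by more than a factor of two within 20 doublings.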
Right, to clarify, I’m saying the curve of capability is a straight line on a log-log graph, perhaps the clearest example being the one I gave of chip design, which gives repeated doublings of output for doublings of input. I’m arguing against the “AI foom” notion of faster growth than that, e.g. each doubling taking half the time of the previous one.
So this could be falsified by continuous capability curves that curve upward on a log-log graph, and your arguments in various other threads that the discussed situations result in continuous capability curves are not strong enough to support your theory.
Some models of communication equipment suggest high return rates for new devices, since the number of possible connections grows as the square of the number of people with the communication system. I don’t know if anyone has looked at this in any real detail, although I would naively guess that someone must have.
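This sounds like Metcalfe’s law: with n participants, the number of distinct pairwise connections is n(n-1)/2, which grows as n squared. A minimal check of that scaling:

```python
# Metcalfe-style scaling: with n participants, the number of distinct
# pairwise connections is n * (n - 1) / 2, i.e. O(n**2).

def pairwise_links(n):
    return n * (n - 1) // 2

print(pairwise_links(10))    # 45
print(pairwise_links(100))   # 4950
# Doubling the user base roughly quadruples the option space:
print(pairwise_links(200) / pairwise_links(100))  # about 4.02
```

Whether the *value* of a network really tracks the raw link count is contested, which may be why the high predicted return rates don’t always materialize.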