Funny how at first it seemed obvious to me what “recursive self-improvement” means, and now...
On one end of the scale, almost any improvement will indirectly help with designing AI. Even if you use the latest AI to invent a sharper pencil or a more nutritious version of soylent, it may ultimately make the AI developers 0.000001% more productive.
The other end of the scale, which I guess we could call “fully automated recursive self-improvement”, is where the AI creates the next generation of AI without any human input. Maybe with some extra requirements, such as reliability (as opposed to e.g. a 10% probability of hallucinating a solution that couldn’t possibly work, and then happily replacing itself with the “improved” version). I’m not sure whether we also require the AI to physically build the next version, or to organize the entire economy.
But the real thing, what we might reasonably call “recursive self-improvement”, is probably somewhere in between. If the AI can create a better design for some aspect of itself, then we are already there; now the question is whether it can improve all of its aspects, and whether it hits diminishing returns doing so.
That may be too strong a statement, though. Say some new tool helps improve AI legislation more than AI design; that might end up slowing down the wheel instead of spinning it faster.
I think one way of framing it is whether the improvements to itself outweigh the growing difficulty of eking out more performance. Basically, does the performance converge or diverge?
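The converge-or-diverge framing can be sketched as a toy model. To be clear, this is my own illustration, not something from the text above: the `boost` and `difficulty_growth` numbers are made up, and real self-improvement dynamics are obviously not a single geometric series.

```python
# Toy model of recursive self-improvement as a geometric series.
# Each generation adds a gain to capability; the next gain is
# multiplied by the improvement boost and divided by the growing
# difficulty of further progress. If boost consistently outpaces
# difficulty, capability diverges (takeoff); otherwise the gains
# shrink and capability converges to a plateau.

def simulate(boost: float, difficulty_growth: float,
             generations: int = 50) -> float:
    capability = 1.0
    gain = 0.5  # size of the first self-improvement (arbitrary)
    for _ in range(generations):
        capability += gain
        gain *= boost / difficulty_growth
    return capability

# Difficulty grows faster than the boost: gains shrink geometrically,
# so capability approaches a finite plateau (here, 2.5).
plateau = simulate(boost=1.0, difficulty_growth=1.5)

# Boost outpaces difficulty: gains compound and capability explodes.
takeoff = simulate(boost=1.5, difficulty_growth=1.0)
```

The point of the sketch is just that the outcome flips on the ratio `boost / difficulty_growth` crossing 1, which is one way to read “does the performance converge or diverge”.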