I suppose I expect recursive self-improvement to play out over the course of months, not years. And I worry groups like OpenAI are insane enough to pursue recursive self-improvement as an explicit engineering goal. (Altman seems to be a moral realist and has explicitly said he thinks the orthogonality thesis is false.) From the outside it will appear instant, because there will be a perceived discontinuity the moment it becomes obvious that the system has achieved a decisive strategic advantage.
Well, again, remember that a nuclear device is a critical mass of weapons-grade material.
Anything less than weapons-grade and nothing happens.
Anything less than a sudden, explosive assembly of the materials and the device heats itself up and blasts itself apart with a sub-kiloton yield.
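Just to make the threshold behaviour concrete, here's a toy sketch (illustrative numbers, not real weapons physics): each fission generation multiplies the neutron population by an effective factor k, and everything hinges on whether k sits above or below 1.

```python
# Toy chain-reaction sketch: each generation multiplies the neutron population
# by an effective factor k. Below k = 1 the reaction dies out, just above 1 it
# creeps along, well above 1 it runs away within a handful of generations.
# Purely illustrative numbers, not real weapons physics.
def run_chain(k, generations=80, start=1.0):
    n = start
    for _ in range(generations):
        n *= k
    return n

for k in (0.95, 1.0, 1.5, 2.0):
    print(f"k = {k}: population after 80 generations ~ {run_chain(k):.3g}")
```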
So, analogy-wise: current LLMs can “babble” out code that sometimes even works, but they are not trained with RL that selects for correct and functional code.
Self-improvement by code generation isn't yet possible.
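For concreteness, here's a minimal sketch of what selecting for functional code could look like: run candidate implementations against tests and keep only the ones that pass, with pass/fail serving as the reward signal. The candidates below are hardcoded stand-ins for model samples, and nothing is sandboxed, so treat it as illustration only.

```python
# Minimal sketch of execution-based selection: run candidate implementations
# against tests and keep the ones that actually work. In an RL setup the
# pass/fail result would become the reward; here the candidates are hardcoded
# stand-ins for model samples.
CANDIDATES = [
    "def add(a, b): return a - b",        # babbled, wrong
    "def add(a, b): return a + b",        # correct
    "def add(a, b): return a + b + 1",    # babbled, wrong
]

def passes_tests(src):
    scope = {}
    try:
        exec(src, scope)                  # no sandboxing -- toy example only
        fn = scope["add"]
        return fn(2, 3) == 5 and fn(-1, 1) == 0
    except Exception:
        return False

survivors = [src for src in CANDIDATES if passes_tests(src)]
print(f"{len(survivors)}/{len(CANDIDATES)} candidates pass:", survivors)
```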
Other groups have tried making neural networks composable and using one neural-network-based agent to design others. That approach also isn't good enough for recursion yet, but it's how AutoML works.
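Roughly, that outer-loop search looks like the toy below, stripped way down: plain random search standing in for the learned controller, and a tiny polynomial model standing in for the network being designed. The task and hyperparameter ranges are made up for illustration.

```python
# Toy "AutoML" loop: an outer search process proposes configurations for an
# inner model, trains it, and keeps whatever scores best on held-out data.
# Random search stands in for a learned controller; a polynomial ridge
# regression stands in for the designed network. Illustrative only.
import random
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = np.sin(3 * x) + 0.1 * rng.normal(size=x.size)   # toy regression task
x_tr, y_tr, x_va, y_va = x[:150], y[:150], x[150:], y[150:]

def train_and_score(degree, ridge):
    """Inner loop: fit a polynomial ridge regression, return validation MSE."""
    X_tr = np.vander(x_tr, degree + 1)
    X_va = np.vander(x_va, degree + 1)
    w = np.linalg.solve(X_tr.T @ X_tr + ridge * np.eye(degree + 1), X_tr.T @ y_tr)
    return float(np.mean((X_va @ w - y_va) ** 2))

best = None
for _ in range(30):                       # outer loop: propose configurations
    cfg = {"degree": random.randint(1, 12), "ridge": 10 ** random.uniform(-6, 0)}
    score = train_and_score(**cfg)
    if best is None or score < best[0]:
        best = (score, cfg)

print("best config:", best[1], "val MSE:", round(best[0], 4))
```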
Basically, our "enrichment" isn't high enough, so nothing will happen. The recursion quenches itself before it can start; the first-generation output isn't even functional.
But yes, at some future point in time it WILL be strong enough and crazy shit will happen. I mean, think about the nuclear example: all those decades of discovering nuclear physics, fission, the chain reaction, building a nuclear reactor, purifying the plutonium... all that time, and the interesting event happened in milliseconds.