Why did past-MIRI talk so much about recursive self-improvement? Was it because Eliezer was super confident that humanity was going to get to AGI via the route of a seed AI that understands its own source code?
I doubt it. My read is that Eliezer did have “seed AI” as a top guess, back before the deep learning revolution. But I don’t think that’s the main source of all the discussion of recursive self-improvement in the period around 2008.
Rather, my read of the history is that MIRI was operating in an argumentative environment where:
Ray Kurzweil was claiming things along the lines of ‘Moore’s Law will continue into the indefinite future, even past the point where AGI can contribute to AGI research.’ (The Five Theses, in 2013, is a list of the key things Kurzweilians were getting wrong.)
Robin Hanson was claiming things along the lines of ‘The power is in the culture; superintelligences wouldn’t be able to outstrip the rest of humanity.’
The memetic environment was one where most people were either ignoring the topic altogether, or asserting ‘AGI cannot fly all that high’, or asserting ‘AGI flying high would be business-as-usual (e.g., with respect to growth rates)’.
The weighty conclusion of the “recursive self-improvement” meme is not “expect seed AI”. The weighty conclusion is “sufficiently smart AI will rapidly improve to heights that leave humans in the dust”.
Note that this conclusion is still, to the best of my knowledge, completely true, and recursive self-improvement is a correct argument for it.
I disagree here. Neither foom nor a hard takeoff follows from recursive self-improvement.
Foom induced by recursive self-improvement implies that marginal returns to cognitive reinvestment are increasing, not diminishing, across the relevant cognitive intervals. I don’t think that position has been well established.
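To make the curvature point concrete, here is a minimal toy model (my own illustration; the functional form, the exponent k, and all constants are assumptions, not anything taken from the original arguments). Capability is reinvested each step, and whether the trajectory fooms or fizzles turns entirely on whether k is above or below 1:

```python
# Toy recursion for self-improvement: capability I is reinvested each step.
# k > 1 models increasing marginal returns to cognitive reinvestment;
# k < 1 models diminishing returns.  All numbers are purely illustrative.

def run(k, steps=60, capability=1.0, rate=0.1):
    trajectory = [capability]
    for _ in range(steps):
        capability = capability + rate * capability**k  # reinvest current capability
        trajectory.append(capability)
        if capability > 1e12:  # treat crossing this threshold as "foom" in the toy model
            break
    return trajectory

increasing = run(k=1.5)   # increasing marginal returns: blows up within a few dozen steps
diminishing = run(k=0.5)  # diminishing marginal returns: roughly quadratic growth, no foom

print(f"k=1.5: {len(increasing) - 1} steps, final capability {increasing[-1]:.3g}")
print(f"k=0.5: {len(diminishing) - 1} steps, final capability {diminishing[-1]:.3g}")
```

The toy model is not evidence either way; it just shows that “RSI happens” underdetermines the outcome until you pin down the returns curve.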
Furthermore, even if marginal returns to cognitive reinvestment are increasing, it does not necessarily follow that marginal returns to real-world capability from cognitive investment are also increasing across the relevant intervals. For example, marginal returns to predictive accuracy in a given domain diminish, and they diminish at an exponential rate (this seems to be broadly true across all relevant cognitive intervals).
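To illustrate the shape of that claim with one stylized functional form (an assumption chosen for illustration, not something established by the original arguments): suppose accuracy in a domain approaches a Bayes-optimal ceiling $a^*$ as cognitive investment $c$ grows,

$$
a(c) = a^* - (a^* - a_0)\,e^{-\lambda c}
\qquad\Longrightarrow\qquad
\frac{da}{dc} = \lambda\,(a^* - a_0)\,e^{-\lambda c},
$$

so each additional unit of investment buys exponentially less accuracy, even while other measures of capability may keep growing.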
This is not necessarily to criticise Yudkowsky’s arguments in the context in which they appeared in 2008–2013. I’m replying as a LessWronger who has started thinking about takeoff dynamics in more detail and is dissatisfied with those arguments and the numerous implicit assumptions Yudkowsky made that I find unpersuasive when laid out explicitly.
I mention this so that it’s clear that I’m not pushing back against the defence of RSI, but expressing my dissatisfaction with the arguments in favour of RSI as presented.
This is a very good point, DragonGod. I agree that the necessary premise of increasing marginal returns to cognitive reinvestment has not been convincingly (publicly) established. I fear that publishing a sufficiently convincing argument (which would likely need to include empirical evidence from functional systems) would be tantamount to handing out the recipe for this RSI AI.