The following seems a bit unclear to me and might warrant an update, if I am not alone in this assessment:
Section 3 finds that even without a software feedback loop (i.e. “recursive self-improvement”), [...], then we should still expect very rapid technological development [...] once AI meaningfully substitutes for human researchers.
I might just be reading the word "without" too literally, but to me "AI meaningfully substituting for human researchers" implies at least a weak form of recursive self-improvement. That is, I would be quite surprised if the world allowed AI to become as smart as human researchers but no smarter afterwards.