I agree with this, but I think you have to remember that many things with diminishing returns also have accelerating returns earlier on.
That is to say, logistic curves are all over the place. Business growth, practicing a new instrument, functionality of a software project over time, learning a language through immersion...
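To make the shape concrete, here's a minimal sketch (parameters are arbitrary, purely illustrative) of how a logistic curve's marginal gains first accelerate and then diminish:

```python
import numpy as np

# Logistic curve f(t) = L / (1 + exp(-k * (t - t0))).
# L, k, t0 are made-up values chosen just to show the shape.
L, k, t0 = 100.0, 1.0, 5.0
t = np.arange(0, 11)
f = L / (1 + np.exp(-k * (t - t0)))

# Marginal return per unit of "effort": rises until the
# inflection point at t0, then falls off symmetrically.
gains = np.diff(f)
print(np.round(gains, 2))
```

From the inside, the early part of that curve looks like a takeoff; only later does it reveal itself as a plateau.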
It’s absolutely plausible for intelligence self-improvement to work for a few IQ points and then peter out, for a given architecture. Humans, for example, are horrible at improving their own brains (though see EURISKO, Lenat's program that rewrote its own heuristics, for a partial counterexample). But I’m skeptical that returns are always going to be so sharply diminishing, and if everyone else is improving slowly, whatever system “goes critical” first is going to be the one that matters.