One source of diminishing returns is upper limits on what is achievable. For instance, Shannon proved that there is an upper bound on the error-free communication capacity of a channel; no amount of intelligence can squeeze more error-free capacity out of a channel than this. There are also limits on what is learnable by induction alone, even with unlimited resources and unlimited time (see Kevin T. Kelly, The Logic of Reliable Inquiry). These sorts of limits indicate that an AI cannot improve its meta-cognition exponentially forever; at some point, the improvements have to level off.
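To make the Shannon example concrete, here is a minimal sketch (the function name and the example figures are illustrative, not from the original text) of the Shannon–Hartley theorem, which gives the hard ceiling on error-free throughput for an additive white Gaussian noise channel: no encoder, however clever, can reliably exceed it.

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: the maximum error-free rate, in bits
    per second, of an AWGN channel with the given bandwidth and
    linear (not dB) signal-to-noise ratio: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative numbers: a 1 MHz channel at 30 dB SNR (linear SNR = 1000)
# tops out just under 10 Mbit/s, regardless of coding ingenuity.
c = shannon_capacity(1e6, 1000)
print(f"{c / 1e6:.2f} Mbit/s")
```

The point for the argument above is that this bound is a property of the channel, not of the communicator: a smarter AI can approach the limit with better codes, but the curve of achievable improvement flattens as it nears C.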