I agree that AI cannot improve literally forever. At some point it will hit a limit, even if that limit is simply that it is already near perfect, so there is nothing left to improve, or the tiny remaining improvements are not worth their cost in resources. So, an S-curve it is, in the long run.
But for practical purposes, the bottom part of an S-curve looks much like an exponential. So if we happen to be near that bottom, it doesn't matter that the AI will hit some fundamental limit on self-improvement around 2200 AD if it has already wiped out humanity by 2045.
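To make the "bottom looks exponential" point concrete, here is a quick sketch using the standard logistic curve; the symbols $L$, $k$, and $t_0$ are my notation for the ceiling, growth rate, and midpoint, not anything from the argument above:

\[
f(t) = \frac{L}{1 + e^{-k(t - t_0)}}
\]

For $t \ll t_0$, the term $e^{-k(t - t_0)}$ dominates the denominator, so

\[
f(t) \approx L \, e^{k(t - t_0)},
\]

which is indistinguishable from pure exponential growth. The bend that marks an S-curve only shows up as $t$ approaches $t_0$, so exponential-looking progress today tells you little about how far away the ceiling is.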
So the real question is which part of the S-curve we are on now, and whether the AI explosion hits diminishing returns soon enough, i.e. before the things AI doomers are afraid of can happen. If the returns only diminish later, that is small consolation.