If you define an improvement in intelligence as something like optimizing a set of algorithms, such that you can do more, or the same, with less compute (approaching maximum compression), then the chances of a hard takeoff look grim indeed.
Experience with optimizing systems such as genetic algorithms and genetic programming suggests that the rate of improvement in performance decreases over time: progress becomes more and more tortuous, with small improvements occurring less and less frequently. There may be occasional discoveries of new vistas on the fitness landscape, but these are not common as we march towards optimum compression.
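To make that diminishing-returns pattern concrete, here's a minimal sketch of a toy genetic algorithm. The fitness function (a made-up "OneMax" problem: count the 1-bits in a bit string), the population size, and the mutation rate are all illustrative assumptions, not anything from a real system, but running it shows the characteristic shape: big fitness gains early on, then smaller and smaller ones as the population closes in on the optimum.

```python
import random

# Toy genetic algorithm on a made-up "OneMax" fitness function.
# All parameters here are illustrative choices.
GENOME_LEN = 200
POP_SIZE = 50
MUTATION_RATE = 1.0 / GENOME_LEN
GENERATIONS = 300

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Fitness is simply the count of 1-bits; the global optimum is GENOME_LEN.
    return sum(genome)

def mutate(genome):
    # Flip each bit with a small independent probability.
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genome]

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def tournament(population, k=3):
    # Select the fittest of k randomly chosen individuals.
    return max(random.sample(population, k), key=fitness)

population = [random_genome() for _ in range(POP_SIZE)]
previous_best = 0
for gen in range(GENERATIONS):
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(POP_SIZE)]
    best = max(fitness(g) for g in population)
    if gen % 25 == 0:
        # The gain per reporting interval shrinks as the optimum is approached.
        print(f"gen {gen:3d}  best fitness {best:3d}  gain since last report {best - previous_best}")
        previous_best = best
```

Typical output shows double-digit gains in the first few reports and gains of one or zero near the end, which is the "running out of steam" behaviour described above.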
Thus far nobody seems to have been able to solve this problem: building a general optimizer that doesn't run out of steam over time. To show that a hard takeoff is possible, at least in principle, it will be necessary to demonstrate that you can devise an optimizer which doesn't run out of steam, and in fact does the opposite.
Here’s Bob Mottram making much the same point as I just made: