My point is that that heuristic is not good. This obviously doesn’t mean that reversing the heuristic would give you good results (reverse stupidity is not intelligence and so on). What one needs is a different set of heuristics.
If you extrapolate capability graphs in the most straightforward way, you get the result that AGI should arrive around 2027-2028. Scenario analyses (like the ones produced by Kokotajlo and Aschenbrenner) tend to converge on the same result.
An effective cancer cure will likely require superintelligence, so I would expect one around 2029, assuming alignment gets solved.
We mostly solved egg frying and laundry folding last year with Aloha and Optimus, which were among the longest-standing problems in robotics. So human-level robots in 2024 would actually have been an okay prediction. Actual human-level performance probably requires human-level intelligence, so 2027.
Very interesting, thanks! On a quick skim, I don’t think I agree with the claim that LLMs have never done anything important. I know for a fact that they have written a lot of production code for a lot of companies, for example. And I personally have read AI texts funny or entertaining enough to reflect back on, and AI art beautiful enough to admire even a year later. (All of this is highly subjective, of course. I don’t think you’d find the same examples impressive.) If you don’t think any of that qualifies as important, then I think your definition of important may be overly narrow.
But I’ll have to look at this more deeply later.
I think this reasoning would also lead one to reject Moore’s law as a valid way to forecast future compute prices. It is in some sense “obvious” what straight lines one should be looking at: smooth lines of technological progress. I claim that just about any capability with a sufficiently “smooth”, “continuous” definition (e.g. your example of the number of open mathematical theorems solved would have to be amended to allow for partial progress and partial solutions) will tend to converge around 2027–28. Some converge earlier, some later, but that seems to be roughly the consensus for when we can expect human-level capability on nearly every task anybody’s bothered to model.
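To make the kind of extrapolation I mean concrete, here is a minimal sketch: fit a straight line to a capability metric in log space (Moore’s-law style) and solve for when the trend crosses a threshold. The data points and the “human-level” threshold below are made up purely for illustration, not real benchmark numbers.

```python
import math

# Hypothetical illustrative data: a "capability" score (arbitrary units)
# measured yearly, growing roughly exponentially. These numbers are
# invented for illustration, not taken from any real benchmark.
years = [2020, 2021, 2022, 2023, 2024]
capability = [1.0, 2.1, 4.2, 8.5, 16.8]

# Fit a straight line in log space via least squares, i.e. assume
# exponential growth in the underlying metric.
logs = [math.log2(c) for c in capability]
n = len(years)
mean_x = sum(years) / n
mean_y = sum(logs) / n
slope = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(years, logs)) / \
        sum((x - mean_x) ** 2 for x in years)
intercept = mean_y - slope * mean_x

# Extrapolate: when does the fitted trend cross a hypothetical
# "human-level" threshold of 100 (same arbitrary units)?
threshold = 100.0
crossing_year = (math.log2(threshold) - intercept) / slope
print(round(crossing_year, 1))
```

The point of the sketch is only that once you commit to a smooth metric and an exponential fit, the crossing date falls out mechanically; the real argument is over which metrics and thresholds are the right ones.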
The Mobile Aloha website: https://mobile-aloha.github.io/
The front page has a video of the system autonomously cooking a shrimp and other examples. It is still quite slow and clumsy, but being able to complete tasks like this at all is already light years ahead of where we were just a few years ago.
Oh, I know. It’s normally 5–20 years from lab to home. My 2027 prediction is for a research robot being able to do anything a human can do in an ordinary environment, not necessarily a mass-producible, inexpensive product for consumers or even most businesses. But obviously the advent of superintelligence, under my model, is going to accelerate those usual 5–20 year timelines quite a bit, so it can’t be much after 2027 that you’ll be able to buy your own android. Assuming “buying things” is still a thing, assuming the world remains recognizable for at least some years, and so on.