In my reading, I agree that the “Slow” scenario is pretty much the slowest it could be, since it posits an AI winter starting right now and nothing beyond making better use of what we already have.
Your “Fast” scenario is comparable to my “median” scenario: we do continue to make progress, but at a slower rate than in the last two years. We don’t get AGI capable of being transformative in the next 3 years, despite going from somewhat comparable to a small child in late 2022 (though better in some narrow ways than an adult human) to better capabilities than the average adult human in almost all respects in late 2024 (and better in some important capabilities than 99.9% of humans).
My “Fast” scenario is one in which internal deployment of AI models coming into existence in early-to-mid 2025 allows researchers to make large algorithmic and training improvements in the next generation (by late 2025), which definitely qualifies as AGI. Those models then help accelerate the pace of research: better understanding of how intelligence arises leads to further major algorithmic and training improvements, and to indisputably superhuman ASI in 2026.
This Fast scenario’s ASI may not be economically transformative by then, because human economies are slow to move. I wouldn’t bet on 2027 being anything like 2026 in such a scenario, though.
I do have faster scenarios in mind too, but they are far more speculative. E.g. ones in which the models we’re seeing now are already heavily sandbagging and actually superhuman, or in which other organizations have such models privately.
“better capabilities than the average adult human in almost all respects in late 2024”
I see people say things like this, but I don’t understand it at all. The average adult human can do all sorts of things that current AIs are hopeless at, such as planning a weekend getaway. Have you, literally you personally today, automated 90% of the things you do at your computer? If current AI has better capabilities than the average adult human, shouldn’t it be able to do most of what you do? (Setting aside anything where you have special expertise, but we all spend big chunks of our day doing things where we don’t have special expertise – replying to routine emails, for instance.)

FWIW, I touched on this in a recent blog post: https://amistrongeryet.substack.com/p/speed-and-distance.
My description “better capabilities than the average adult human in almost all respects” differs from “would be capable of running most people’s lives better than they could”. You appear to be taking these as synonymous.
The economically useful question is more along the lines of “what fraction of the time spent on tasks could a business expect to be able to delegate to these agents for free, vs. a median human whom they would have to employ at socially acceptable wages” (taking into account supervision needs and other overheads in each case).
My guess is currently “more than half, but probably not yet 80%”. There are still plenty of tasks that a supervised 120 IQ human can do that current models can’t. I do not think there will remain many tasks that a 100 IQ human can do with supervision that a current AI model cannot do with the same degree of supervision, after adjusting processes to suit the differing strengths and weaknesses of each.
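To make that comparison concrete, here is a minimal toy sketch of the per-hour cost framing I have in mind. Every number in it (wages, supervision fractions, the 0.6 delegable share) is a made-up placeholder for illustration, not an estimate I’m defending:

```python
# Toy sketch of the delegation question above. All numbers are
# hypothetical placeholders, not real estimates.

def effective_cost_per_task_hour(base_cost, supervision_fraction, supervisor_rate):
    """Cost of one hour of task work, including supervision overhead.

    base_cost: direct cost per hour of the worker or agent doing the task
    supervision_fraction: supervisor hours required per task hour
    supervisor_rate: hourly cost of the supervisor
    """
    return base_cost + supervision_fraction * supervisor_rate

# A median human at a socially acceptable wage vs. an AI agent that is
# roughly free to run but still needs some human oversight.
human_cost = effective_cost_per_task_hour(base_cost=20.0,
                                          supervision_fraction=0.10,
                                          supervisor_rate=40.0)
agent_cost = effective_cost_per_task_hour(base_cost=0.0,
                                          supervision_fraction=0.15,
                                          supervisor_rate=40.0)

# The "fraction of time on tasks" is then whatever share of task hours
# the agent can handle at all; on that share it wins on cost.
delegable_fraction = 0.6  # placeholder for "more than half, not yet 80%"
print(f"human: ${human_cost:.2f}/h, agent: ${agent_cost:.2f}/h, "
      f"delegable share: {delegable_fraction:.0%}")
```

The point of the sketch is only that “for free” still carries a supervision cost, so the comparison is between two kinds of overhead, not between zero and a wage.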
Your test does not measure what you think it does. There are people smarter than me whom I could not and would not trust to make decisions in my life about me (or my computer). So no. (Also note that I am very much not of average capability, and likewise for most participants on LessWrong.)
I am certain that you also would not take a random person in the world of median capability and have them do 90% of the things you do with your computer for you, even for free. Not without a lot of screening and extensive training, and probably not even then.
However, it would not take much better reliability for other people to create economically valuable niches for AIs with such capabilities. It would take quite a long time, but even with zero increase in capability I think AI would eventually be a major economic factor replacing human labour. Not quite transformative, but close.