I do roughly agree with your predictions, except that I rate the economic impact in general to be lower. Many headlines, much handwringing, but large changes won’t materialize in a way that matters.
To put my main objection succinctly, I simply don’t see why AGI would follow soon from your 2026 world. Can you walk me through it?
OK, well, you should retract your claim that the median LW timeline will soon start to clash with reality then! It sounds like you think reality will look basically as I predicted! (I can’t speak for all of LW of course but I actually have shorter timelines than the median LWer, I think.)
Re AGI happening in 2027 in my world: Yep good question. I wish I had had the nerve to publish my 2027 story. A thorough answer to your question will take hours (days?) to write, and so I beg pardon for instead giving this hasty and incomplete answer:
--For R&D, when I break down the process that happens at AI labs, the process that produces a steady stream of better algorithms, it sure seems like there are large chunks of that loop that can be automated by the kinds of coding-and-research-assistant-bots that exist by 2026 in my story. Plus there are a few wild cards besides that could accelerate R&D still further. I actually think completely automating the process is likely, but even if that doesn’t happen, a substantial speedup would be enough to reach the next tier of improvements, which would then get us to the tier after that, and so on.
--For takeover, the story is similar. I think about what sorts of skills/abilities an AI would need to take over the world, e.g. it would need to be APS-AI as defined in the Carlsmith report on existential risk from power-seeking AI. Then I think about whether the chatbots of 2026 will have all of those skills, and it seems like the answer is yes.
--Separately, I struggle to think of any important skill/ability that isn’t likely to happen by 2026 in this story. Long-horizon agency? True understanding? General reasoning ability? The strongest candidate is the ability to control robots in messy real-world environments, but alas that’s not a blocker: even if AIs can’t do that, they can still accelerate R&D and take over the world.
What do you think the blockers are—the important skills/abilities that no AI will have by 2026?
I retract the claim in the sense that it was a vague statement that I didn’t expect to be taken literally, which I should have made clearer! But it’s you who operationalized “a few years” as 2026 and “the median less wrong view” as your view.
Anyway, I think I see the outline of our disagreement now, but it’s still kind of hard to pin down.
First, I don’t think that AIs will be put to unsupervised use in any domain where correctness matters, i.e., given fully automated access to valuable resources, like money or compute infrastructure. The algorithms that currently do this have a very constrained set of actions they can take (e.g. an AI chooses an ad to show out of a database of possible ads), and this will remain so.
Second, perhaps I didn’t make clear enough that I think all of the applications will remain in this twilight of almost working, showing some promise, etc., but not actually being deployed (that’s what I meant by the economic impact remaining small). So: more thinkpieces about what could happen (with isolated, splashy examples), rather than things actually happening.
Third, I don’t think AIs will be capable of performing tasks that require long attention spans, or that trade off multiple complicated objectives against each other. With current technology, I see AIs constrained to be used for short, self-contained tasks only, with a separate session for each task.
Does that make the disagreement clearer?
I stand by my decision to operationalize “a few years” as 2026, and I stand by my decision to use my view as a proxy for the median LW view: you were claiming that the median LW view was too short-timelinesy and would soon clash with reality, yet I have even shorter timelines than the median LW view and (you now backtrack and claim) my view won’t soon clash with reality.
Thank you for the clarification of your predictions! It definitely helps, but unfortunately I predict that goalpost-moving will still be a problem. What counts as “domain where correctness matters?” What counts as “very constrained set of actions?” Would e.g. a language-model-based assistant that can browse the internet and buy things for you on Amazon (with your permission of course) be in line with what you expect, or violate your expectations?
What about the applications that I discuss in the story, e.g. the aforementioned smart buyer assistant, the video-game-companion-chatbot, etc.? Do they not count as fully working? Are you predicting that there’ll be prototypes but no such chatbot with more than, say, 100,000 daily paying users?
(Also, what about Copilot? Isn’t it already an example of an application that genuinely works, and isn’t just in the twilight zone?)
What counts as a long attention span? 1000 forward passes? A million? What counts as trading off multiple complicated objectives against each other, and why doesn’t ChatGPT already qualify?
Mmm, I would say the general shape of your view won’t clash with reality, but the magnitude of the impact will.
It’s plausible to me that a smart buyer will go and find the best deal for you when you tell it to buy laptop model X. It’s not plausible to me that you’ll be able to instruct it “buy an updated laptop for me whenever a new model comes out that is good value and sufficiently better than what I already have,” and then let it do its thing completely unsupervised (with direct access to your bank account). That’s what I mean by multiple complicated objectives.
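To make that contrast concrete, here is a deliberately toy sketch (every function in it is a hypothetical stand-in invented for illustration, not a real assistant API): the first half is the bounded, human-approved request; the second half is the standing, unsupervised policy that has to keep judging “good value,” “sufficiently better,” and “whenever” on its own while spending directly from the account.

```python
from dataclasses import dataclass

@dataclass
class Laptop:
    model: str
    price: float
    score: float  # stand-in quality metric

# Hypothetical stand-ins for the assistant's capabilities, for illustration only.
def find_best_deal(model: str) -> Laptop:
    """Bounded task: search listings for one named model."""
    return Laptop(model=model, price=899.0, score=7.5)

def new_releases() -> list[Laptop]:
    """Open-ended task: watch the market for newly released models."""
    return [Laptop(model="X2", price=1099.0, score=8.2)]

def buy(laptop: Laptop) -> None:
    print(f"Purchased {laptop.model} for ${laptop.price:.0f}")

# 1) The plausible version: one-shot request, human approves before money moves.
deal = find_best_deal("laptop X")
if input(f"Buy {deal.model} for ${deal.price:.0f}? [y/n] ") == "y":
    buy(deal)

# 2) The implausible version: a standing, unsupervised policy. The agent alone
#    decides what "good value" and "sufficiently better" mean, and it spends
#    from the user's account with no approval step.
current = Laptop(model="X1", price=0.0, score=7.0)
for laptop in new_releases():
    good_value = laptop.price / laptop.score < 150.0         # agent's own threshold
    sufficiently_better = laptop.score - current.score > 1.0
    if good_value and sufficiently_better:
        buy(laptop)  # direct access to the money, nobody checks
        current = laptop
```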
As for what counts as a “domain where correctness matters”: something that goes beyond current widespread use of AI such as spam-filtering. Spam-filtering (or selecting ads on Facebook, or flagging hate speech, etc.) is a domain where the AI is doing a huge number of identical tasks, and a certain % of wrong decisions is acceptable. One wrong decision won’t tank the business. Each copy of the task is done in an independent session (no memory).
An example application where that doesn’t hold is putting the AI in charge of ordering all the material inputs for your factory. Here, a single stupid mistake (not buying something because it expects the price to go down in the future, replacing one product with another, misinterpreting seasonal cycles) will lead to a catastrophic stop of the entire operation.
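A compact sketch of the structural difference (the keyword check below is a trivial stand-in for a learned classifier, used purely for illustration): in the spam-filtering pattern, every item is an independent, memoryless call, and a few wrong labels cost a few misfiled emails rather than halting anything.

```python
# Acceptable-today pattern: a large batch of identical, independent calls,
# where a small error rate is tolerable. A keyword check stands in for a
# real spam classifier here.
def looks_like_spam(message: str) -> bool:
    return "free money" in message.lower()

inbox = [
    "Quarterly report attached",
    "FREE MONEY!!! click now",
    "Lunch on Thursday?",
]

for msg in inbox:
    # Each message is its own session: no state carries over, and one wrong
    # label misfiles one email rather than stopping a factory.
    print("spam" if looks_like_spam(msg) else "ham", "-", msg)
```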
As for Copilot: it is not autonomous. There’s a human tightly integrated into everything it’s doing. And the jury is still out on whether it works, i.e., do we have anything more than some programmers’ self-reports to substantiate that it increases productivity? Even if it does work, it’s just a productivity tool for humans, not something that replaces humans at their tasks directly.
A distinction which makes no difference. Copilot-like models are already being used in autonomous code-writing ways, such as AlphaCode which executes generated code to check against test cases, or evolving code, or LaMDA calling out to a calculator to run expressions, or ChatGPT writing and then ‘executing’ its own code (or writing code like SVG which can be interpreted by the browser as an image), or Adept running large Transformers which generate & execute code in response to user commands, or the dozens of people hooking up the OA API to a shell, or… Tool AIs want to be agent AIs.
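For readers who haven’t seen the pattern being gestured at, here is a minimal, self-contained sketch of the AlphaCode-style “generate, execute, filter” loop; the model call is stubbed out with canned candidate programs purely for illustration (in reality it would be a call to a code-generation model), but the harness itself runs model-written code and keeps whatever passes the tests, with no human reading the code in between.

```python
# Toy harness for the "generate candidate programs, execute them, keep the
# ones that pass the tests" pattern. The "model" below is a stub returning
# canned candidates; only the harness logic is the point.

def model_propose_solutions(problem: str, n: int) -> list[str]:
    """Stub standing in for a code-generation model."""
    candidates = [
        "def solve(x):\n    return x + 1",  # plausible-looking but wrong
        "def solve(x):\n    return x * 2",  # correct for the tests below
    ]
    return candidates[:n]

def passes_tests(src: str, tests: list[tuple[int, int]]) -> bool:
    """Execute generated code and check it against input/output pairs."""
    namespace: dict = {}
    try:
        exec(src, namespace)  # run the model-written code, unreviewed
        return all(namespace["solve"](x) == y for x, y in tests)
    except Exception:
        return False

tests = [(1, 2), (3, 6)]  # "double the input"
candidates = model_propose_solutions("double the input", n=2)
accepted = [c for c in candidates if passes_tests(c, tests)]
print(f"{len(accepted)} of {len(candidates)} candidates pass the tests")
```

The relevant feature is that proposing, running, and filtering are all automated; once the harness is wired up, nothing in the loop waits for a person.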
(Oh, also: When I wrote the 2026 story, I did it using my own timelines, which had a median of something like 2029. And I had trends to extrapolate, underlying models, etc. Also: bio-anchors-style models, when corrected to have better settings of the various inputs, likewise yield a median of something like 2029. In fact that’s why my median was what it was. So I’d say that multiple lines of evidence are converging.)
My expectations are more focused around the parallel paths of Reflective General Reasoning and Recursive Self-Improvement. I think that both of these paths have thresholds beyond which there is a mode shift to a much faster (and accelerating) development pace, and that we are pretty close to both of these thresholds.