@Daniel Kokotajlo it looks like you expect 1000x-energy 4 years after 99%-automation. I thought we get fast takeoff, all humans die, and 99% automation at around the same time (but probably in that order) and then get massive improvements in technology and massive increases in energy use soon thereafter. What takes 4 years?
(I don’t think the part after fast takeoff or all humans dying is decision-relevant, but maybe resolving my confusion about this part of your model would help illuminate other confusions too.)
Good catch. Let me try to reconstruct my reasoning:
I was probably somewhat biased towards a longer gap because I knew I’d be discussing with Ege, who is very skeptical (I think?) that even a million superintelligences in control of the entire human society could whip it into shape fast enough to grow 1000x in less than a decade. So I was probably biased towards ‘conservatism’ (in scare quotes because which direction counts as conservative vs. generous is determined by what other people think, not by the evidence and facts of the case).
As Habryka says, I think there’s a gap between 99% automatable and 99% automated. I think the gap between AI R&D being 99% automatable and being actually automated will be approximately one day, unless there is deliberate effort to slow down. But automating the world economy will take longer because there won’t be enough compute to replace all the jobs, many jobs will be ‘sticky’ and people won’t just be immediately laid off, many jobs are partially physical and thus would require robots to fully automate, robots which need to be manufactured, etc.
I also think there’s a gap between a fully automated economy and 1000x energy consumption. Napkin math: say your nanobots / nanofactories / optimized robo-miner-factory-complexes are capable of reproducing themselves (doubling in size) every month, and say you start with 1000 tons worth of them, produced with various human tools in various human laboratories. Then a year later you’ll only have 4M tons, and a year after that 16B tons… it’ll take a while to overtake the human economy, and then about a year after that you get to 1000x energy consumption. Is a one-month doubling time a reasonable estimate? I have no idea; I could imagine it being significantly faster but also somewhat slower.
Faster scenario: Nanobots/nanofactories that are like bacteria but better. Doubling times of one hour or so.
Slower scenario: The tools to build nanobots/nanofactories don’t exist, so you need to build the tools to build the tools to build the tools to build them. And this just takes serial time; maybe each stage takes six months.
Another slower scenario: Nanobots etc. are possible, but not with a doubling time measured in hours; in harsh environments like earth’s oceans and surfaces, doubling time even for the best nanobots is measured in weeks. Instead of “like bacteria but better,” it’s “like grass but better.”
Even slower scenario: Nanobots/nanofactories just aren’t possible even for superintelligences, except maybe if they are able to do massive experiments to search through the space of all possible designs or something like that. Which they aren’t. So they get by for now with ordinary robots digging and refining and manufacturing stuff, which has a doubling time of almost a year. “Like human industrial economy but better.” (Tesla factories produce about their weight in cars every year, I think. Rough estimate, could be off by an OOM.)
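Here is the same napkin math as a runnable sketch (Python). The 1000-ton seed and one-month doubling time are the assumptions above; the 1e10-ton “overtake the human economy” threshold is just an invented round number for illustration.

```python
import math

# Assumptions from the napkin math above (not measured quantities):
INITIAL_TONS = 1_000          # seed population built with human tools
DOUBLING_PERIOD_MONTHS = 1    # one doubling per month

def tons_after(months: float) -> float:
    """Mass of the replicator fleet after a given number of months."""
    return INITIAL_TONS * 2 ** (months / DOUBLING_PERIOD_MONTHS)

print(f"{tons_after(12):.2e}")  # ~4.10e+06 tons: the "4M tons" after one year
print(f"{tons_after(24):.2e}")  # ~1.68e+10 tons: the "16B tons" a year later

# Hypothetical threshold: call it "overtaking the human economy" when the
# fleet reaches 1e10 tons of machinery (a made-up round number).
months = 0
while tons_after(months) < 1e10:
    months += 1
print(months)  # 24 months at a one-month doubling time

# From parity, 1000x more energy is roughly another log2(1000) ~= 10
# doublings, i.e. ~10 months, matching "about a year after that."
print(math.log2(1000))  # ~9.97
```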
I’d love to see more serious analysis along the lines I sketched above, of what the plausible fastest doubling times are and how long it might take for a million ASIs with obedient human nations to get there. My current views are very uncertain and unstable.
Thanks!
I’m curious what fraction-of-2023-tasks-automatable and maybe fraction-of-world-economy-automated you think will occur at e.g. overpower time, and the median year for that. (I sometimes notice people assuming 99%-automatability occurs before all the humans are dead, without realizing they’re assuming anything.)
Distinguishing:
(a) 99% remotable 2023 tasks automatable (the thing we forecast in the OP)
(b) 99% 2023 tasks automatable
(c) 99% 2023 tasks automated
(d) Overpower ability
My best guess at the ordering is a->d->b->c.
Rationale: Overpower ability probably requires something like a fully functioning general-purpose agent capable of doing hardcore novel R&D. So, (a). However, it probably doesn’t require sophisticated robots of the sort you’d need to actually automate all 2023 tasks. It certainly doesn’t require having actually replaced all human jobs in the economy, though for strategic reasons a coalition of powerful misaligned AGIs would plausibly wait to kill the humans until it had actually rendered them unnecessary.
My best guess is that a, d, and b will all happen in the same year, possibly within the same month. c will probably take longer for reasons sketched above.
I think the gap between AI R&D being 99% automatable and being actually automated will be approximately one day
That’s wildly optimistic. There aren’t any businesses that can change anywhere near that fast.
Even if they genuinely wanted to, the laws that 99% of businesses are governed by mean that they genuinely can’t do that. The absolute minimum time for such radical change under most jurisdictions is roughly six months.
Looking at the history of step changes in industry and business, such as the industrial and information revolutions, I think the minimum plausible time between “can be automated with reasonable accuracy” and “is actually automated” is roughly a decade (give or take five years), because the humans who would be ‘replaced’ will not go gently.
That is far faster than either of the previous revolutions, though, and a lot faster than the vast majority of people are capable of adapting to. Which would lead to Interesting Times...
The idea is that R&D will already be partially automated before hitting the 99% mark, so 99% marks the end of a gradual shift towards automation.
I think there is a significant societal difference, because that last step is a lot bigger than the ones before.
In general, businesses tend to try to reduce headcount as people retire or leave, even if it means some workers have very little to do. Redundancies are expensive and take a long time—the larger they are, the longer it takes.
Businesses are also primarily staffed and run by humans who do not wish to lose their own jobs.
For a real-world example of a task that is already >99% automatable, consider real estate conveyancing.
The actual transaction is already entirely automated via simple algorithms (a toy sketch follows below): the database of land ownership is updated to indicate the new owner, and the figures representing monetary wealth are updated in two or more bank accounts.
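To make “simple algorithms” concrete, here is a toy sketch of that transaction step in Python with SQLite. The table and column names are invented for illustration; this is not any real land-registry schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE land_registry (parcel_id TEXT PRIMARY KEY, owner TEXT);
    CREATE TABLE accounts (holder TEXT PRIMARY KEY, balance INTEGER);
    INSERT INTO land_registry VALUES ('parcel-42', 'seller');
    INSERT INTO accounts VALUES ('buyer', 500000), ('seller', 0);
""")

def convey(parcel: str, buyer: str, seller: str, price: int) -> None:
    """Transfer ownership and money in one atomic transaction."""
    with conn:  # commits on success, rolls back on any exception
        conn.execute("UPDATE land_registry SET owner = ? WHERE parcel_id = ?",
                     (buyer, parcel))
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE holder = ?",
                     (price, buyer))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE holder = ?",
                     (price, seller))

convey("parcel-42", "buyer", "seller", 500000)
print(conn.execute("SELECT owner FROM land_registry").fetchone())  # ('buyer',)
```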
The work prior to that consists of identity confirmation, and document comprehension to find and raise possible issues that the buyer and seller need to be informed about.
All of this is already reasonably practicable with existing LLMs and image matching.
Have any conveyancing solicitors replaced all of their staff this way?
Keep in mind that Daniel said AI R&D.
I think one component is that the prediction is for when 99% of jobs are automatable, not when they are automated (Daniel probably has more to say here, but this one clarification seems important).