For specifically discussing the takeoff models in the original Yudkowsky / Christiano discussion, what about:
Economic vs. atomic takeoff
“Economic takeoff” because Paul’s model implies rapid and transformative economic growth prior to the point at which AIs can just take over completely. Eliezer’s model, by contrast, is that rapid economic growth prior to takeover is not particularly necessary—a sufficiently capable AI could act quickly, or amass resources while keeping a low profile, such that from the perspective of almost all of humanity, takeover is extremely sudden.
Note: “atomic” here doesn’t necessarily mean “nanobots”—the term is meant to connote that an AI does something physically transformative, e.g. releasing a super virus, hacking / melting all uncontrolled GPUs, constructing a Dyson sphere, etc. A distinguishing feature of Eliezer’s model is that those kinds of things could happen before the underlying AI capabilities that enable them have more widespread economic effects.
IIUC, both Eliezer and Paul agree that you get atomic takeoff of some kind eventually, so one of the main disagreements between Paul and Eliezer could be framed as their answer to the question: “Will economic takeoff precede atomic takeoff?” (Paul says probably yes, Eliezer says maybe.)
Separately, an issue I have with smooth / gradual vs. sharp / abrupt (the current top-voted terms) is that they’ve become a bit overloaded, conflated with a bunch of stuff related to recent AI progress, namely scaling laws and incremental / iterative improvements to chatbots and agents. IMO, those things aren’t actually closely related to Christiano-style takeoff, nor particularly suggestive of it—if anything, they suggest the opposite:
Scaling laws and the current pace of algorithmic improvement imply that labs can continue improving the underlying cognitive abilities of AI systems faster than those systems can actually be deployed into the world to generate useful economic growth. e.g. o1 is already “PhD level” in many domains, but doesn’t seem to be on pace to replace a significant amount of human labor or knowledge work before it is obsoleted by Opus 3.5 or whatever.
Smooth scaling of underlying cognition doesn’t imply smooth takeoff. Predictable, steady improvements on a benchmark via larger models or more compute don’t tell you at which point on the graph you get something economically or technologically transformative.
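The point above can be sketched with a toy model (to be clear, the specific functions and threshold below are illustrative assumptions of mine, not anything from the original Yudkowsky / Christiano discussion): even if capability is a perfectly smooth, predictable function of compute, real-world impact can still jump abruptly if it depends on crossing some capability threshold.

```python
import math

def capability(compute: float) -> float:
    """Smooth, predictable 'scaling law': capability grows with log-compute.
    (Toy functional form chosen for illustration.)"""
    return math.log10(compute)

def economic_impact(cap: float, threshold: float = 6.0) -> float:
    """Hypothetical threshold effect: negligible impact until a critical
    capability level is crossed, then large impact. The threshold value
    is arbitrary and, crucially, not readable off the capability curve."""
    return 0.0 if cap < threshold else 100.0

for compute in [1e4, 1e5, 1e6, 1e7]:
    cap = capability(compute)
    print(f"compute={compute:.0e}  capability={cap:.1f}  impact={economic_impact(cap):.0f}")
```

The capability column climbs in even steps while the impact column sits at zero and then jumps, which is the whole point: watching the smooth curve tells you nothing about where the jump lands.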