Since some aspects of your version of “slow” takeoff have traditionally been associated with the “fast” view (e.g., rapid transformation of society), I’m tempted to try to tease apart what we care about in the slow-vs-fast discussion into multiple questions:
1) How much of a lead will one team need to have over others in order to have a decisive strategic advantage...
a) in wall clock time?
b) in economic doublings time?
2) How much time do we have to solve hard alignment problems...
a) in wall clock time?
b) in economic doublings time?
It seems that one could plausibly hold a “slow” view for some of these questions and a “fast” view for others.
For example, maybe Alice thinks that GDP growth will accelerate, and that there are gains to scale/centralization for AI projects, but that it will be relatively easy for human operators to keep AI systems under control. We could characterize her view as “fast” on 1a and 2a (because everything is happening so fast in wall clock time), “fast” on 1b (because of first-mover advantage), and “slow” on 2b (because we’ll have lots of time to figure things out).
In contrast, maybe Bob thinks GDP growth won’t accelerate much until the first AI system crosses some universality threshold, at which point it will rapidly spiral out of control, and that this will all take place many decades from now. We might call his view “fast” on 1a and 1b, and “slow” on 2a and 2b.
Curious to hear thoughts on this framework and whether there’s a better one.
I agree with the usefulness of splitting (1) vs. (2).
Question (2) is really a family of questions (how much time between <eventA> and needing to achieve <taskB> to avert doom?), and it seems worth splitting those up too.
I also agree with splitting subjective vs. wall clock time; those are probably the ways I’d split it.