The idea that moving faster now will reduce speed later is a bit counterintuitive. Here’s a drawing illustrating the idea:
One minor nitpick. Slow takeoff implies shorter timelines, so B should reach the top of the capabilities axis at a later point in time than A.
If you want an intuition for why this is true, consider that in our current (slow takeoff) world, tools like Codex and GPT-4 already exist and accelerate AI research. If we instead deployed no AI tools until we could build fully superintelligent AGI, we would have more time overall before reaching that point.
Now, it might still be the case that “Time during takeoff is more valuable than time before, so it’s worth trading time now for time later”, but it’s still wrong to depict the graphs as taking the same amount of time to reach a certain level of capabilities.
This is helpful to think about when considering policies like the pause: we gain some amount of time at the current level of AI development and sacrifice some (smaller) amount of time at a higher level of development. Even assuming no race dynamics, whether a pause is beneficial depends on the ratio of time gained now to time lost later and on the relative value of those two kinds of time.
A reasonable guess is that algorithmic improvements matter about as much as Moore’s Law; since compute keeps advancing during a pause, only the algorithmic half of progress is forgone, so effectively we are trading 6 months now for roughly 3 months later.
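To make that trade-off concrete, here is a minimal sketch of the arithmetic in Python. The 50/50 split between algorithmic progress and compute is the guess from above; the pause length and the relative value of later time are purely illustrative placeholders, not claims about the real numbers:

```python
# Illustrative sketch of the pause trade-off, under assumed numbers.
# Assumption: algorithmic progress and compute (Moore's Law) each account
# for roughly half of overall capability progress, so pausing algorithmic
# work for 6 months only delays reaching a given capability level by
# about 3 months, because compute keeps improving during the pause.

pause_length_months = 6.0      # time gained now, at current capability levels
algorithmic_share = 0.5        # assumed share of progress coming from algorithms
time_lost_later = pause_length_months * algorithmic_share  # ~3 months lost later

# Placeholder: how much more valuable a month near dangerous capability
# levels is, relative to a month at today's capability levels.
value_ratio_later_to_now = 1.5

benefit_now = pause_length_months * 1.0
cost_later = time_lost_later * value_ratio_later_to_now

# On this crude model (ignoring race dynamics and the forgone safety
# knowledge discussed below), the pause is worthwhile iff benefit > cost.
print(f"Gain now: {benefit_now} value-months, cost later: {cost_later} value-months")
print("Pause net-positive on this model:", benefit_now > cost_later)
```

On these placeholder numbers the pause comes out ahead, but the conclusion flips once time near dangerous capability levels is valued more than about twice as highly as time now.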
Another important point is that in the fast-takeoff/pause world, we arrive at the “dangerous capabilities” level with less total knowledge. If we assume some fungibility between AI capabilities research and safety research, then the foregone algorithmic improvement also means some foregone safety-related knowledge.