1. Everyone agrees that if we have less than 10 years left before the end, it’s probably not going to look like the multi-year, gradual, distributed takeoff Paul prophesies, and instead will look crazier, faster, more discontinuous, more Yudkowskian… right? In other words, everyone agrees <10-year timelines and Paul-slow takeoff are in tension with each other.*
2. Assuming we agree on 1, I’d be interested to hear whether people think we should resolve this tension by having low credence in <10-year timelines, or by not having low credence in Yudkowskian takeoff speeds. My guess is that Ajeya and Paul do the former? I myself do the latter, because the arguments and intuitions about timelines seem more solid than the arguments and intuitions about takeoff speeds.
*For reasons like: <10 years seems like not enough time for the AI industry to mature and scale up so much that additional zeros can’t be quickly added to the parameters of the best AIs at any given time; it also seems like not enough time for GWP to double in four years before the end...
EDIT to clarify: I know that e.g. Ajeya has low credence in <10-year AI doom scenarios. My question for her would be: is this partially based on being somewhat convinced of slow takeoff and updating against <10-year scenarios as a result? The report updates somewhat against low compute requirements based on EMH-like considerations; is that the extent of the influence of this sort of thing on Ajeya’s timelines, or is Ajeya e.g. also putting less weight on the short-horizon and lifetime anchors because they seem inconsistent with slow takeoff?
I still expect things to be significantly more gradual than Eliezer does. In the 10-year world I think takeoff will be very fast, but we still have much tighter bounds on how fast (maybe my median is more like a year, and very likely 2+ months). But yes, the timeline will be much shorter than my default expectation, and then you also won’t have time for big broad impacts.
I don’t think you should have super low credence in fast takeoff. I gave 30% in the article that started this off, and I’m still somewhere in that ballpark.
Perhaps you think this implies a “low credence” in <10-year timelines. But I don’t really think the arguments about timelines are “solid” to the tune of 20%+ probability within 10 years.
Thanks! Wow, I missed/forgot that 30% figure, my bad. I disagree with you much less than I thought! (I’m more like 70% fast takeoff instead of 30%.) [ETA: Update: I’m going with the intuitive definition of takeoff speeds here, not the “doubling in 4 years before doubling in 1 year” one. For my thoughts on how to define takeoff speeds, see here. If GWP doubling times is the definition we go with, then I’m more like 85% fast takeoff, I think, for reasons mentioned by Rob Bensinger below.]
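(To pin down the “doubling in 4 years before doubling in 1 year” operationalization mentioned above: here is a minimal sketch of how one might check it against a yearly GWP series. The series, the function names, and the choice to compare interval end-years are illustrative assumptions, not anything from Paul’s or Daniel’s posts.)

```python
# Sketch of the operationalization: "a complete 4-year interval in which world output
# doubles, before the first 1-year interval in which world output doubles."
# The yearly GWP series below is made up; comparing interval end-years is a simplification.

def first_doubling_end(gwp, window):
    """Index of the first year t with gwp[t] >= 2 * gwp[t - window], or None."""
    for t in range(window, len(gwp)):
        if gwp[t] >= 2 * gwp[t - window]:
            return t
    return None

def slow_takeoff(gwp):
    """True if a 4-year doubling completes strictly before the first 1-year doubling
    (or if output doubles over some 4-year span but never over any single year)."""
    four_year = first_doubling_end(gwp, 4)
    one_year = first_doubling_end(gwp, 1)
    if four_year is None:
        return False
    return one_year is None or four_year < one_year

# Hypothetical smoothly accelerating GWP (arbitrary units):
gwp = [100, 104, 110, 120, 135, 160, 205, 290, 460, 850, 1800]
print(slow_takeoff(gwp))  # True for this made-up series: 4-year doubling ends before the 1-year one
```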
So here y’all have given your sense of the likelihoods as follows:
Paul: 70% soft takeoff, 30% hard takeoff
Daniel: 30% soft takeoff, 70% hard takeoff
How would Eliezer’s position be stated in these terms? Similar to Daniel’s?
My Eliezer-model thinks that “there will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles” is far less than 30% likely, because it’s so conjunctive:
It requires that there ever be a one-year interval in which world output doubles.
It requires that there be a preceding four-year interval in which world output doubles.
So, it requires that the facts of CS be such that we can realistically get AI tech that capable before the world ends...
… and separately, that this capability not accelerate us to superintelligent AI in under four years...
… and separately, that ASI timelines be inherently long enough that we don’t incidentally get ASI within four years anyway.
Separately, it requires that individual humans make the basic-AI-research decisions to develop that tech before we achieve ASI. (Which may involve exercising technological foresight, making risky bets, etc.)
Separately, it requires that individual humans leverage that tech to intelligently try to realize a wide variety of large economic gains, before we achieve ASI. (Which may involve exercising technological, business, and social foresight, making risky bets, etc.)
Separately, it requires that the regulatory environment be favorable.
(Possibly other assumptions are required here too, like ‘the first groups that get this pre-AGI tech even care about transforming the world economy, vs. preferring to focus on more basic research, or alignment / preparation-for-AGI, etc.’)
You could try to get multiple of those properties at once by assuming specific things about the world’s overall adequacy and/or about the space of all reachable intelligent systems; but from Eliezer’s perspective these views fall somewhere on the spectrum between ‘unsupported speculation’ and ‘flatly contradicted by our observations so far’, and there are many ways to try to tweak civilization to be more adequate and/or the background CS facts to be more continuous, and still not hit the narrow target “a complete 4 year interval in which world output doubles” (before AGI destroys the world or a pivotal act occurs).
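(Purely as a back-of-the-envelope illustration of the “so conjunctive” point, with made-up numbers that are not Rob’s or Eliezer’s: even if each requirement looks individually likely, the conjunction shrinks quickly.)

```python
# Illustrative only: seven hypothetical requirements, each assumed 80% likely and
# treated as independent for simplicity. The point is just that conjunctions of
# individually plausible claims multiply down fast.
conjunct_probs = [0.8] * 7

p_conjunction = 1.0
for p in conjunct_probs:
    p_conjunction *= p

print(f"{p_conjunction:.2f}")  # 0.21 -- already below Paul's 30%
```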
(I’m probably getting a bunch of details about Eliezer’s actual model wrong above, but my prediction is that his answer will at least roughly look like this.)
Affirmed.
Well said! This resonates with my Eliezer-model too.
Taking this into account I’d update my guess of Eliezer’s position to:
Eliezer: 5% soft takeoff, 80% hard takeoff, 15% something else
This last “something else” bucket is added because “the Future is notoriously difficult to predict” (paraphrasing Eliezer).