On my version of the “continuous view”, the Technology X story seems plausible, but it starts with a shitty version of Technology X that doesn’t immediately produce billions of dollars of impact (or anything comparably dramatic, e.g. killing all humans). That early version then improves faster than the existing technology, such that an outside observer tracking both technologies could use trend extrapolation to predict that Technology X would be the one to reach TAI.
(And you can make this prediction at least, say, 3 years in advance of TAI, i.e. Technology X isn’t going to be accelerating so fast that you have zero time to react.)
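As a concrete (and entirely made-up) illustration of that extrapolation step: the sketch below invents a capability metric for the incumbent technology and for Technology X, fits log-linear (exponential-growth) trends to each, and asks which one is on track to cross a stipulated “TAI threshold” first, and how many years of warning the observer gets. The growth rates, threshold, and dates are all hypothetical, chosen only to show the mechanics.

```python
# Hypothetical illustration of trend extrapolation over two technologies.
# All numbers are invented; only the mechanics (fit an exponential trend on a
# log scale, extrapolate to a threshold crossing) are the point.

import numpy as np

years = np.arange(2010, 2022)                # observation window
# Invented capability metrics (arbitrary units; the log scale is what matters):
incumbent = 100 * 1.15 ** (years - 2010)     # mature tech, growing 15%/year
tech_x = 1 * 2.0 ** (years - 2010)           # starts "shitty", doubles yearly

TAI_THRESHOLD = 1e6                          # stipulated capability level

def crossing_year(years, metric, threshold):
    """Fit a log-linear trend and extrapolate to when it crosses the threshold."""
    slope, intercept = np.polyfit(years, np.log(metric), 1)
    return (np.log(threshold) - intercept) / slope

x_cross = crossing_year(years, tech_x, TAI_THRESHOLD)
inc_cross = crossing_year(years, incumbent, TAI_THRESHOLD)

print(f"Technology X on trend crosses the threshold around {x_cross:.1f}")
print(f"Incumbent on trend crosses the threshold around {inc_cross:.1f}")
print(f"Observer in {years[-1]} gets ~{x_cross - years[-1]:.1f} years of warning")
```

With these made-up numbers, Technology X starts far behind but its steeper trend line crosses the threshold decades before the incumbent’s, and an observer reading both trend lines in the last observed year would see the crossing roughly nine years out, which is exactly the kind of advance warning the “continuous view” relies on.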
Imagine that we lived in a universe in which it was plausible that the LHC creates a black hole or causes false vacuum collapse. It seems to me that such a universe could still have a techno-economic trajectory broadly similar to our own, for the same reasons. So, in that universe, would it make sense to argue “the LHC cannot destroy the world because its cost is an insufficient fraction of world GDP[1]”? It seems to me that argument would be strange there in much the same way that the economic argument about AI is strange here.
The “continuous view” argument is about takeoff speeds, not about AI risk?
If AI risk arose from narrow systems that couldn’t produce a billion dollars of value, then I’d expect that risk could arise more discontinuously from a new paradigm. But AI risk arises from systems that are sufficiently intelligent that they could produce billions of dollars of value.
Yes, this is something I discuss in the edit (you probably started typing your reply before I posted it).