I think the “step-function” thing is more that he believes that these things will follow exponential/logistic curves, which can easily look like step-functions to humans, but are still perfectly lawful and continuous.
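To make the claim being summarized here concrete, a minimal sketch (my own illustration, not from either party): a logistic curve with a steep transition looks like a discrete jump when sampled at coarse, human-timescale intervals, even though it is smooth and continuous everywhere. The steepness value and sampling grids below are arbitrary choices for illustration.

```python
# Toy illustration (not from the discussion): a steep logistic looks like a
# step function under coarse sampling, but is smooth under fine sampling.
import numpy as np

def logistic(t, midpoint=0.0, steepness=10.0):
    """Standard logistic function; `steepness` controls how abrupt the rise appears."""
    return 1.0 / (1.0 + np.exp(-steepness * (t - midpoint)))

# Coarse, "human-observation" sampling: the output jumps from ~0 to ~1.
coarse_t = np.arange(-5, 6, 1.0)
print("coarse:", np.round(logistic(coarse_t), 3))

# Fine sampling around the transition: the same curve rises gradually.
fine_t = np.arange(-1.0, 1.01, 0.1)
print("fine:  ", np.round(logistic(fine_t), 3))
```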
This is completely different from my Eliezer-model. My Eliezer-model says:
Every invention is a step change, if it involves a step where you go from ‘not knowing how to get the thing working’ to ‘knowing how to get the thing working’; or if it involves a step where the thing goes from ‘not working’ to ‘working’.
Physics is continuous, so there are always ways to look at step changes like these and find some continuous process underlying them, with the benefit of hindsight. But this isn’t very useful, because it’s not something we’re good at doing with foresight. (If it were, there would be someone in the world with a good empirical track record of timing when different inventions will happen years in advance.)
Macroeconomic growth isn’t a smooth curve because physics is continuous; nor is it a smooth curve because there’s some law of nature saying ‘everything has to be continuous’. Rather, it’s a smooth curve because it’s built out of inventions, trades, etc. that are individually low-impact, so no one event dominates. Zoom out on a large number of minor step changes and you get something that isn’t itself a step change; a toy sketch of this aggregation point follows this summary.
The difference between AGI and nukes on the one hand, and most other inventions on the other hand, isn’t that AGI or nukes are ‘more step-function-y’ than Microsoft Word, the Wright Flyer, reggae music, etc. It’s that AGI and nukes are higher-impact than Microsoft Word, the Wright Flyer, etc., so the same zero-to-one leap that produces a small impact when it’s ‘you’ve finally gotten your word processor to start displaying text’, produces a large impact when it’s ‘you’ve finally gotten your AI to start modeling messy physical paths-through-time’.
And the reason inventing the first AGI or the first nuke has a larger impact than inventing the first reggae song, is because the physics of general intelligence and of nuclear chain reactions just entails they’re very powerful processes. If you invent a weapon that increases energy yield by 2x, then you’ll have a 2x-ish impact on the world; if you invent a weapon that increases energy yield by 1000x, then you’ll have a 1000x-ish impact on the world.
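To make the aggregation point above concrete, here is a toy sketch of my own, not from the original discussion; the impact sizes, invention counts, and time horizon are arbitrary assumptions. Summing many low-impact step changes at random times yields a curve with no visible dominant jump, while adding a single high-impact step change leaves an obvious discontinuity.

```python
# Toy illustration: many low-impact step changes sum to a roughly smooth curve,
# while adding one high-impact step change leaves an obvious discontinuity.
import numpy as np

rng = np.random.default_rng(0)
T = 200  # number of time steps in the toy history

def aggregate(impacts, times):
    """Each invention adds a step of size impacts[i] at time times[i]; return the running total."""
    curve = np.zeros(T)
    for t, impact in zip(times, impacts):
        curve[t:] += impact  # a zero-to-one step change, scaled by its impact
    return curve

small_impacts = np.full(1000, 0.01)                        # 1000 low-impact inventions
small_times = rng.integers(1, T, size=len(small_impacts))  # random invention dates

many_small = aggregate(small_impacts, small_times)
with_huge = aggregate(np.append(small_impacts, 10.0),      # same inventions, plus one
                      np.append(small_times, T // 2))      # 1000x-scale invention mid-history

print("many small:   total =", round(many_small[-1], 2),
      "| largest single jump =", round(np.diff(many_small).max(), 2))
print("plus one big: total =", round(with_huge[-1], 2),
      "| largest single jump =", round(np.diff(with_huge).max(), 2))
```

With only low-impact steps, no single jump is visible at the scale of the whole curve; add one high-impact step and the aggregate inherits its discontinuity, which is the sense in which AGI or nukes differ from Microsoft Word or the Wright Flyer.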
Quoting EY:

[...] Before anybody built the first critical nuclear pile in a squash court at the University of Chicago, was there a pile that was almost but not quite critical? Yes, one hour earlier. Did people already build nuclear systems and experiment with them? Yes, but they didn’t have much in the way of net power output. Did the Wright Brothers build prototypes before the Flyer? Yes, but they weren’t prototypes that flew, just 80% slower.
I guarantee you that, whatever the fast takeoff scenario, there will be some way to look over the development history, and nod wisely and say, “Ah, yes, see, this was not unprecedented, here are these earlier systems which presaged the final system!” Maybe you could even look back to today and say that about GPT-3, yup, totally presaging stuff all over the place, great. But it isn’t transforming society because it’s not over the social-transformation threshold.
AlphaFold presaged AlphaFold 2 but AlphaFold 2 is good enough to start replacing other ways of determining protein conformations and AlphaFold is not; and then neither of those has much impacted the real world, because in the real world we can already design a vaccine in a day and the rest of the time is bureaucratic time rather than technology time, and that goes on until we have an AI over the threshold to bypass bureaucracy.
[...] There is not necessarily a presage of 9/11 where somebody flies a small plane into a building and kills 100 people, before anybody flies 4 big planes into 3 buildings and kills 3000 people; and even if there is some presaging event like that, which would not surprise me at all, the rest of the world’s response to the two cases was evidently discontinuous. You do not necessarily wake up to a news story that is 10% of the news story of 2001/09/11, one year before 2001/09/11, written in 10% of the font size on the front page of the paper.
Physics is continuous but it doesn’t always yield things that “look smooth to a human brain”. Some kinds of processes converge to continuity in strong ways where you can throw discontinuous things in them and they still end up continuous, which is among the reasons why I expect world GDP to stay on trend up until the world ends abruptly; because world GDP is one of those things that wants to stay on a track, and an AGI building a nanosystem can go off that track without being pushed back onto it.