I’m saying that faster AI progress now tends to lead to slower AI progress later.
My best guess is that this is true, but I think there are outside-view reasons to be cautious.
We have some preliminary, unpublished work[1] at AI Impacts trying to distinguish between two kinds of progress dynamics for technology:
There’s an underlying progress trend, which only depends on time, and the technologies we see are sampled from a distribution that evolves according to this trend. A simple version of this might be that the goodness G we see for AI at time t is drawn from a normal distribution centered on G_c(t) = G_0 exp(At). This means that, apart from how it affects our estimates of G_0, A, and the width of the distribution, our best guess for what we’ll see in the non-immediate future does not depend on what we see now.
There’s no underlying trend “guiding” progress. Advances happen at random times and improve the goodness by random amounts. A simple version of this might be a small probability per day that an advance occurs, with its size then drawn independently from a distribution. The main distinction here is that, under this model, seeing a large advance at time t0 does decrease our estimate of the time at which enough advances will have accumulated to reach goodness level G_agi. (There’s a code sketch of both dynamics below.)
(A third hypothesis, of slightly lower crudeness level, is that advances are drawn without replacement from a finite population, where maybe the probability per unit time depends on the size of the remaining population. This is closer to my best guess at how the world actually works, but we were trying to model progress in data that was not slowing down, so we didn’t look at this.)
Obviously neither of these models describes reality, but we might be able to find evidence about which one is less of a departure from reality.
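To make the distinction concrete, here’s a minimal simulation sketch of the two dynamics in Python. All names and parameter values are made up for illustration, “goodness” is an abstract scalar, and none of this reflects our actual dataset or analysis. The point it demonstrates: under the trend model, being unusually far along early on tells you nothing about when you reach the target level; under the jump model, it predicts an earlier arrival.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 365 * 20                          # days simulated
G0, A, SIGMA = 1.0, 0.5 / 365, 0.1    # trend model: base level, growth rate, noise
P_ADV, MEAN_JUMP = 0.02, 0.05         # jump model: advance prob/day, mean jump size
G_AGI = G0 * np.exp(A * T / 2)        # arbitrary target goodness level

def trend_path():
    # Dynamic 1: each observation is an independent draw around a fixed
    # exponential trend G_c(t) = G0 * exp(A * t).
    t = np.arange(T)
    return G0 * np.exp(A * t + rng.normal(0.0, SIGMA, T))

def jump_path():
    # Dynamic 2: advances arrive at random times and add random amounts
    # to log-goodness; there is no underlying trend to revert to.
    advance_days = rng.random(T) < P_ADV
    steps = rng.exponential(MEAN_JUMP, T) * advance_days
    return G0 * np.exp(np.cumsum(steps))

def first_hit(path, target):
    hits = np.nonzero(path >= target)[0]
    return hits[0] if hits.size else None

for name, gen in [("trend", trend_path), ("jumps", jump_path)]:
    early, arrival = [], []
    for _ in range(2000):
        path = gen()
        t_hit = first_hit(path, G_AGI)
        if t_hit is not None:
            early.append(np.log(path[T // 10]))   # log-goodness after ~2 years
            arrival.append(t_hit)
    r = np.corrcoef(early, arrival)[0, 1]
    print(f"{name}: corr(early log-goodness, arrival day) = {r:+.2f}")
```

Again, the numbers are arbitrary; the sketch is just meant to show that the two dynamics make different predictions about how informative a surprisingly large advance is.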
When we looked at data for advances in AI and other technologies, we did not find evidence that the fractional size of an advance depended on time since the start of the trend or on time since the last advance. In other words, it seems to be the case that a large advance at time t0 has no effect on the (fractional) rate of progress at later times.
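For a sense of what that check looks like in code, here is a rough reconstruction of the general idea, with a made-up dataset standing in for a real record of advances (this is not our actual data or our actual statistics):

```python
import numpy as np

# Hypothetical record of successive advances on some metric:
# years since trend start, and goodness after each advance. Made-up values.
t = np.array([0.0, 0.6, 1.5, 2.3, 3.7, 4.5, 5.8])
g = np.array([1.0, 1.4, 1.6, 2.5, 2.9, 4.1, 4.9])

frac_size = np.diff(g) / g[:-1]   # fractional size of each advance
gap = np.diff(t)                  # time since the previous advance
elapsed = t[1:]                   # time since the start of the trend

print("corr(size, time since last advance):", np.corrcoef(frac_size, gap)[0, 1])
print("corr(size, time since trend start): ", np.corrcoef(frac_size, elapsed)[0, 1])
```

A flat relationship in both comparisons is roughly what you’d expect if a large advance doesn’t change the subsequent fractional rate of progress; a real version of this would of course need more careful statistics than a pair of correlations.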
Some caveats:
This work is super preliminary, our dataset is limited in size and probably incomplete, and we did not do any remotely rigorous statistics.
This was motivated by progress trends that mostly tracked an exponential, so progress that approaches the inflection point of an S-curve might behave differently.
These hypotheses were not chosen in any way more principled than “it seems like many people have implicit models like this” and “this seems relatively easy to check, given the data we have.”
Also, I asked Bing Chat about this yesterday and it gave me some economics papers that, at a glance, seem much better than what I’ve been able to find previously. So my views on this might change.
[1] It’s unpublished because it’s super preliminary, and I haven’t been putting more work into it because my impression was that it wasn’t cruxy enough to be worth the effort. I’d be interested to know if this seems important to others.