Planned summary for the Alignment Newsletter:

This post considers three different kinds of “discontinuity” we might imagine in AI development. First, there could be a sharp change in progress, or in the rate of progress, that breaks with the previous trendline (this is the sort of thing <@examined@>(@Discontinuous progress in history: an update@) by AI Impacts). Second, regardless of whether there is such a discontinuity, the rate of progress could be either slow or fast. Finally, regardless of the rate of progress, the calendar time until AGI could be either short or long.
The post then applies these categories to three questions: Will we see AGI coming before it arrives? Will we be able to “course correct” if problems arise? Is it likely that a single actor will obtain a decisive strategic advantage?