A lot of models of what can or can’t work in AI alignment depend on intuitions about whether to expect “true discontinuities” or just “steep bits”.
Note that Nate and Eliezer expect there to be some curves you can draw after the fact that show continuity in AGI progress on particular dimensions. They just don’t expect these to be the curves with the most practical impact (and they don’t think we can identify the curves with foresight, in 2022, to make strong predictions about AGI timing or rates of progress).
Quoting Nate in 2018:
On my model, the key point is not ‘some AI systems will undergo discontinuous leaps in their intelligence as they learn,’ but rather, ‘different people will try to build AI systems in different ways, and each will have some path of construction and some path of learning that can be modeled relatively well by some curve, and some of those curves will be very, very steep early on (e.g., when the system is first coming online, in the same way that the curve “how good is Google’s search engine” was super steep in the region between “it doesn’t work” and “it works at least a little”), and sometimes a new system will blow past the entire edifice of human knowledge in an afternoon shortly after it finishes coming online.’ Like, no one is saying that Alpha Zero had massive discontinuities in its learning curve, but it also wasn’t just AlphaGo Lee Sedol but with marginally more training: the architecture was pulled apart, restructured, and put back together, and the reassembled system was on a qualitatively steeper learning curve.
My point here isn’t to throw ‘AGI will undergo discontinuous leaps as they learn’ under the bus. Self-rewriting systems likely will (on my models) gain intelligence in leaps and bounds. What I’m trying to say is that I don’t think this disagreement is the central disagreement. I think the key disagreement is instead about where the main force of improvement in early human-designed AGI systems comes from — is it from existing systems progressing up their improvement curves, or from new systems coming online on qualitatively steeper improvement curves?
And quoting Eliezer more recently:
if the future goes the way I predict and yet anybody somehow survives, perhaps somebody will draw a hyperbolic trendline on some particular chart where the trendline is retroactively fitted to events including those that occurred in only the last 3 years, and say with a great sage nod, ah, yes, that was all according to trend, nor did anything depart from trend
There is, I think, a really basic difference of thinking here, which is that on my view, AGI erupting is just a Thing That Happens and not part of a Historical Worldview or a Great Trend.
Human intelligence wasn’t part of a grand story reflected in all parts of the ecology, it just happened in a particular species.
Now afterwards, of course, you can go back and draw all kinds of Grand Trends into which this Thing Happening was perfectly and beautifully fitted, and yet, it does not seem to me that people have a very good track record of thereby predicting in advance what surprising news story they will see next—with some rare, narrow-superforecasting-technique exceptions, like the Things chart on a steady graph and we know solidly what a threshold on that graph corresponds to and that threshold is not too far away compared to the previous length of the chart.
One day the Wright Flyer flew. Anybody in the future with benefit of hindsight, who wanted to, could fit that into a grand story about flying, industry, travel, technology, whatever; if they’d been on the ground at the time, they would not have thereby had much luck predicting the Wright Flyer. It can be fit into a grand story but on the ground it’s just a thing that happened. It had some prior causes but it was not thereby constrained to fit into a storyline in which it was the plot climax of those prior causes.
My worldview sure does permit there to be predecessor technologies and for them to have some kind of impact and for some company to make a profit, but it is not nearly as interested in that stuff, on a very basic level, because it does not think that the AGI Thing Happening is the plot climax of a story about the Previous Stuff Happening.
I think the Hansonian viewpoint—which I consider another gradualist viewpoint, and whose effects were influential on early EA and which I think are still lingering around in EA—seemed surprised by AlphaGo and Alpha Zero, when you contrast its actual advance language with what actually happened. Inevitably, you can go back afterwards and claim it wasn’t really a surprise in terms of the abstractions that seem so clear and obvious now, but I think it was surprised then; and I also think that “there’s always a smooth abstraction in hindsight, so what, there’ll be one of those when the world ends too”, is a huge big deal in practice with respect to the future being unpredictable.
(As an example, compare Paul Christiano’s post on takeoff speeds from 2018, which is heavily about continuity, to the debate between Paul and Eliezer in late 2021. Despite the participants spending years in discussion, progress on bridging the continuous-discrete gap between them seems very limited.)
Paul and Eliezer have had lots of discussions over the years, but I don’t think they talked about takeoff speeds between the 2018 post and the 2021 debate?
Note that Nate and Eliezer expect there to be some curves you can draw after the fact that show continuity in AGI progress on particular dimensions. They just don’t expect these to be the curves with the most practical impact (and they don’t think we can identify the curves with foresight, in 2022, to make strong predictions about AGI timing or rates of progress).
Yes, but conversely, I could say I’d expect some curves to show discontinuous jumps, mostly in dimensions which no one really cares about. Clearly the cruxes are about discontinuities in dimensions which matter.
As I tried to explain in the post, I think continuity assumptions mostly get you things other than “strong predictions about AGI timing”.
...
My point here isn’t to throw ‘AGI will undergo discontinuous leaps as they learn’ under the bus. Self-rewriting systems likely will (on my models) gain intelligence in leaps and bounds. What I’m trying to say is that I don’t think this disagreement is the central disagreement. I think the key disagreement is instead about where the main force of improvement in early human-designed AGI systems comes from — is it from existing systems progressing up their improvement curves, or from new systems coming online on qualitatively steeper improvement curves?
I would paraphrase this as “assuming discontinuities at every level”—both in single-system training and in the more macroscopic exploration of the “space of learning systems”—but stating that the key disagreement is about discontinuities in the space of model architectures rather than about the jumpiness of single-model training.
Personally, I don’t think the distinction between ‘movement by learning of a single model’, ‘movement by scaling’, and ‘movement by architectural changes’ will necessarily be that big.
There is, I think, a really basic difference of thinking here, which is that on my view, AGI erupting is just a Thing That Happens and not part of a Historical Worldview or a Great Trend.
This seems to more or less support what I wrote? Expecting a Big Discontinuity, and this being a pretty deep difference?
I think the Hansonian viewpoint—which I consider another gradualist viewpoint, and whose effects were influential on early EA and which I think are still lingering around in EA—seemed surprised by AlphaGo and Alpha Zero, when you contrast its actual advance language with what actually happened. Inevitably, you can go back afterwards and claim it wasn’t really a surprise in terms of the abstractions that seem so clear and obvious now, but I think it was surprised then; and I also think that “there’s always a smooth abstraction in hindsight, so what, there’ll be one of those when the world ends too”, is a huge big deal in practice with respect to the future being unpredictable.
My overall impression is that Eliezer likes to argue against “Hansonian views”, but something like “continuity assumptions” seems like a much broader category than Robin’s views.
Paul and Eliezer have had lots of discussions over the years, but I don’t think they talked about takeoff speeds between the 2018 post and the 2021 debate?
In my view, continuity assumptions are not just about takeoff speeds. E.g., IDA (iterated distillation and amplification) makes much more sense in a continuous world—if you reach a cliff, working IDA should slow down and warn you. In the Truly Discontinuous world, you just jump off the cliff at some unknown step.
I would guess that a majority of all debates and disagreements between Paul and Eliezer have some “continuity” component: e.g., the question of whether we can learn a lot of important alignment stuff on non-AGI systems is a typical continuity problem, but only tangentially relevant to takeoff speeds.