What this is suggesting to me is that if OpenAI didn’t bet on LLMs, we effectively wouldn’t have gotten more time to do alignment research, because most alignment research done before an understanding of LLMs would have been a dead end. And that actually solving alignment may require people who have internalized the paradigm shift represented by LLMs and who figure out solutions based on that. Under this model, even if we are in an insight-constrained world, OpenAI mostly hasn’t burned away effective years of alignment research (because alignment research carried out before we had LLMs would have been mostly useless anyway).
Here’s a paraphrase of the way I take you to be framing the question. Please let me know if I’m distorting it in my translation.
We often talk about ‘the timeline to AGI’ as a resource that can be burned. We want to have as much time as we can to prepare before the end. But that’s actually not quite right. The relevant segment of time is not (from “as soon as we notice the problem” to “the arrival of AGI”); it’s (from “as soon as we can make real technical headway on the problem” to “the arrival of AGI”). We’ll call that second time-segment “preparation time”.
The development of LLMs maybe did bring the date of AGI closer, but it also pulled forward the start of the “preparation time” clock.
In fact, it’s plausible that, absent LLMs, the “preparation time” clock would have started only just before AGI, or not at all.
So all things considered, the impact of pulling the start time forward seems much larger than the impact of pulling the time of AGI forward.
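To make the interval arithmetic concrete, here is a minimal sketch with entirely made-up dates (the specific years are illustrative assumptions, not forecasts from this discussion):

```python
# Toy model of the "preparation time" framing above.
# preparation_time = (arrival of AGI) - (when real technical headway becomes possible)
# All years are invented for illustration; nothing here is a forecast.

def preparation_time(headway_start: int, agi_arrival: int) -> int:
    """Years of usable alignment-research time under this framing."""
    return max(agi_arrival - headway_start, 0)

# Hypothetical no-LLM world: AGI arrives later, but research that actually
# transfers only becomes possible shortly before the end.
no_llm_world = preparation_time(headway_start=2044, agi_arrival=2045)

# Hypothetical LLM world: AGI arrives sooner, but the headway clock starts
# much earlier because the relevant systems exist to study.
llm_world = preparation_time(headway_start=2020, agi_arrival=2035)

print(no_llm_world, llm_world)  # -> 1 15
```

With those toy numbers, pulling the start of the clock forward by decades swamps losing a decade off the AGI date.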
So in evaluating that, the key question here is whether LLMs were on the critical path already.
Is it more like...
We’re going to get AGI at some point and we might or might not have gotten LLMs before that.
or
It was basically inevitable that we get LLMs before AGI. LLMs “always” come X years ahead of AGI.
or
It was basically inevitable that we get LLMs before AGI, but there’s a big range of when they can arrive relative to AGI.
And OpenAI made the gap between LLMs and AGI bigger than the counterfactual.
or
And OpenAI made the gap between LLMs and AGI smaller than the counterfactual.
My guess is that the true answer is closest to the second option: LLMs happen a predictable-ish period ahead of AGI, in large part because they’re impressive enough and generally practical enough to drive AGI development.
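Here is a similarly toy sketch of why the second option is the reassuring one under the “preparation time” framing (again, the gap length and dates are made-up parameters, not claims about the real timeline):

```python
# Toy sketch of the second option above: LLMs "always" arrive a roughly fixed
# number of years before AGI. The gap length and dates are made-up parameters.

FIXED_GAP_YEARS = 15  # assumed value of "X" purely for illustration

def preparation_time_fixed_gap(llm_arrival: int) -> int:
    """If the headway clock starts roughly when LLMs arrive, and AGI follows a
    fixed gap later, the usable research window is just that gap."""
    agi_arrival = llm_arrival + FIXED_GAP_YEARS
    return agi_arrival - llm_arrival

# Whether LLMs show up in 2020 or 2030, the window is the same length:
print(preparation_time_fixed_gap(2020), preparation_time_fixed_gap(2030))  # -> 15 15
```

Under that model, accelerating LLMs moves the AGI date but leaves the length of the usable research window roughly unchanged.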
How’s that as a summary?
Thank you, that seems exactly correct.