That’s fair. Here are some things to consider:
1 - I think 2017 was not that long ago. My hunch is that the low-level architecture of the network itself is not a bottleneck yet; I’d point more to training procedures and algorithms. I’d count RLHF and MoE as significant developments, and those are even more recent.
2 - I give maybe a 30% chance of a stall, in the case that little commercial disruption comes of LLMs. I think there will still be enough research going on at the major labs, and even universities working at a smaller scale give a decent chance at efficiency gains and other advances the big labs can incorporate. Then again, if we agree that they won’t build the power plant, that is also my main way of stalling the timeline 10 years. The reason I only put 30% is that I’m expecting multimodality and Aschenbrenner’s “unhobblings” to give the industry a couple more years of chances to find profit.
Both of those seem plausible, though the second point seems fairly different from your original claim that ‘timelines are fundamentally driven by scale and compute’.