Algorithmic improvements relevant to my argument are those that happen after long-horizon task capable AIs are demonstrated; in particular, it doesn't matter how much progress is happening now, other than as evidence about what happens later.
heavily overtrained by Chinchilla standards
This is necessarily part of it: overtraining involves using more compute, not less, which is natural given that new training environments are coming online, and it doesn't need any algorithmic improvements at all to produce models that are both cheaper at inference and smarter. You can take a Chinchilla-optimal model, make it 3x smaller, train it on 9x the data, expending 3x more compute, and get approximately the same result. If you increase the compute and data a bit more, the model becomes more capable. Some current improvements are probably due to better use of pre-training data, but those won't survive significant further scaling intact. There are also improvements in post-training, but they are even less relevant to my argument, assuming they don't lag too far behind in unlocking the key thresholds of capability.
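The 3x-smaller / 9x-data / 3x-compute arithmetic follows from the standard approximation that training compute scales as C ≈ 6·N·D (N = parameters, D = tokens). A minimal sketch, with illustrative numbers not taken from the discussion:

```python
# Overtraining arithmetic under the C ≈ 6 * N * D approximation
# for training FLOPs (N = parameter count, D = training tokens).

def train_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute: C ≈ 6 * N * D."""
    return 6 * n_params * n_tokens

# Hypothetical Chinchilla-optimal run (illustrative numbers).
N, D = 70e9, 1.4e12
base = train_flops(N, D)

# Overtrained variant: 3x smaller model trained on 9x the data.
overtrained = train_flops(N / 3, 9 * D)

# Training compute goes up by (1/3) * 9 = 3x, while the resulting
# model is 3x smaller, hence roughly 3x cheaper at inference.
print(overtrained / base)  # ratio of training compute: ~3x
```

The point the paragraph makes is visible in the ratio: the overtrained run pays 3x the training compute to buy a model of similar quality that costs a third as much to serve, with no algorithmic change involved.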
Algorithmic improvements relevant to my argument are those that happen after long-horizon task capable AIs are demonstrated; in particular, it doesn't matter how much progress is happening now, other than as evidence about what happens later.
My apologies, you’re right, I had misunderstood you, and thus we’ve been talking at cross-purposes. You were discussing:
…if research capable TAI can lag behind government-alarming long-horizon task capable AI (that does many jobs and so even Robin Hanson starts paying attention)
while I was instead talking about how likely it was that running out of additional money to invest would slow reaching either of these forms of AGI (which I personally view as likely to happen quite close together, as Leopold also assumes) by enough to make more than a year or two’s difference.