> while this paradigm of ‘training a model that’s an agi, and then running it at inference’ is one way we get to transformative agi, i find myself thinking that probably WON’T be the first transformative AI, because my guess is that there are lots of tricks using lots of compute at inference to get not quite transformative ai to transformative ai.
Agreed that this is far from the only possibility; we have some discussion of increasing inference time to make the final push up to generality in the passage beginning “If general intelligence is achievable by properly inferencing a model with a baseline of capability that is lower than human-level...” We did some further thinking on this topic that didn’t seem quite core to the post, so Connor has written it up on his blog here: https://arcaderhetoric.substack.com/p/moravecs-sea
> and i doubt that these tricks can funge against train time compute, as you seem to be assuming in your analysis.
Our method 5 is intended for this case—we’d use an appropriate ‘capabilities per token’ multiplier to account for needing extra inference time to reach human level.
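To make the adjustment concrete, here is a minimal sketch of how a ‘capabilities per token’ multiplier could discount a population estimate. The function name, parameters, and all numbers are illustrative assumptions for this comment, not figures from the post: the idea is simply that a below-human-level model which needs extra inference tokens to match human output supports proportionally fewer human-equivalent workers per unit of compute.

```python
def human_equivalent_population(inference_flop_budget: float,
                                flop_per_token: float,
                                tokens_per_human_equivalent: float,
                                capability_per_token: float = 1.0) -> float:
    """Rough estimate of human-equivalent workers a compute budget supports.

    capability_per_token < 1 models a 'barely sub-human' system that must
    spend extra tokens (e.g. longer chains of thought) to match human output.
    All inputs are hypothetical placeholders, not estimates from the post.
    """
    tokens = inference_flop_budget / flop_per_token
    return tokens * capability_per_token / tokens_per_human_equivalent

# Illustrative numbers only:
baseline = human_equivalent_population(1e23, 1e12, 1e7)         # human-level model
discounted = human_equivalent_population(1e23, 1e12, 1e7, 0.1)  # needs 10x tokens
```

With these made-up inputs, a model needing 10x the tokens per human-equivalent output yields a population estimate 10x smaller, which is the shape of adjustment method 5 is meant to capture.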
It doesn’t prevent (1), but it does make it less likely: a ‘barely general’ AGI is less likely to be able to escape control than an ASI. It doesn’t prevent (2).

We acknowledge (3) in section IV: “We can also incorporate multiple firms or governments building AGI, by multiplying the initial AGI population by the number of such additional AGI projects. For example, 2x if we believe China and the US will be the only two projects, or 3x if we believe OpenAI, Anthropic, and DeepMind each achieve AGI.” We think there are likely to be a small number of companies near the frontier, so this is likely to be a modest multiplier.

Re (4), I think ryan_b made relevant points. I would expect some portion of compute to be tied up in long-term contracts. I agree the developer of AGI would likely be able to increase their access to compute over time, but it’s not obvious to me how fast that would be.
I mostly agree on this one, though again I think it makes (1) less likely for the same reason. As you say, the implementation details matter for (3) and (4), and it’s not clear to me that it ‘probably’ wouldn’t prevent them. It might be that a pause would target all companies near the frontier, in which case we could see a freeze at AGI for its developer, and near-AGI for competitors.
Again, mostly agreed. I think it’s possible that the development of AGI would precipitate a wider change in attitude towards it, including at other developers; maybe it would be exactly what is needed to make other firms take the risks seriously. Perhaps it’s more likely, though, that it would just provide a clear demonstration of a profitable path and spur further acceleration. Again, we see (3) as a modest multiplier.
The question of training next-generation, even-more-powerful AGIs is relevant to containment, and is therefore relevant to how long a relatively stable period running a ‘first-generation AGI’ might last. It doesn’t prevent (2) and (3). It doesn’t prevent (4) either, though presumably a next-gen AGI would further increase a company’s ability in this regard.