Cute terminology I came up with while talking about this recently: ‘zoom-foom gap’
I like to call the accelerating-change period, where AI helps accelerate further AI but only by working with humans at a less-than-human-contribution level, the ‘zoom’, in contrast to the super-exponential artificial-intelligence-independently-improving-itself ‘foom’. Thus, the period we are currently in is the ‘zoom’ period, and the oh-shit-we’re-screwed-if-we-don’t-have-AI-alignment period is the foom period. I call the future critical juncture wherein we realize we could initiate a foom with then-present technology, but restrain ourselves because we know we haven’t yet nailed AI alignment, the ‘zoom-foom gap’. This gap could be as little as seconds, while a usually-overconfident capabilities engineer pauses just for a moment with their finger over the enter key, or as long as a couple of years, while the new model repeatedly fails the safety evaluations in its secure box despite repeated attempts to align it and thus wisely doesn’t get released. ‘Extending the zoom-foom gap’ is thus a key point of my argument for why we should build a model-architecture-agnostic secure evaluation box.
TLDR: I like having a reason to use the term ‘zoom-foom gap’.