If something is both a vanguard and limited, then it seemingly can’t stay a vanguard for long. I see a few different scenarios going forward:
1. We pause AI development while LLMs are still the vanguard.
2. The data limitation is overcome with something like IDA or Debate.
3. LLMs are overtaken by another AI technology, perhaps based on RL.
In terms of relative safety, it's probably 1 > 2 > 3. Given that scenario 2 might not happen in time, might not be safe even if it does, or might still ultimately be outcompeted by something else like RL, I'm not getting very optimistic about AI safety just yet.