The short version is that I'm pretty optimistic at the moment about which path to capabilities greedy incentives are going to push us down, and I strongly suspect that the scariest possible architectures/techniques are actually repulsive to the optimizer-that-the-AI-industry-is.
To unpack the generators of this view: inductive biases turned out to matter little, which lets you avoid having to do simulated evolution (where I think a lot of the danger lies); sparse RL generally doesn't work very well at low compute; and early AI needed a surprising amount of structure/world models. Together, these allow you to somewhat safely automate research.