So the pattern is: increasing human optimization power steadily pushes up architecture complexity, until that trend is occasionally upset/reset by a new, simpler, more general model.
...
So what can we do? In the worst case we have near-zero control over AGI architecture or learning algorithms, which leaves only the initial objective/utility functions, compute, and the training environment/data. Compute restriction is the obvious lever, but it comes with an equally obvious direct tradeoff against capability, so there is not much edge there.
Interesting that ‘less control’ is going hand in hand with ‘simpler models’.