Looks like the hardest part in this model is how to "choose robustly generalizable subproblems and find robustly generalizable solutions to them", right?
How does one do that in any systematic way? What are some examples from your own research experience where this worked well, or at all?
I once wrote a post claiming that human learning is not computationally efficient: https://www.lesswrong.com/posts/kcKZoSvyK5tks8nxA/learning-is-asymptotically-computationally-inefficient
It looks like the last three years of AI progress suggest that learning is sub-linear in resource use, though probably not logarithmic, as I claimed for humans. The scaling benchmarks seem to show something like capability increase ~ 4th root of model size: https://epoch.ai/data/ai-benchmarking-dashboard
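To make the comparison concrete (taking the quarter-power relationship at face value rather than as a careful fit to the dashboard data): if capability $C$ scales with resources $R$ as

$$C \propto R^{1/4},$$

then a $10{,}000\times$ increase in model size buys roughly a $10\times$ capability gain, whereas under the logarithmic scaling I claimed for humans, $C \propto \log R$, each additional order of magnitude of resources adds only a constant increment.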