I think this is a good description of the problem. The fact that Einstein’s brain had a similar amount of compute and data, a similar overall architecture, and a similar fundamental learning algorithm to the average human brain, yet was far more capable in certain domains, suggests that a brain-like algorithm can substantially improve in capability without big changes to any of these things.
How similar to the brain’s learning algorithm does an ML algorithm have to be before we should expect similar effects? That seems unclear to me.
I think a lot of people who try to forecast AI progress are greatly underestimating the potential impact of algorithmic development, and how much the rate of algorithmic progress could be accelerated by large-scale automated searches run by sub-AGI models like GPT-5.
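To gesture at what I mean by "large-scale automated searches," here is a deliberately toy sketch (my own illustration, not anything from the linked posts; the function names and the dummy scoring are placeholders, and a real pipeline would score candidates with actual small-scale training runs rather than a hand-written proxy):

```python
import random

def propose_variant(recipe: dict) -> dict:
    """Placeholder for a proposer model suggesting a tweak to a training recipe."""
    new = dict(recipe)
    key = random.choice(list(new))
    new[key] = new[key] * random.choice([0.5, 2.0])
    return new

def proxy_score(recipe: dict) -> float:
    """Placeholder for a cheap small-scale evaluation of a candidate recipe."""
    # Toy objective: pretend the best recipe has lr ~ 1e-3 and depth ~ 24.
    return -abs(recipe["lr"] - 1e-3) * 1e3 - abs(recipe["depth"] - 24.0) / 24.0

def automated_search(budget: int = 200) -> dict:
    """Greedy propose-evaluate-keep loop over candidate recipes."""
    best = {"lr": 1e-2, "depth": 12.0}
    best_score = proxy_score(best)
    for _ in range(budget):
        candidate = propose_variant(best)
        score = proxy_score(candidate)
        if score > best_score:  # keep improvements, discard the rest
            best, best_score = candidate, score
    return best

if __name__ == "__main__":
    print(automated_search())
```

The point isn't this particular loop, but that a proposer model plus cheap evaluations can run this kind of propose-test-keep cycle at a scale and speed human researchers can't.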
Related markets I have on Manifold:
https://manifold.markets/NathanHelmBurger/gpt5-plus-scaffolding-and-inference
https://manifold.markets/NathanHelmBurger/1hour-agi-a-system-capable-of-any-c
A related comment I made on a different post:
https://www.lesswrong.com/posts/sfWPjmfZY4Q5qFC5o/why-i-m-doing-pauseai?commentId=p2avaaRpyqXnMrvWE