I wanted to talk a bit more about what biology may or may not tell us about the ease of AGI.
This OB post discusses how much brain hardware differences matter for intelligence. One of the papers it mentions states:
It remains open whether humans have truly unique cognitive properties. Experts recognize aspects of imitation, theory of mind, grammatical–syntactical language and consciousness in non-human primates and other large-brained mammals. This would mean that the outstanding intelligence of humans results not so much from qualitative differences, but from a combination and improvement of these abilities.
It seems plausible to me that the key software innovations for general intelligence appeared long before the evolution of humans, and humans mainly put a record-breaking number of densely packed neurons behind them. Speaking extremely speculatively, it might be that the algorithms used in human cognition get additional layers of abstraction capability (in some form or another) from additional brain hardware. This has interesting implications for throwing more hardware behind a working AGI if the AGI’s algorithms share this characteristic.
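To make the "more brain hardware, more layers of abstraction" intuition a bit more concrete, here is a minimal toy sketch. It is my own illustration, not anything from the post or the paper, and it assumes purely for the sake of argument that the "algorithm" is a fixed-width stack of dense layers and that extra hardware translates directly into a larger parameter budget:

```python
# Toy model (illustrative assumption, not a claim about brains or any real AGI):
# if the architecture is a fixed-width stack of dense layers, a larger parameter
# budget buys more layers, which is one crude reading of "more hardware gives
# more layers of abstraction".

def affordable_depth(param_budget: int, width: int) -> int:
    """Number of width-by-width dense layers that fit within param_budget parameters."""
    params_per_layer = width * width + width  # weights plus biases for one layer
    return param_budget // params_per_layer

# Scaling the budget by an order of magnitude yields roughly an order of
# magnitude more depth at fixed width in this toy picture.
for budget in (10**6, 10**7, 10**8):
    print(budget, affordable_depth(budget, width=512))
```

In this toy picture, a tenfold hardware budget buys roughly tenfold depth at fixed width; whether anything like that holds for actual brains, or for the algorithms of a working AGI, is exactly the speculation above.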