The above has been my main takeaway from learning about how human cognition works (I'm still learning, but it seems to me that further learning would only deepen this insight rather than overturn it).
We're actually an ensemble of many narrow systems. Some are inherited because they were highly useful in our evolutionary history.
But many are dynamically generated and regenerated: the brain can rewire itself, creating and modifying its own neural circuitry.
We constantly self-modify our cognitive architectures, just without any conscious control over the process. Perhaps only the meta-machinery for coordinating and generating the object-level machinery remains fixed?
This changes a lot about what I think is possible for intelligence, and about what "strongly superhuman intelligence" looks like. To illustrate how this matters, consider two scenarios:
A. There are universal, non-composite algorithms for predicting stimuli in the real world. Becoming better at prediction transfers across all domains.
B. There are narrow algorithms that are good at predicting stimuli in distinct domains. Becoming a good predictor in one domain doesn't easily transfer to other domains.
Human intelligence being an ensemble suggests we live in a world that looks more like B than like A.
Predicting diverse stimuli involves composing many narrow algorithms, and specialising a neural circuit for predicting stimuli in one domain doesn't easily transfer to predicting in new domains.
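A minimal sketch of the scenario-B picture, under my own toy assumptions (the class names and the lookup-table "predictors" are purely illustrative, not a model of actual neural circuitry): narrow predictors that only work in their own domain, plus a meta layer that coordinates them and spawns a new one when a new domain appears.

```python
class NarrowPredictor:
    """A domain-specific predictor: just a memory of patterns seen in one domain."""

    def __init__(self, domain):
        self.domain = domain
        self.memory = {}

    def learn(self, stimulus, outcome):
        self.memory[stimulus] = outcome

    def predict(self, stimulus):
        # No transfer: a stimulus this circuit never saw yields nothing.
        return self.memory.get(stimulus)


class EnsembleMind:
    """Meta-machinery: routes stimuli to narrow predictors, generating new ones on demand."""

    def __init__(self):
        self.predictors = {}

    def _circuit(self, domain):
        # "Dynamically generated" circuitry: create a narrow predictor per domain.
        if domain not in self.predictors:
            self.predictors[domain] = NarrowPredictor(domain)
        return self.predictors[domain]

    def learn(self, domain, stimulus, outcome):
        self._circuit(domain).learn(stimulus, outcome)

    def predict(self, domain, stimulus):
        return self._circuit(domain).predict(stimulus)


mind = EnsembleMind()
mind.learn("vision", "red sky at night", "clear morning")
mind.learn("social", "furrowed brow", "disagreement")

# Competence within a domain works...
assert mind.predict("vision", "red sky at night") == "clear morning"
# ...but the vision circuit knows nothing about social stimuli: no transfer.
assert mind.predict("vision", "furrowed brow") is None
```

In the scenario-A world, by contrast, there would be one predictor whose improvements showed up everywhere at once; here, getting better at "vision" leaves "social" untouched, which is the sense in which becoming a good predictor in one domain doesn't transfer.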