Talking like there’s this one simple “predictive algorithm” that we can read out of the brain using neuroscience and overpower to produce better plans… doesn’t seem quite congruous with what humanity actually does to produce its predictions and plans.
Ok. I have the benefit of the intervening years, but talking about “one simple ‘predictive algorithm’” sounds fine to me.
It seems that, in humans, there's probably basically one cortical algorithm, which does some kind of metalearning. And yes, in practice, doing anything complicated involves learning a bunch of more specific mental procedures (for instance, learning to do decomposition and Fermi estimates, instead of just doing a gut check, when estimating large numbers), what Paul calls "the machine" in this post. But so what?
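To make the "decomposition and Fermi estimates" example concrete, here is a minimal sketch (my own illustration, not from the post) of the classic piano-tuners Fermi problem. Every input below is an assumed round number; the point is just that the learned procedure, multiplying a few individually guessable factors, tends to land within an order of magnitude where a raw gut check often doesn't.

```python
# Toy Fermi estimate by decomposition: roughly how many piano tuners
# work in Chicago? All factors are illustrative round-number guesses.

def fermi_piano_tuners_chicago() -> float:
    """Decompose the question into factors we can each guess to ~10x."""
    population = 3_000_000            # people in Chicago (rough)
    people_per_household = 2          # average household size
    piano_ownership_rate = 1 / 20     # fraction of households with a piano
    tunings_per_piano_per_year = 1    # a piano gets tuned about once a year
    tunings_per_tuner_per_year = 2 * 5 * 50  # 2/day, 5 days/week, 50 weeks

    households = population / people_per_household
    pianos = households * piano_ownership_rate
    tunings_demanded = pianos * tunings_per_piano_per_year
    return tunings_demanded / tunings_per_tuner_per_year

print(f"~{fermi_piano_tuners_chicago():.0f} piano tuners")  # ~150
```

Each factor is wrong on its own, but the errors are roughly uncorrelated and partly cancel in the product, which is why the decomposed procedure beats the gut check.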
Is the concern that we just don't understand what kind of optimization is happening in "the machine"? Is the thought that that kind of search is likely to discover how to break out of the box, because it will find clever tricks like "capture all of the computing power in the world"?