If the relevant behavior of the brain is computable (which seems likely to me), isn’t there then a computable algorithm that does everything that you can do, if not better? I understand if you’re objecting to overly simplistic models, but the idea that there is no one single (meta-)model that is most correct seems wrong in principle if not in present-day practice.
I’m fine with agents being better at achieving their goals than I am, whether or not computational models of the brain succeed. We can model this phenomenon in several ways: algorithms, intelligence, resource availability, conditioning pressures, and so on.
But “most correct” isn’t something I feel comfortable applying as a blanket term across all models. If we’re going to talk about the correctness (or maybe “accuracy,” “efficiency,” “utility,” or whatever) of a model, I think we should use goals as a modulus. So we’d be talking about optimal models relative to this or that goal, and a most correct model would be one that performs best relative to all goals. There isn’t currently such a model, and even if we thought we had one, it would only have been shown to be best at the goals we had applied it to. Under those circumstances there wouldn’t be much reason to think it would perform well under drastically different demands (i.e. that’s something we should be very uncertain about).
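Here is a toy sketch of the "goals as a modulus" point. The data, models, and loss functions are my own illustrative choices, not anything from the discussion above; the only thing it shows is that which model counts as "best" can flip depending on the goal you score it against.

```python
# Two toy "models" of the same data, ranked under two different goals.
data = [1.0, 2.0, 3.0, 4.0, 100.0]  # one large outlier

model_a = sum(data) / len(data)          # predicts the mean: 22.0
model_b = sorted(data)[len(data) // 2]   # predicts the median: 3.0

def squared_error(pred, xs):
    """Goal 1: minimize squared error (punishes big misses heavily)."""
    return sum((x - pred) ** 2 for x in xs)

def absolute_error(pred, xs):
    """Goal 2: minimize absolute error (shrugs off the outlier more)."""
    return sum(abs(x - pred) for x in xs)

for goal in (squared_error, absolute_error):
    a, b = goal(model_a, data), goal(model_b, data)
    print(f"{goal.__name__}: A={a:.1f}, B={b:.1f}, best={'A' if a < b else 'B'}")

# squared_error:  A=7610.0, B=9415.0 -> model A wins
# absolute_error: A=156.0,  B=101.0  -> model B wins
```

Neither model is "most correct" simpliciter; each is only optimal relative to one of the two goals.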
The computable algorithm isn’t a meta-model, though. It’s just you in a different substrate. It’s not something the agent can run to figure out what to do, because running it would necessarily take more computing power than the agent has. And there is nothing preventing such a pragmatic agent from having a computable universe-model, from contemplating finding a computable algorithm that approximates itself, and from copying that algorithm over and over.
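One intuition for why the agent can’t simply run a full copy of itself, but can run an approximation, is the regress: a full-fidelity copy must itself contain another full-fidelity copy. The sketch below is just that intuition in code (the recursion, the budget, and the function names are all hypothetical stand-ins, not an argument anyone made above).

```python
def exact_self_model(depth=0):
    """An agent predicting itself by running a full-fidelity copy of itself:
    every copy must contain another full copy, so it never bottoms out."""
    return exact_self_model(depth + 1)

def approximate_self_model(depth=0, budget=3):
    """A bounded stand-in: past a fixed budget the agent substitutes a cheap
    summary of itself instead of yet another full simulation."""
    if depth >= budget:
        return "coarse-summary-of-self"
    return approximate_self_model(depth + 1, budget)

try:
    exact_self_model()
except RecursionError:
    print("exact self-simulation exhausts the available resources")

print("approximate self-model returns:", approximate_self_model())
```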