I’m fine with agents being better at achieving their goals than I am, whether or not computational models of the brain succeed. We can model this phenomenon in several ways: algorithms, intelligence, resource availability, conditioning pressures, and so on.
But “most correct” isn’t something I feel comfortable applying as a blanket term across all models. If we’re going to talk about the correctness (or maybe “accuracy,” “efficiency,” “utility,” or whatever) of a model, I think we should use goals as a modulus. So we’d be talking about models that are optimal relative to this or that goal, and a “most correct” model would be one that performs best relative to all goals. No such model currently exists, and even if we thought we had one, it would only be best at the goals we had actually tested it against. Under those circumstances there wouldn’t be much reason to think it would perform well under drastically different demands (i.e., that’s something we should be very uncertain about).
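To make that concrete, here’s a minimal toy sketch (the models and “goals” are purely illustrative, nothing from a real system): two models are each scored against two different goals, and each comes out “best” only relative to one of them, so asking which is “most correct” full stop has no answer.

```python
# Toy illustration: "best" is only defined relative to a goal.

def model_a(x):
    return 2 * x          # tracks a linear target well

def model_b(x):
    return x * x          # tracks a quadratic target well

def goal_linear(model):
    # score = negative total error against a linear target
    return -sum(abs(model(x) - 2 * x) for x in range(10))

def goal_quadratic(model):
    # score = negative total error against a quadratic target
    return -sum(abs(model(x) - x * x) for x in range(10))

models = {"A": model_a, "B": model_b}
goals = {"linear": goal_linear, "quadratic": goal_quadratic}

for goal_name, goal in goals.items():
    best = max(models, key=lambda name: goal(models[name]))
    print(f"best model for goal '{goal_name}': {best}")

# A wins on the linear goal, B wins on the quadratic one,
# so there is no single "most correct" model across both goals.
```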