Yes, I agree, a model can really push intuition to the next level! There is a failure mode where people just throw everything into a model and hope that the result will make sense. In my experience that just produces a mess, and you need some intuition for how to properly set up the model.
Absolutely. In fact, I think the critical impediment to machine learning extracting more useful things from the current mass of neuroscience knowledge is: “but which of these many complicated bits are even worth including in the model?” There’s just too much, and so much of it is noise, or so incompletely understood that our models of it end up worse than useless.