Oh man, I spent so many years of grad school looking up these acronyms and handwriting their full names into the papers I was reading, until I memorized enough of them. The acronym soup is silly, and so is the formal paper language, which ends up obscuring the true confidence level of the observations. So much overstatement of limited evidence… Still, I like this stuff.
One thing I’d like to add is that when working with complicated interaction mechanisms like this that aren’t fully known, I find it super helpful to run computer simulations of competing hypotheses. “Too much BDNF generally spread around → too much excitatory activity → seizures” is a super obvious pattern that jumps out once you have a model you can play with, where you can turn up the BDNF dial and see what happens.
I feel like “interactive models where you can fiddle with the parameters” are an undervalued tool for a lot of situations.
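To make the “turn up the BDNF dial” idea concrete, here is a minimal sketch of that kind of fiddle-with-the-parameters model: a two-population excitatory/inhibitory rate model where a single gain parameter (standing in for BDNF's effect on excitatory coupling) is swept. Every number here — the weights, the gain values, the sigmoid nonlinearity — is invented for illustration, not fitted to any neuroscience data:

```python
import math

def simulate(bdnf_gain, steps=2000, dt=0.01):
    """Toy excitatory/inhibitory rate model.

    All parameters are hypothetical placeholders; `bdnf_gain`
    scales the excitatory-to-excitatory coupling, as a stand-in
    for the hypothesized effect of excess BDNF.
    """
    def f(x):
        # Sigmoid firing-rate nonlinearity.
        return 1.0 / (1.0 + math.exp(-x))

    E, I = 0.1, 0.1                    # excitatory / inhibitory activity
    w_ee, w_ie, w_ei = 12.0, 10.0, 8.0  # made-up coupling weights
    drive = 1.0                         # constant external input
    trace = []
    for _ in range(steps):
        dE = -E + f(bdnf_gain * w_ee * E - w_ie * I + drive)
        dI = -I + f(w_ei * E - 4.0)
        E += dt * dE
        I += dt * dI
        trace.append(E)
    # Mean excitatory activity over the second half (after transients).
    return sum(trace[steps // 2:]) / (steps // 2)

low = simulate(0.5)    # weak "BDNF dial" setting
high = simulate(1.5)   # strong "BDNF dial" setting
print(f"mean E activity: low gain={low:.3f}, high gain={high:.3f}")
```

Even in a toy like this, cranking up the gain pushes the excitatory population toward saturated, runaway activity — the “too much excitation” regime jumps out visually the moment you plot the traces for a few gain settings.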
Yes, I agree, a model can really push intuition to the next level! There is a failure mode where people just throw everything into a model and hope that the result will make sense. In my experience that just produces a mess, and you need some intuition for how to properly set up the model.
Absolutely. In fact, I think the critical impediment to machine learning being able to learn more useful things from the current amassed neuroscience knowledge is: “but which of these many complicated bits are even worth including in the model?” There’s just too much, and so much of it is noise, or so incompletely understood that our models of it end up worse than useless.