The way I think of “activity is modulated dynamically” is:
We’re searching through a space of generative models for the one that best fits the data and leads to the highest reward. The naive strategy would be to execute all the models and see which one wins the competition. Unfortunately, the space of all possible models is far too vast for that strategy to work. At any given time, only a subset of that vast space is accessible, and only the models in that subset can enter the competition. Which subset is accessible can be modulated by context, prior expectations (“you said this cloud is supposed to look like a dog, right?”), etc. I think (vaguely) that there are region-to-region connections within the brain that can be turned on and off, and different models require different configurations of that plumbing in order to fully express themselves. If there’s a strong enough hint that some generative model is promising, that model will flex its muscles and fully actualize itself by creating the appropriate plumbing (region-to-region communication channels), becoming properly active and able to flow down predictions.
Or something like that… :-)
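To make that less hand-wavy, here’s a deliberately dumb toy version in Python. Everything in it is a made-up stand-in: “models” are just prediction vectors, “fit” is negative distance to the data, and the context gate is a random mask over “channels”. It’s only meant to illustrate the shape of “only accessible models enter the competition”, not any claim about the brain:

```python
import numpy as np

rng = np.random.default_rng(0)

n_models, data_dim = 50, 10
data = rng.normal(size=data_dim)

# Each "model" is just a prediction vector; fit = how close it lands to the data.
models = rng.normal(size=(n_models, data_dim))

# Context gate: which region-to-region "channels" are currently switched on.
# Each model needs its own configuration of channels to express itself.
n_channels = 8
context_gate = rng.random(n_channels) < 0.5          # channels that are "on" right now
required = rng.random((n_models, n_channels)) < 0.3  # channels each model needs

# A model is accessible iff every channel it needs is currently on.
accessible = np.all(~required | context_gate, axis=1)
fit = -np.linalg.norm(models - data, axis=1)

# Inaccessible models can't even enter the competition.
scores = np.where(accessible, fit, -np.inf)
winner = int(np.argmax(scores))
print(f"{accessible.sum()} of {n_models} models competed; model {winner} won")
```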
It’s connecting this sort of “good models get themselves expressed” layer of abstraction to neurons that’s the hard part :) I think future breakthroughs in training RNNs will be a big aid to imagination.
Right now when I pattern-match what you say onto ANN architectures, I can imagine something like making an RNN from a scale-free network and trying to tune the less-connected nodes around different weightings of the more-connected nodes. But I expect that in the future, I’ll have much better building blocks for imagining.
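For concreteness, here’s a minimal sketch of that pattern-match (Python plus networkx’s Barabási–Albert generator). The 90th-percentile hub cutoff and everything else here are arbitrary choices for illustration, not a real proposal:

```python
import networkx as nx
import numpy as np

n = 200
g = nx.barabasi_albert_graph(n, m=2, seed=0)   # scale-free degree distribution

# Use the graph as the RNN's connectivity mask.
mask = nx.to_numpy_array(g)                    # (n, n) adjacency, 0/1
rng = np.random.default_rng(0)
W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n)) * mask

# Split nodes into hubs (more-connected) and periphery (less-connected);
# in a training loop, only the peripheral nodes' weights would get tuned.
degree = mask.sum(axis=1)
hubs = degree >= np.quantile(degree, 0.9)      # top ~10% most-connected nodes
trainable = ~hubs

def step(h, x_in, W=W):
    # One recurrent update; gradient updates restricted to `trainable`
    # rows of W are omitted here.
    return np.tanh(W @ h + x_in)

h = step(np.zeros(n), rng.normal(size=n))
print(f"{hubs.sum()} hub nodes frozen, {trainable.sum()} peripheral nodes tunable")
```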
In case it helps, my main aids-to-imagination right now are the sequence memory / CHMM story (see my comment here) and Dileep George’s PGM-based vision model and his related follow-up papers like this, plus miscellaneous random other stuff.