All information currently in working memory could potentially become highly weighted when a saliency signal comes along. Through reinforcement learning, I imagine the agent could optimize the attention circuit that loads information into working memory so that this weighting becomes more useful, as part of some sort of learning-to-learn algorithm.
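As a rough sketch of this idea (a toy illustration only, not a model of actual neural circuitry; the class and method names here are invented):

```python
class WorkingMemory:
    """Toy model: a small buffer whose contents are all boosted at once
    when a global saliency signal arrives (a hypothetical mechanism)."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.buffer = []   # items currently held in working memory
        self.weights = {}  # long-term weight accumulated per item

    def load(self, item):
        # The "attention circuit": loads an item, evicting the oldest if full.
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
        self.buffer.append(item)
        self.weights.setdefault(item, 0.0)

    def saliency_signal(self, strength=1.0):
        # Everything held in working memory gets credited at once,
        # regardless of whether it actually caused the salient event.
        for item in self.buffer:
            self.weights[item] += strength


wm = WorkingMemory(capacity=3)
for item in ["a", "b", "c", "d"]:  # "a" is evicted before the signal fires
    wm.load(item)
wm.saliency_signal(strength=2.0)
print(wm.weights)  # only items still in the buffer receive weight 2.0
```

A learning-to-learn step would then amount to tuning the `load` policy itself (what gets admitted, what gets evicted) so that the items present when saliency signals fire tend to be the ones actually worth weighting.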