Saliency-based learning can certainly mitigate this problem. Neural-network reinforcement learners typically do something similar, e.g. predicting rewards (which is also necessary for other purposes). However, I don’t think it fully solves the problem, because it only upweights information the agent can immediately identify as related to what it is seeking, not information that may eventually turn out to be useful for what it is seeking. Of course, the latter is not really solvable in the general case.
All information currently in working memory could potentially become highly weighted when a saliency signal arrives. Through reinforcement learning, I imagine the agent could optimize whatever attention circuit loads information into working memory so as to make this more useful, as part of some sort of learning-to-learn algorithm.
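One toy way to picture this mechanism (a sketch under my own assumptions, not a claim about any particular architecture; all names here are hypothetical): a small working-memory buffer where each loaded item carries an eligibility-trace-style recency value, and an incoming saliency signal credits everything currently in the buffer in proportion to that recency.

```python
class SalienceWeightedMemory:
    """Toy model: items in a capacity-limited working memory are
    credited when a saliency signal lands, weighted by recency."""

    def __init__(self, capacity=5, decay=0.5, lr=1.0):
        self.capacity = capacity
        self.decay = decay    # per-load decay of each item's trace
        self.lr = lr          # learning rate for the credit update
        self.traces = {}      # item -> eligibility trace (recency)
        self.weights = {}     # item -> accumulated salience weight

    def load(self, item):
        # The "attention circuit": decay existing traces, then load
        # the new item at full strength.
        for k in self.traces:
            self.traces[k] *= self.decay
        self.traces[item] = 1.0
        # Enforce the working-memory capacity by evicting the item
        # with the weakest trace.
        while len(self.traces) > self.capacity:
            weakest = min(self.traces, key=self.traces.get)
            del self.traces[weakest]

    def saliency_signal(self, magnitude=1.0):
        # Everything currently in working memory gets credit,
        # scaled by how recently it was loaded.
        for item, trace in self.traces.items():
            self.weights[item] = (self.weights.get(item, 0.0)
                                  + self.lr * magnitude * trace)


wm = SalienceWeightedMemory(capacity=3, decay=0.5)
for item in ["a", "b", "c"]:
    wm.load(item)
wm.saliency_signal(1.0)
# Most recently loaded item ends up with the largest weight.
```

In this sketch the load policy is fixed, but it is exactly the part one could hand to an outer reinforcement-learning loop: meta-learn which items get loaded (and at what trace strength) so that later saliency signals land on the most useful information.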