How exactly does reward relate to valenced states in humans? In general, what gives rise to pleasure and pain, in addition to (or instead of) the processing of reward signals?
These problems seem important and tractable even if working out the full computational theory of valence might not be. We can distinguish three questions:
1. What is the high-level functional role of valence? (coarse-grained functionalism)
2. What evolutionary pressures incentivized valenced experience?
3. What computational processes constitute valence? (fine-grained functionalism)
Answering #3 would be best, but it seems to me that answering #1 and #2 is far more feasible. A promising and realistic scenario might be discovering a distinction between positive and negative valence from perspectives #1 and #2, and then giving the DeepMind presentation encouraging them to avoid the coarse-grained functional structures and incentives for negative valence. From my incomplete understanding of the consciousness and valence literature, it seems to me that almost all work is contributing to answering question #1, not question #3.
One avenue in this direction might be looking into the interaction between valence and attention. It seems to me that there is an asymmetry there (or at least a canonical way of fixing a zero point). Positive valence involves attention concentration whereas negative valence involves diffusion of attention / searching for ways to end this experience. A couple of reasons why I’m optimistic about this direction: first, attention likely bears some intrinsic connection with consciousness (other coarse-grained functional correlates, such as commensurability, addiction, etc., need not); second, attention manipulation seems like it might be formalizable in a way relevant for machine learning practitioners. (I’m using attention here in the philosophy/neuro sense, not the transformer sense.)
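To gesture at what such a formalization might look like, here is a toy sketch I’m making up for illustration (not an established measure, and not a claim about how valence actually works): treat momentary attention as a probability distribution over candidate objects of awareness, and score concentration vs. diffusion by how far the distribution’s entropy sits below the uniform maximum.

```python
# Toy sketch only: model "attention" as a probability distribution over
# candidate objects of awareness, and score concentration vs. diffusion by
# normalized entropy. The measure and the example weights are hypothetical.
import numpy as np

def attention_concentration(weights):
    """Return a score in [0, 1]: 0 = maximally diffuse, 1 = all attention on one object."""
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()                            # normalize to a distribution
    entropy = -np.sum(p * np.log(p + 1e-12))   # Shannon entropy of the attention distribution
    return 1.0 - entropy / np.log(len(p))      # 1 minus (entropy / maximum possible entropy)

focused = [0.9, 0.05, 0.03, 0.02]     # attention locked onto one object (e.g. a meditation object)
searching = [0.25, 0.25, 0.25, 0.25]  # attention spread evenly across options (e.g. scanning for an exit)

print(attention_concentration(focused))    # ~0.69, relatively concentrated
print(attention_concentration(searching))  # ~0.0, maximally diffuse
```

On this toy picture, a “concentrated” state scores high and a “searching” state scores near zero; whether any measure like this actually tracks valence is exactly the open question.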
Very interesting! Thanks for your reply, and I like your distinction between questions:
Positive valence involves attention concentration whereas negative valence involves diffusion of attention / searching for ways to end this experience.
Can you elaborate on this? What do attention concentration vs. diffusion mean here? Pain seems to draw attention to itself (and to motivate action to alleviate it). On my normal understanding of “concentration”, pain involves concentration. But I think I’m just unfamiliar with how you / ‘the literature’ use these terms.
The relationship between valence and attention is not clear to me, and I don’t know of a literature which tackles this (though imperativist analyses of valence are related). Here are some scattered thoughts and questions which make me think there’s something important here to be clarified:
There’s a difference between a conscious stimulus having high saliency/intensity and being intrinsically attention-focusing. A bright light suddenly strobing in front of you is high-saliency, but you can imagine choosing to attend or not attend to it. It seems plausible to me that negative valence is like this bright light.
High-valence states in meditation are achieved via concentration of attention.
Positive valence doesn’t seem to entail wanting more of that experience (cf. the existence of non-addictive highs, etc.), whereas negative valence does seem to always entail wanting less.
That is all speculative, but I’m more confident that positive and negative valence don’t play the same role at the high-level functional level. It seems to me that this is strong (but not conclusive) evidence that they are also not symmetric at the fine-grained level.
I’d guess a first step towards clarifying all this would be to talk to some researchers on attention.