Excellent! As usual, I concur. I certainly haven’t seen this clear a description of how human brains (and therefore people) actually make decisions. I never bothered to write about it, partly on the excuse that clarifying brain function would accelerate progress toward AGI, and partly for lack of clear incentives to do the large amount of work it takes to write clearly. So this is now my reference for how people work. It seems pretty self-contained. (And I think progress toward AGI is now so rapid that spreading general knowledge about brain function will probably do more good than harm.)
First, I generally concur. This is mostly stuff I thought about a lot over my half-career in computational cognitive neuroscience. My conclusions are largely the same in the areas I’d thought about. I hadn’t thought as much about alternate self-concepts, except for DID (I had a friend with pretty severe DID and reached the same conclusions you have about a “personality” triggering associated memories and valences). And I had not reached this concise a definition of “willpower”, although I had the same thought about the evolutionary basis for making it difficult to attend to one thing obsessively: attending to just one thing is how you either get eaten or fail to explore valuable new thoughts.
In particular, I very much agree with your focus on the valence attached to individual thoughts. This reminded me of the book Thoughts Without a Thinker, an application of Buddhist psychology to psychoanalysis. I haven’t read it, but I’ve read and heard about it. I believe it works from much the same framework for understanding why we do things and feel the way we do about things, but I don’t remember whether it uses something like the same theory of valence guiding thought.
Now for a couple of addenda.
The first is an unfinished thread: I agree that we tend to keep positively-valenced thoughts while discarding negatively-valenced ones, and that this leads to productive brainstorming—toward (vaguely) predicted rewards. But many people do a lot of thinking about negative outcomes, too. I am both constitutionally and deliberately against thinking too much about negative things, but a lot of the current human race spends a lot of time worrying—which I think probably has the same brainstorming dynamic and shares mechanisms with positively oriented brainstorming. I don’t know how to explain this; I think treating the avoidance of a bad outcome as itself a good outcome could do this work, but that’s not how worrying feels—it feels like my thoughts are drawn toward potential bad outcomes even when I have no idea how to avoid them yet.
I don’t have an answer. I have wondered if this is related to the subpopulation of dopamine cells that fire in response to punishments and predicted punishments, but they don’t seem to project to the same areas controlling attention in the PFC (if I recall correctly, which I may not). Anyway, that’s my biggest missing piece in this puzzle.
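To make the dynamic I’m pointing at concrete, here is a toy sketch (purely illustrative; the thought strings, valence numbers, and association table are all made up):

```python
# Toy sketch of valence-gated brainstorming (illustration only; all data made up).
# Thoughts with positive learned valence are kept and elaborated further;
# negatively-valenced thoughts are dropped.

VALENCE = {
    "plan the trip": 0.6, "book flights": 0.7, "it might rain": -0.3,
    "pack rain gear": 0.5, "flights are expensive": -0.4, "use miles": 0.8,
}
ASSOCIATIONS = {
    "plan the trip": ["book flights", "it might rain"],
    "book flights": ["flights are expensive", "use miles"],
    "it might rain": ["pack rain gear"],
}

def brainstorm(seed, steps=10, threshold=0.0):
    """Keep elaborating positively-valenced thoughts; discard negative ones."""
    kept, frontier = [seed], [seed]
    for _ in range(steps):
        if not frontier:
            break
        current = frontier.pop(0)
        for nxt in ASSOCIATIONS.get(current, []):
            if VALENCE.get(nxt, 0.0) > threshold:  # valence gate
                kept.append(nxt)
                frontier.append(nxt)
    return max(kept, key=lambda t: VALENCE.get(t, 0.0))

print(brainstorm("plan the trip"))  # -> "use miles"
```

Note that in this toy version the negatively-valenced branch (“it might rain”) never gets elaborated, so “pack rain gear” is never found. That is exactly the sense in which a purely positive valence gate seems to leave worrying, and useful thinking about bad outcomes, unexplained.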
Now on to “free will”, or whatever it is that makes people think that term sounds important. I think it’s a terrible, incoherent term that points at several different important questions about how to properly understand the human self, and whether we bear responsibility and control our own futures.
I think different people have different intuitive models of their selves. I don’t know which are most common. Some people identify with the sum total of their thoughts, and I think they wind up less happy as a result; they assume that their actions reflect their “real” selves, for instance identifying with their depression or anger. I think the notion that “science doesn’t believe in free will” can contribute to that unhelpful identification. This is not what you’re saying, so I’m only addressing phrasing; but I think one pretty common (wrong) conclusion is “science says my thoughts and efforts don’t matter, because my genes and environment determine my outcomes.” You are saying that thoughts and “efforts” (properly understood) are exactly what determine actions and therefore outcomes. I very much agree.
I frame the free will thing a little differently. To borrow from my last comment on free will [then edit it a bit]:
I don’t know what people mean by “free will” and I don’t think they usually do either. [...]
I substitute the term “self-determination” for “free will”, in hopes that that term captures more of what people tend to actually care about in this topic: do I control my own future? Framed this way, I think the answer is more interesting: it’s “sort of” and “sometimes”, rather than a simple yes or no.
I think someone who’s really concerned that “free will isn’t real” would say: sure, [their thoughts] help determine outcomes, but the contents of my consciousness were themselves determined by previous processes. I didn’t pick them. My conscious awareness is just an observer: it may cause the future, but it doesn’t choose; it just predicts outcomes.
[...]
So here I think it’s important to break it down further, and ask how someone would want their choices to work in an ideal world (this move is essentially borrowed from Daniel Dennett’s “all the varieties of free will worth wanting”).
I think the most that people would ask for is to have their decisions, and therefore their outcomes, controlled by their beliefs, their knowledge, their values, and, importantly, their efforts at making decisions.
I think these are all perfectly valid labels for important aspects of cognition (with lots of overlap among knowledge, beliefs, and values). Effort at making a decision also plays a huge role, and I think that’s a central concern—it seems like I’m working so hard at my decisions, but is that an illusion? I think what we perceive as effort involves more of the conscious predictions you describe [...] It also involves more varied types of multi-step cognition, like analyzing progress so far and choosing new strategies or intermediate conclusions for complex decisions.
(incidentally I did a whole bunch of work on exactly how the brain does [that] process of conscious predictions to choose outcomes. That’s best written up in Neural mechanisms of human decision-making, but that’s still barely worth reading because it’s so neuroscience-specialist-oriented [and sort of a bad compromise among co-authors]).
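To gesture at what I mean by effortful, multi-step decision-making, here is a bare-bones caricature (my own toy illustration, not the model from that paper; the actions, states, and value numbers are all made up). “Effort” here is crudely stood in for by how many steps ahead you bother to simulate:

```python
# Bare-bones caricature of deliberation as conscious prediction
# (toy illustration only; not the model from the paper mentioned above).

CANDIDATE_ACTIONS = ["take the job", "stay put"]

def predict_next(state, action):
    """Stand-in for a learned world model: predict the next situation."""
    return state + (action,)

def evaluate(state):
    """Stand-in for a learned value estimate of a predicted situation."""
    made_up_values = {
        ("now", "take the job"): 0.2,
        ("now", "stay put"): 0.4,
        ("now", "take the job", "take the job"): 0.8,
        ("now", "take the job", "stay put"): 0.5,
        ("now", "stay put", "take the job"): 0.3,
        ("now", "stay put", "stay put"): 0.2,
    }
    return made_up_values.get(state, 0.0)

def deliberate(state, horizon):
    """Simulate each candidate action `horizon` steps ahead; pick the best."""
    def rollout(s, a, depth):
        s = predict_next(s, a)
        if depth == 1:
            return evaluate(s)
        return max(rollout(s, a2, depth - 1) for a2 in CANDIDATE_ACTIONS)
    return max(CANDIDATE_ACTIONS, key=lambda a: rollout(state, a, horizon))

print(deliberate(("now",), horizon=1))  # low effort: "stay put" looks best
print(deliberate(("now",), horizon=2))  # more simulation: "take the job" wins
```

The point is just that doing more of this simulation, tracking how it is going, and switching strategies partway through is part of what “effort” picks out, and it genuinely changes which action gets chosen.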
So my response to people being bothered by being “just” the “continuation of deterministic forces via genetics and experience” is that those forces are condensed into beliefs, values, knowledge, and skills, and the effort with which those are applied is what determines outcomes, and therefore your results and your future. [That’s pretty much exactly the type of free will worth wanting.]
This leaves intact some concerns about forces you’re not conscious of playing a role. Did I decide to do this because it’s the best decision, or because an advertiser or a friend put an association or belief in my head in a way I wouldn’t endorse on reflection? [Or was it some strong valence association I acquired in ways I’ve never explicitly endorsed?] I think those are valid concerns.
So my answer to “am I really in control of my behavior?” is: sometimes, in some ways—and the exceptions are worth figuring out, so we can have more self-determination in the future.
Anyway, that’s some of my $.02 on “free will” in the sense of self-determination. Excellent post!
Thanks!
FWIW my answer is “involuntary attention” as discussed in Valence §3.3.5 (it also came up in §6.5.2.1 of this series).
If I look at my shoe and (voluntarily) pay attention to it, my subsequent thoughts are constrained to be somehow “about” my shoe. This constraint isn’t fully constraining—I might be putting my shoe into different contexts, or thinking about my shoe while humming a song to myself, etc.
By analogy, if I’m anxious, then my subsequent thoughts are (involuntarily) constrained to be somehow “about” the interoceptive feeling of anxiety. Again, this constraint isn’t fully constraining—I might be putting the feeling of anxiety into the context of how everyone hates me, or into the context of how my health is going downhill, or whatever else, and I could be doing both those things while simultaneously zipping up my coat and humming a song, etc.
Anxiety is just one example; I think there’s likewise involuntary attention associated with feeling itchy, feeling in pain, angry, etc.
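As a toy gloss on “constrained but not fully constraining” (my illustration only, with a made-up tag table; not necessarily the intended mechanism): involuntary attention restricts which candidate thoughts are even eligible, while leaving plenty of freedom within that restriction.

```python
# Toy gloss on involuntary attention as a topic constraint (illustration only).
# Candidate next thoughts must be "about" the attended signal (here, anxiety),
# but many different thoughts satisfy that constraint.

TOPIC_TAGS = {
    "everyone hates me": {"anxiety", "social"},
    "my health is going downhill": {"anxiety", "health"},
    "zip up my coat": {"cold"},
    "hum a song": {"music"},
}

def eligible_thoughts(attended_signal):
    """Return candidate thoughts that are 'about' the attended signal."""
    return [t for t, tags in TOPIC_TAGS.items() if attended_signal in tags]

print(eligible_thoughts("anxiety"))
# -> ['everyone hates me', 'my health is going downhill']
# Constrained to be about anxiety, but not fully determined: either thought
# can follow, alongside unrelated habits like zipping a coat or humming.
```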
Interesting! I think that works.
You can still use the same positively-oriented brainstorming process for figuring out how to avoid bad outcomes. As soon as there’s even a vague idea of avoiding a very bad outcome, that becomes a very good reward prediction after taking the differential. The dopamine system does calculate such differentials, and it seems like the valence system, while probably different from direct reward prediction and more conceptual, should and could also take differentials in useful ways. Valence needs to be at least somewhat dependent on context. I don’t think this requires unique mechanisms (although it might have them); it’s sufficient to learn variants of concepts like “avoiding a really bad event” and then attach valence to that concept variant.
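To spell out the “taking the differential” step, this is just the standard reward-prediction-error form from reinforcement learning (textbook material, not anything specific to this series):

$$\delta_t = r_t + \gamma\,\hat{V}(s_{t+1}) - \hat{V}(s_t)$$

If the current situation predicts something very bad, say $\hat{V}(s_t) = -10$, and the new idea (“here’s how to avoid it”) makes the predicted future merely neutral, $\hat{V}(s_{t+1}) \approx 0$, with $r_t = 0$ and $\gamma \approx 1$, then $\delta_t \approx +10$. The avoidance idea registers as strongly positive even though nothing about the situation is good in absolute terms, which is the sense in which valence taken as a differential could support thinking hard about bad outcomes.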
If we were not able to think about potentially bad outcomes, that would be a problem, since thinking clearly about them is what (hopefully) lets us avoid them. But the question is a good one. My first intuition was that maybe the importance of an outcome, in both directions, good and bad, is what matters.