There is a fascinating, not yet really explored territory between GWT and predictive processing. For an example of how it may look: there is a paper on “Dynamic interactions between top-down expectations and conscious awareness” from 2018, where they do experiments in the attentional-blink style combined with prediction, and discover, for example:

The first question that we addressed was how prior information about the identity of an upcoming stimulus influences the likelihood of that stimulus entering conscious awareness. Using a novel attentional blink paradigm in which the identity of T1 cued the likelihood of the identity of T2, we showed that stimuli that confirm our expectation have a higher likelihood of gaining access to conscious awareness.
or
Second, nonconscious violations of conscious expectations are registered in the human brain. Third, however, expectations need to be implemented consciously to subsequently modulate conscious access. These results suggest a differential role of conscious awareness in the hierarchy of predictive processing, in which the active implementation of top-down expectations requires conscious awareness, whereas a conscious expectation and a nonconscious stimulus can interact to generate prediction errors. How these nonconscious prediction errors are used for updating future behavior and shaping trial-by-trial learning is a matter for future experimentation.
My rough takeaway is this: while on the surface it may seem that the effect of unconscious processing on decision-making is relatively weak, unconscious processing is responsible for what even gets into conscious awareness. In the FBI metaphor, there is a lot of power in the FBI’s ability to shape what even gets on the agenda.
The most obvious example of this kind of thing is the “flash of insight” that we all experience from time to time, where a complex, multi-part solution to a problem intrudes on our awareness as if from nowhere. This seems to be a clear case of the unconscious working on this problem in the background, identifying its solution as a valid one still in the background, and injecting the fully-formed idea into awareness with high salience.
It’s a bit like the phenomenon of being able to pick out your own name from a babble of crowded conversation, except applied to the unconscious activity of the mind. This, however, implies that much complex inter-agent communication and abstract problem solving is happening subconsciously. And this seems to contradict the view that only very simple conceptual packages are passed through to the Global Workspace, and that we must necessarily be conscious of our own abstract problem solving.
My own perceptions during meditation (and during normal life) would suggest that the subconscious/unconscious is doing very complex and abstract “thinking” without my being aware of its workings, and intermittently injecting bits and pieces of its ruminations into awareness based on something like an expectation that the gestalt self might want to act on that information.
This seems contrary to the view that “what we are aware/conscious of” is isomorphic to “the Global Workspace”. It seems that subconscious modules are chattering away amongst themselves almost constantly, using channels that are either inaccessible to consciousness or severely muted.
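To make the picture I’m gesturing at concrete, here’s a toy sketch (all the class names and channel mechanics are my own invention, not anything from Dehaene or GWT proper): modules can either broadcast into a global workspace, whose contents stand in for “what we are conscious of”, or message each other over direct channels that never touch the workspace.

```python
# Toy model, purely illustrative: modules can either broadcast into a shared
# global workspace, or message each other over direct channels that bypass
# the workspace entirely.

class Workspace:
    """Stand-in for the Global Workspace: its contents = 'what we are conscious of'."""

    def __init__(self):
        self.contents = []

    def broadcast(self, sender, message):
        # On the strict-isomorphism view, anything appended here is experienced.
        self.contents.append((sender, message))


class Module:
    def __init__(self, name, workspace):
        self.name = name
        self.workspace = workspace
        self.inbox = []

    def broadcast(self, message):
        self.workspace.broadcast(self.name, message)  # the "conscious" route

    def whisper(self, other, message):
        other.inbox.append((self.name, message))  # the "unconscious" route


ws = Workspace()
solver = Module("problem-solver", ws)
critic = Module("critic", ws)

# The modules chatter directly for a while; none of this appears in the workspace...
solver.whisper(critic, "candidate solution")
critic.whisper(solver, "checks out")

# ...and only the finished product is broadcast, i.e. becomes conscious.
solver.broadcast("flash of insight: fully-formed solution")

print(ws.contents)  # only the final broadcast, not the background chatter
```

On the strict-isomorphism reading, only the broadcast line would ever be experienced; the whispers are the inaccessible or severely muted channels I mean.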
Dehaene discusses the “flash of insight” example a bit in the section on unconscious processing. I think the general consensus there is that although solutions can be processed unconsciously, this only works after you’ve spent some time thinking about them consciously first. It might be something like: you get an initial understanding during the first conscious information-sharing. Then when the relevant brain systems have received the information they need to process, they can continue crunching the data unconsciously until they have something to present to the rest of the system.
[The mathematician] Hadamard deconstructed the process of mathematical discovery into four successive stages: initiation, incubation, illumination, and verification. Initiation covers all the preparatory work, the deliberate conscious exploration of a problem. This frontal attack, unfortunately, often remains fruitless—but all may not be lost, for it launches the unconscious mind on a quest. The incubation phase—an invisible brewing period during which the mind remains vaguely preoccupied with the problem but shows no conscious sign of working hard on it—can start. Incubation would remain undetected, were it not for its effects. Suddenly, after a good night’s sleep or a relaxing walk, illumination occurs: the solution appears in all its glory and invades the mathematician’s conscious mind. More often than not, it is correct. However, a slow and effortful process of conscious verification is nevertheless required to nail all the details down. [...]
… an experiment by Ap Dijksterhuis comes closer to Hadamard’s taxonomy and suggests that genuine problem solving may indeed benefit from an unconscious incubation period. The Dutch psychologist presented students with a problem in which they were to choose from among four brands of cars, which differed by up to twelve features. The participants read the problem, then half of them were allowed to consciously think about what their choice would be for four minutes; the other half were distracted for the same amount of time (by solving anagrams). Finally, both groups made their choice. Surprisingly, the distracted group picked the best car much more often than the conscious-deliberation group (60 percent versus 22 percent, a remarkably large effect given that choosing at random would result in 25 percent success). The work was replicated in several real-life situations, such as shopping at IKEA: several weeks after a trip there, shoppers who reported putting a lot of conscious effort into their decision were less satisfied with their purchases than the buyers who chose impulsively, without much conscious reflection.
Although this experiment does not quite meet the stringent criteria for a fully unconscious experience (because distraction does not fully ensure that the subjects never thought about the problem), it is very suggestive: some aspects of problem solving are better dealt with at the fringes of unconsciousness rather than with a full-blown conscious effort. We are not entirely wrong when we think that sleeping on a problem or letting our mind wander in the shower can produce brilliant insights.
I’m not sure why you say that the unconscious modules communicating with each other would necessarily contradict the idea of us being conscious of exactly the stuff that’s in the workspace, but I tend to agree that considering the contents of our consciousness and the contents of the workspace to be strictly isomorphic seems to be too strong. I didn’t go into that because this post was quite long already. But my own experience is that something like Focusing or IFS tends to create things such as weird visualizations that make you go “WTF was that”—and afterwards it feels like something has definitely shifted on an emotional level. Getting various emotional issues into consciousness feels like it brings them into a focus in a way that lets the system re-process them and may e.g. purge old traumas which are no longer relevant—but the parts of the process that are experienced consciously are clearly just the tip of the iceberg, with most of the stuff happening “under the hood”.
This paper also argues something that feels related: Dehaene notes that when we see a chair, we don’t just see the raw sensory data, but rather some sensory data and the concept of a chair, suggesting that the concept of a chair is in the GNW. But what is “a concept of a chair”? The paper argues that according to Dehaene, we have something like the concept of a chair in our consciousness / the GNW, but that this is a problem for Dehaene’s theory, because we are never actually aware of an entire concept. Concepts generalize over a broader category, but we are only ever aware of individual instances of that category.
The primary function of concepts [...] is to abstract away [...] so that certain aspects of experiences can be regarded as instances of more wide-ranging, similarity-based categories. In fact, according to the conservative view, it is precisely because concepts always transcend the experiences they apply to that they always remain unconscious. Both Prinz (2012) and Jackendoff (2012) underscore this point:
When I look at a chair, try as I may, I only see a specific chair oriented in a particular way. … it’s not clear what it would mean to say that one visually experiences chairness. What kind of experience would that be? A chair seen from no vantage point? A chair from multiple vantage points overlapping? A shape possessed by all chairs? Phenomenologically, these options seem extremely implausible. (Prinz, 2012, p. 74)
Now the interesting thing is that everything you perceive is a particular individual (a token)—you can’t perceive categories (types). And you can only imagine particular individuals—you can’t imagine categories. If you try to imagine a type, say forks in general, your image is still a particular fork, a particular token. (Jackendoff, 2012, p. 130)
[...]
Dehaene (2014, p. 110) expands on the notion that consciousness is like a summary of relevant information by stating that it includes “a multisensory, viewer-invariant, and durable synthesis of the environment.” But neither visual awareness nor any other form of experience contains viewer-invariant representations; on the contrary, possessing a first-person perspective—one that, for sighted people, is typically anchored behind the eyes—is often taken to be a fundamental requirement of bodily self-consciousness (Blanke and Metzinger, 2009). This is quite pertinent to the main topic of this article because, according to the conservative view, one of the reasons why concepts cannot reach awareness is because they always generalize over particular perspectives. This key insight is nicely captured by Prinz (2012, p. 74) in the passage quoted earlier, where he makes what is essentially the following argument: the concept of a chair is viewer-invariant, which is to say that it covers all possible vantage points; however, it is impossible to see or imagine a chair “from no vantage point” or “from multiple vantage points overlapping”; therefore, it is impossible to directly experience the concept of a chair, that is, “chairness” in the most general sense.
In another part of his book, Dehaene (2014, pp. 177–78) uses the example of Leonardo da Vinci’s Mona Lisa to illustrate his idea that a conscious state is underpinned by millions of widely distributed neurons that represent different facets of the experience and that are functionally integrated through bidirectional, rapidly reverberating signals. Most importantly for present purposes, he claims that when we look at the classic painting, our global workspace of awareness includes not just its visual properties (e.g., the hands, eyes, and “Cheshire cat smile”), but also “fragments of meaning,” “a connection to our memories of Leonardo’s genius,” and “a single coherent interpretation,” which he characterizes as “a seductive Italian woman.” This part of the book clearly reveals Dehaene’s endorsement of the liberal view that concepts are among the kinds of information that can reach consciousness. The problem, however, is that he does not explicitly defend this position against the opposite conservative view, which denies that we can directly experience complex semantic structures like the one expressed by the phrase “a seductive Italian woman.” The meaning of the word seductive, for instance, is highly abstract, since it applies not only to the nature of Mona Lisa’s smile, but also to countless other visual and non-visual stimuli that satisfy the conceptual criteria of, to quote from Webster’s dictionary, “having tempting qualities.” On the one hand, it is reasonable to suppose that there is something it is inimitably like, phenomenologically speaking, to perceive particular instances of seductive stimuli, such as Mona Lisa’s smile. But on the other hand, it is extremely hard to imagine how anyone could directly experience seductiveness in some sort of general, all-encompassing sense.
Which is an interesting point, in that on the other hand, before I read that, it felt clear to me that if I e.g. look at my laptop, I see “my laptop”… but now that I’ve read this and introspect on my experience of seeing my laptop, there’s nothing that would make my mind spontaneously go “that’s my laptop”; rather, the name of the object is something that’s available to me if I explicitly query for it, but it’s not around otherwise.
Which would seem to contradict (one reading of) Dehaene’s model—mainly the claim that when we see a laptop, the general concept of the laptop is somehow being passed around in the workspace in its entirety. My best guess so far would be to say that what gets passed around in our consciousness is something like a “pointer” (in a loose metaphoric sense, not in the sense of a literal computer science pointer) to a general concept, which different brain systems can then retrieve and synchronize around in the background. And they might be doing all kinds of not-consciously-experienced joint processing of that concept that’s being pointed to, either on a level of workspace communication that isn’t consciously available, or through some other communication channel entirely.
There’s also been some work going under the name of the heterogeneity hypothesis of concepts, suggesting that the brain doesn’t have any such thing as “the concept of a chair” or “the concept of a laptop”. Rather, there are many different brain systems that store information in different formats for different purposes, and while many of them might have data structures pointing to the same real-life thing, those structures are all quite different, mutually incompatible, and describe different aspects of the thing. So maybe there isn’t a single “laptop” concept being passed around, but rather just some symbol which tells each subsystem to retrieve their own equivalent of “laptop” and do… something… with it.
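To make the pointer-and-retrieval idea concrete, here’s a minimal sketch, with every name and data structure invented purely for illustration: the workspace passes around only a lightweight tag, and each subsystem resolves that tag into its own private, mutually incompatible representation.

```python
# Illustrative sketch only (all names and structures invented): the workspace
# passes around a lightweight tag, and each subsystem resolves that tag into
# its own private, mutually incompatible representation of the same thing.

workspace_token = "laptop"  # the "pointer" that is actually conscious

# Each subsystem keeps its own format; none of these is "the" concept.
visual_memory = {"laptop": {"shape": "flat hinged box", "screen": "glowing"}}
motor_affordances = {"laptop": ["open lid", "type on it", "carry under arm"]}
lexicon = {"laptop": {"word": "laptop", "plural": "laptops"}}

def resolve(subsystem, token):
    # Each subsystem retrieves its *own* equivalent of the token, if any;
    # the joint processing then happens locally, outside the workspace.
    return subsystem.get(token)

for name, subsystem in [("visual", visual_memory),
                        ("motor", motor_affordances),
                        ("language", lexicon)]:
    print(name, "->", resolve(subsystem, workspace_token))
```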
I dunno, I’m just speculating wildly. :)

The freedom to speculate wildly is what makes this topic so fun.
My mental model would say, you have a particular pattern recognition module that classifies objects as “chair”, along with a weight of how well the current instance matches the category. An object can be a prototypical perfect Platonic chair, or an almost-chair, or maybe a chair if you flip it over, or not a chair at all.
When you look at a chair, this pattern recognition module immediately classifies it, and then brings online another module, which makes available all the relevant physical affordances, linguistic and logical implications of a chair being present in your environment. Recognizing something as a chair feels identical to recognizing something as a thing-in-which-I-can-sit. Similarly, you don’t have to puzzle out the implications of a tiger walking into the room right now. The fear response will coincide with the recognition of the tiger.
When you try to introspect on chairness, what you’re doing is tossing imagined sense percepts at yourself and observing the responses of the chairness-detecting module. This allows you to generate an abstract representation of your own chairness classifier. But this abstract representation is absolutely not the same thing as the chairness classifier, any more than your abstract cogitation about what the “+” operator does is the same thing as the mental operation of adding two numbers together.
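Here’s a toy version of that model (all the categories, weights, and affordances are made up for illustration): a classifier returns a match weight, classification brings the affordance module online, and “introspecting on chairness” amounts to probing the classifier with imagined percepts.

```python
# Toy version of the model above (all details invented for illustration).

def classify(percept):
    """Pattern-recognition module: returns (category, match_weight)."""
    prototypes = {
        "prototypical chair": ("chair", 1.0),
        "beanbag": ("chair", 0.6),
        "upside-down crate": ("chair", 0.4),
        "tiger": ("tiger", 1.0),
    }
    return prototypes.get(percept, ("unknown", 0.0))

def affordances(category):
    # Brought online as soon as classification fires: recognizing a chair
    # *is* recognizing a thing-in-which-I-can-sit, and the fear response
    # coincides with the recognition of the tiger.
    table = {
        "chair": ["sit on it"],
        "tiger": ["fear!", "flee"],
    }
    return table.get(category, [])

def perceive(percept):
    category, weight = classify(percept)
    # Only a sufficiently strong match brings the affordance module online.
    return (category, weight, affordances(category) if weight > 0.5 else [])

# "Introspecting on chairness": toss imagined percepts at the classifier and
# observe its responses. The summary built this way is a *description* of
# the classifier, not the classifier itself.
for percept in ["prototypical chair", "beanbag", "upside-down crate", "tiger"]:
    print(percept, "->", perceive(percept))
```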
I think a lot of confusion about the nature of human thinking stems from the inability to internally distinguish between the abstracted symbol for a mental phenomenon and the mental phenomenon itself. This dovetails with IFS in an interesting way, in that it can be difficult to distinguish between thinking about a particular Part in the abstract, and actually getting into contact with that Part in a way that causes it to shift.
I’m not sure why you say that the unconscious modules communicating with each other would necessarily contradict the idea of us being conscious of exactly the stuff that’s in the workspace, but I tend to agree that considering the contents of our consciousness and the contents of the workspace to be strictly isomorphic seems to be too strong.
I may be simply misunderstanding something. My sense is that when you open the fridge to get a yogurt and your brain shouts “HOW DID CYPHER GET INTO THE MATRIX TO MEET SMITH WITHOUT SOMEONE TO HELP HIM PLUG IN?”, this is a kind of thought that arises from checking meticulously over your epistemic state for logical inconsistencies, rather esoteric and complex logical inconsistencies, and it seems to come from nowhere. Doesn’t this imply that some submodules of your brain are thinking abstractly and logically about The Matrix completely outside of your conscious awareness? If so, then this either implies that the subconscious processing of individual submodules can be very complex and abstract without needing to share information with other submodules, or that information sharing between submodules can occur without you being consciously aware of it.
A third possibility would be that you were actually consciously thinking about The Matrix in a kind of inattentive, distracted way, and it only seems like the thought came out of nowhere. This would be far from the most shocking example of the brain simply lying to you about its operations.
To my reading, all of this seems to pretty well match a (part of) the Buddhist notion of dependent origination, specifically the way senses beget sense contact (experience) begets feeling begets craving (preferences) begets clinging (beliefs/values) begets being (formal ontology). There the focus is a bit different and is oriented around addressing a different question, but I think it’s tackling some of the same issues via different methods.
When you look at a chair, this pattern recognition module immediately classifies it, and then brings online another module, which makes available all the relevant physical affordances, linguistic and logical implications of a chair being present in your environment. Recognizing something as a chair feels identical to recognizing something as a thing-in-which-I-can-sit. Similarly, you don’t have to puzzle out the implications of a tiger walking into the room right now. The fear response will coincide with the recognition of the tiger.
Yeah, this is similar to how I think of it. When I see something, the thoughts which are relevant for the context become available: usually naming the thing isn’t particularly necessary, so I don’t happen to consciously think of its name.
Doesn’t this imply that some submodules of your brain are thinking abstractly and logically about The Matrix completely outside of your conscious awareness? If so, then this either implies that the subconscious processing of individual submodules can be very complex and abstract without needing to share information with other submodules, or that information sharing between submodules can occur without you being consciously aware of it.
Well, we already know from the unconscious priming experiments that information-sharing between submodules can occur without conscious awareness. It could be something like: if you hadn’t been conscious of watching The Matrix, the submodules would never have gotten a strong enough signal about its contents to process it; but once the movie had been consciously processed, there’s enough of a common reference for several related submodules to “know what the other is talking about”.
Or maybe it’s all in one submodule; the fact that that submodule feels a need to make its final conclusion conscious suggests that it can’t communicate the entirety of its thinking purely unconsciously.