[Intuitive self-models] 2. Conscious Awareness

2.1 Post summary /​ Table of contents

This is the second of a series of eight blog posts, which I’ll be serializing over the next month or two. (Or email or DM me if you want to read the whole thing right now.)

The previous post laid some groundwork for talking about intuitive self-models. Now we’re jumping right into the deep end: the intuitive concept of “conscious awareness” (or “awareness” for short). Some argue (§1.6.2) that if we can fully understand why we have an “awareness” concept, then we will thereby understand phenomenal consciousness itself! Alas, “phenomenal consciousness itself” is outside the scope of this series—again see §1.6.2. Regardless, the “awareness” concept is centrally important to how we conceptualize our own mental worlds, and well worth understanding for its own sake.

In one sense, “awareness” is nothing special: it’s an intuitive concept, built like any other intuitive concept. I can think the thought “a squirrel is in my conscious awareness”, just as I can think the thought “a squirrel is in my glove compartment”.

But in a different sense, “awareness” feels a bit enigmatic. The “glove compartment” concept is a veridical model (§1.3.2) of a tangible thing in a car. Whereas the “awareness” concept is a veridical model of … what exactly, if anything?

I have an answer! The short version of my hypothesis is: The brain algorithm involves the cortex, which has a limited computational capacity that gets deployed serially—you can’t both read health insurance documentation and ponder the meaning of life at the very same moment.[1] When this aspect of the brain algorithm is itself incorporated into a generative model via predictive (a.k.a. self-supervised) learning, it winds up represented as an “awareness” concept, which functions as a kind of abstract container that can hold any other mental concept(s) in it.

  • In Section 2.2, I flesh out the hypothesis above. And relatedly, I introduce an important piece of terminology that I’ll be using throughout the series: For any concept X, I define “S(X)” (“X in a self-reflective frame”) to be the intuitive model wherein X is contained within the “awareness” abstract container.

  • Section 2.3 talks about how awareness unfolds in time as a “stream of consciousness”—an area where our intuitions strikingly depart from reality, once we zoom into sub-second timescales.

  • Section 2.4 covers memory and how it connects to awareness—both in reality and in our intuitive models.

  • Section 2.5 talks about the valence of those S(X) thoughts. I’ll show that this valence is influenced not only by the object-level X being directly motivating, but also by X being associated with “my best self”—the things that fit well with my social image and an appealing narrative of my life. Note the suspicious convergence between the valence of S(X), and the things that we do “deliberately” versus “impulsively”. That brings us to…

  • Section 2.6, where I develop the connection between “awareness”, intentions, and decisions. In particular, I consider the special case of S(A), where A is an action concept like “say hi” or “think about the Roman Empire”—and not just the sensory and semantic consequences of that action (e.g. the idea of saying hi), but the attention-control and/​or motor-control outputs that would make that action actually happen. In this case, I’ll suggest that we intuitively interpret S(A) as an intention to do A. And if this S(A) is immediately followed by the execution of A itself (as is often the case), then we call A a deliberate action—as opposed to an action which is spontaneous, reflexive, instinctive, “blurted out”, etc. As an application of this idea, I’ll explain “illusions of free will”.

That’s still not the whole story of intentions and decisions—it’s missing the critical ingredient of an intuitive agent that actively causes the decisions. That turns out to be a whole giant can of worms, which we’ll tackle in Post 3.

Prior work: From my perspective, my main hypothesis (§2.2) should be “obvious” if you’re familiar with “Global Workspace Theory”[2] and/​or “Attention Schema Theory”—and indeed I found Michael Graziano’s Rethinking Consciousness (2019) to be extremely helpful for clarifying my thinking.[3] Graziano & I have some differences though.[4] Also, §2.3 partly follows chapter 5 of Daniel Dennett’s Consciousness Explained (1991). Once we get into §2.5–§2.6 and the whole rest of the series, I mostly felt like I was figuring things out from scratch—but please let me know if you’ve seen relevant prior literature!

2.2 The “awareness” concept

2.2.1 The cortex has a finite computational capacity that gets deployed serially

What do I mean by that heading? Here are a few different ways to put it:

  • If I’m thinking about calling the plumber, then the various parts of my cortex are busily tracking the various aspects and associations of calling the plumber. If I’m thinking about going to the zoo, then the various parts of my cortex are busily tracking the various aspects and associations of going to the zoo. The cortex is unable to do both those things simultaneously.

  • Think of a system with “attractor dynamics”, like a Hopfield net or Boltzmann machine. It can’t activate two different stored patterns simultaneously. I think the cortex has a vaguely similar property (see the toy sketch just after this list).

  • In terms of the §1.2 discussion, the cortex does probabilistic inference, always homing in on the best generative model. But the way that works entails a rather limited ability to activate and query multiple possible generative models simultaneously. Instead, most of the time, in most of the area of the cortex, I claim there’s mainly just one active generative model, corresponding to a maximum a posteriori (MAP) estimate.
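
To make the “attractor dynamics” intuition a bit more concrete, here’s a toy Hopfield-net sketch in Python. It’s purely illustrative (made-up patterns and sizes, and not a claim about how the cortex implements this property), but it shows the qualitative behavior I have in mind: the dynamics commit to one stored pattern at a time rather than superposing both.

```python
import numpy as np

# Toy Hopfield network: store two patterns, then show that the dynamics settle
# into ONE stored pattern, not a blend of both (illustrative only).
rng = np.random.default_rng(0)
N = 64
pattern_plumber = rng.choice([-1, 1], size=N)   # stands in for "call the plumber"
pattern_zoo     = rng.choice([-1, 1], size=N)   # stands in for "go to the zoo"

# Hebbian weights for the two stored patterns (zero diagonal).
W = np.outer(pattern_plumber, pattern_plumber) + np.outer(pattern_zoo, pattern_zoo)
np.fill_diagonal(W, 0)

def settle(state, sweeps=10):
    """Asynchronous updates until the state (typically) stops changing."""
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

def overlap(state, pattern):
    """+1.0 means the state IS that pattern; near 0 means unrelated."""
    return float(state @ pattern) / N

# Start from a noisy mixture of both patterns...
mixed = np.sign(pattern_plumber + pattern_zoo + rng.normal(0, 0.5, N)).astype(int)
mixed[mixed == 0] = 1
final = settle(mixed)

# ...and the network winds up in one stored pattern or the other, not both:
print("overlap with 'plumber' pattern:", round(overlap(final, pattern_plumber), 2))
print("overlap with 'zoo' pattern:    ", round(overlap(final, pattern_zoo), 2))
```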

2.2.2 Predictive learning represents that algorithmic property via a kind of abstract container called “awareness”


If I say, “This apple is on my mind”, that’s a self-reflective thought. It involves a concept I’m calling “awareness”, and also the concept of “this apple”, and those two concepts are connected by a kind of container-containee relationship.

And I claim that this thought is modeling a situation where the cortex[2] is, at some particular moment, using its finite computational capacity to process my intuitive model of this apple.

More generally:

  • In the intuitive self-model (“map”), any possible thought can be in “awareness” at any given time, but only one thought can be there at a time.

  • Correspondingly, in the actual brain algorithm (“territory”), any possible thought can be represented by the cortex (by definition), but only one thought can be there at a time (to a first approximation, see §2.3 below).

So there’s a map-territory correspondence: awareness is a (somewhat) veridical (§1.3.2) model of this particular aspect of the brain algorithm.

2.2.3 “S(apple)”, defined as the self-reflective thought “apple being in awareness”, is different from the object-level thought “apple”

Illustration of how “apple” and “S(apple)” are two different thoughts. The two purple arrows indicate map–territory correspondences (§1.3.2) between (b) and (a), and between (c) and (b). To be clear, the “territory” for (c) is really “(b) being active in the cortex”, not (b) per se.

Astute readers might be wondering: if the “awareness” concept can itself be part of an intuitive model active in the cortex, then wouldn’t the thought “the apple is in awareness right now” be self-contradictory?

After all, the thing you’re thinking “right now” would be “the apple is in awareness right now”, rather than just “the apple” itself, right?

Yes! In order to think the former thought, you would have to stop thinking of just “the apple” itself, and flip to a different thought, where there’s a frame (in the sense of “frame semantics” in linguistics or “frame languages” in GOFAI) involving the “awareness” concept and the “apple” concept, interconnected by a container-containee relationship.

For various purposes later on, it will be nice to have a shorthand. So S(apple) (read: apple in a self-reflective frame) will denote the apple-is-in-awareness thought. It’s “self-reflective” in the sense that it involves “awareness”, which is part of the intuitive self-model.
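
If it helps, here’s the same distinction as a toy code sketch. The data structures and names are made up purely for illustration; this is a way to display the logical structure, not a claim about how the cortex represents frames.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Concept:
    name: str

@dataclass(frozen=True)
class Frame:
    """A frame in the frame-semantics sense: a relation with named slots."""
    relation: str
    slots: tuple  # (slot_name, filler) pairs

APPLE = Concept("apple")
AWARENESS = Concept("awareness")  # the abstract container from the intuitive self-model

# Object-level thought: just the apple concept itself.
object_level_thought = APPLE

# Self-reflective thought S(apple): a container-containee frame in which
# "awareness" is the container and "apple" is the containee.
def S(x: Concept) -> Frame:
    return Frame(relation="contains", slots=(("container", AWARENESS), ("containee", x)))

print(object_level_thought)              # Concept(name='apple')
print(S(APPLE))                          # Frame(relation='contains', ...)
print(object_level_thought == S(APPLE))  # False: two genuinely different thoughts
```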

2.3 Awareness over time: The “Stream of Consciousness”

[Optional bonus section! You can skip §2.3, and still be able to follow the rest of the series.]

Here’s an aspect of the intuitive “awareness” concept that does not veridically correspond to the algorithmic phenomenon that it’s modeling. Daniel Dennett makes a big deal out of this topic in Consciousness Explained (1991), because it was important for his thesis to find aspects of our intuitive “awareness” concept that are not veridical, and this one seems reasonably clear-cut.

As background, there are various situations where, for events that unfold over the course of some fraction of a second, later sensory inputs are taken into account in how you remember experiencing earlier sensory inputs. Dennett uses an obscure psychology result called the “color phi phenomenon” as his main case study, but the phenomenon is quite common, so I’ll use a more everyday example: hearing someone talk.

I’ll start from the computational picture. As discussed in §1.2, your cortex is (either literally or effectively) searching through its space of generative models for one that matches input data and other constraints, via probabilistic inference. Some generative models, like a model that predicts the sound of a word, are extended in time, and therefore the associated probabilistic inference has to be extended in time as well.

So suppose somebody says the word “smile” to me over the course of 0.4 seconds. The actual moment-by-moment activation of my cortex algorithm might look like:

  • From t=0 to t=0.15 seconds, there’s a sound that I can’t yet make out—a number of different incompatible generative models are simultaneously weakly active. There just hasn’t been enough sound yet to make any sense of it.

  • By t=0.3 seconds, the generative model “smile” has won the competition, becoming the active model (posterior).

…But interestingly, if you then immediately ask me what I was experiencing just now, I won’t describe it as above. Instead I’ll say that I was hearing “sm-” at t=0 and “-mi-” at t=0.2 and “-ile” at t=0.4. In other words, I’ll recall it in terms of the time-course of the generative model that ultimately turned out to be the best explanation.
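
Here’s a toy numerical illustration of that dynamic. The candidate words, likelihoods, and timings are all made up; the point is just that the posterior only settles at the end, even though the retrospective report covers the whole word.

```python
import numpy as np

# Toy incremental inference over which word is being heard, versus the
# retrospective timeline reported afterwards (all numbers made up).
candidates = ["smile", "smite", "small"]
prior = np.array([1/3, 1/3, 1/3])

# Likelihood of the audio heard in each time window, under each candidate word.
# Early windows are ambiguous (similar likelihoods); the last one is not.
likelihoods = {
    "t=0.00-0.15 ('sm-')":  np.array([0.30, 0.30, 0.30]),
    "t=0.15-0.30 ('-mi-')": np.array([0.40, 0.40, 0.05]),
    "t=0.30-0.40 ('-le')":  np.array([0.50, 0.02, 0.02]),
}

# History stream #1: the moment-by-moment state of the inference.
posterior = prior.copy()
for window, lik in likelihoods.items():
    posterior = posterior * lik
    posterior = posterior / posterior.sum()
    print(window, {w: round(float(p), 2) for w, p in zip(candidates, posterior)})

# History stream #2: the best-guess time-course, reconstructed in hindsight
# from whichever generative model won the competition.
winner = candidates[int(np.argmax(posterior))]
print(f"\nRetrospective report: 'I heard the beginning, middle, and end of {winner}'")
```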

So with that as background, here’s how someone might intuitively describe their awareness over time:

Statement: When I’m watching and paying attention to something, I’m constantly aware of it as it happens, moment-by-moment. I might not always remember things perfectly, but there’s a fact of the matter of what I was actually experiencing at any given time.

Intuitive model underlying that statement: Within our intuitive models, there’s an “awareness” concept / frame as above, and at any given moment it has some content in it, related to the current sensory input, memories, thoughts, or whatever else we’re paying attention to. The flow of this content through time constitutes a kind of movie, which we might call the “stream of consciousness”. The things that “I” have “experienced” are exactly the things that were frames of that movie. The movie unfolds through time, although it’s possible that I’ll misremember some aspect of it after the fact.

What’s really happening, and is the model veridical in this respect? In the above example of hearing the word “smile”, there was no moment when the beginning part of the word was the active part of the active generative model. When “smi-” was entering our brain, the “smile” generative model was not yet strongly activated—that happened slightly later. But it doesn’t seem to be that way subjectively—we remember hearing the whole word as the beginning, middle, and end of “smile”. So,

Question: was the beginning part of hearing the word “smile” actually “experienced”?

Answer: That question is incoherent, because this is an area where the intuitive model above is not veridical.

Specifically, in the brain algorithm in question, there are two history streams we can talk about:

  • One history stream is related to the moment-by-moment state of the algorithmic processing—at such-and-such moment, which generative models were active in the cortex, and how active were they?

  • The other history stream is the best-guess time-course of what’s happening, stitched together by probabilistic inference, routinely taking advantage of (a fraction of a second of) hindsight.

The “stream-of-consciousness” intuitive model smushes these together into the same thing—just one history stream, labeled “what I was experiencing at that moment”.

That smushing-together is an excellent approximation on a multi-second timescale, but inaccurate if you zoom into what’s happening at sub-second timescales.

So a question like “what was I really experiencing at t=0.1 seconds” doesn’t seem answerable—it’s a question about the “map” (intuitive model) that doesn’t correspond to any well-defined question about the “territory” (the algorithms that the intuitive model was designed to model). Or equivalently, it corresponds equally well to two different questions about the territory, with two different answers, and there’s just no fact of the matter about which is the real answer.

Anyway, the intuitive model, with just one history stream instead of two, is much simpler, while still being perfectly adequate to play the role that it plays in generating predictions (see §1.4). So it’s no surprise that this is the generative model built by the predictive learning algorithm. Indeed, the fact that this aspect of the model is not perfectly veridical is something that basically never comes up in normal life.

2.4 Relation between “awareness” and memory

2.4.1 Intuitive model of memory as a storage archive

Statement: “I remember going to Chicago”

Intuitive model: Long-term memory in general, and autobiographical long-term memory in particular, is some kind of storage archive. Things can get pulled from that archive into the “awareness” abstract container. And there are memories of myself-in-Chicago stored in that archive, which can be retrieved deliberately or by random association.

What’s really happening? There’s some brain system (mainly the hippocampus, I think) that stores episodic memories. The memories can get triggered by pattern-matching (a.k.a. “autoassociative memory”), and then the memory and its various associations can activate all around the cortex.

Is the model veridical? Yeah, pretty much. As above, it’s not a veridical model of your brain as a hunk of meat in 3D space, but it is a reasonably veridical model of an aspect of the algorithm that your brain is running.

2.4.2 Intuitive connection between memory and awareness

Statement: “An intimate part of my awareness is its tie to long-term memory. If you show me a video of me going scuba diving this morning, and I absolutely have no memory whatsoever of it, and you can prove that the video is real, well I mean, I don’t know what to say, I must have been unconscious or something!”[5]

Intuitive model: Whatever happens in “awareness” also gets automatically cataloged in the memory storage archive—at least the important stuff, and at least temporarily. And that’s all that’s in the memory storage archive. The memory storage archive just is a (very lossy) history of what’s been in awareness. This connection is deeply integrated into the intuitive model, such that imagining something in memory that was never in awareness, or conversely imagining that there was recently something very exciting and unusual in awareness but that it’s absent from memory, seems like a contradiction, demanding of some exotic explanation like “I wasn’t really conscious”.

Is the model veridical in this respect? Yup, I think this aspect of the intuitive model is veridically capturing the relation between cortex and episodic memory storage within the (normally-functioning) brain algorithm.
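
To make that structural claim explicit, here’s a toy sketch of the intuitive model (the class and method names are made up; this is a cartoon of the intuitive model, not of the hippocampus):

```python
# Cartoon of the intuitive model: the memory archive is a (lossy) log of
# whatever has passed through "awareness", and nothing else.

class IntuitiveMind:
    def __init__(self):
        self.awareness = None       # the single current occupant of awareness
        self.memory_archive = []    # log of past awareness contents (lossy in reality)

    def attend_to(self, thought: str):
        """Put a thought into awareness, and catalog it in the memory archive."""
        self.awareness = thought
        self.memory_archive.append(thought)   # "at least the important stuff"

    def remember(self, cue: str):
        """Pattern-matching retrieval: return stored memories that match the cue."""
        return [m for m in self.memory_archive if cue in m]

mind = IntuitiveMind()
for thought in ["packing for Chicago", "walking along Lake Michigan", "deep-dish pizza"]:
    mind.attend_to(thought)

print(mind.remember("Chicago"))   # ['packing for Chicago']
# Key structural property: nothing appears in memory_archive unless it passed
# through awareness first -- which is why "it happened to me, but it was never
# in awareness and isn't in memory" feels like it demands an exotic explanation.
```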

2.5 The valence of S(X) thoughts

We have lots of self-reflective thoughts—i.e., thoughts that involve components of the intuitive self-model—such as S(Christmas presents) = the self-reflective idea that Christmas presents are on my mind (see §2.2.3 above). And those thoughts have valence, just like any other thought. Let’s explore that idea and its consequences.

(Warning: I’m using the term “valence” in a specific and idiosyncratic way—see my Valence series.)

The starting question is: What controls the valence of an S(X) model?

Well, it’s the same as anything else—see How does valence get set and adjusted?. One thing that can happen is that S(X) might directly trigger an innate drive, which injects positive or negative valence as a kind of ground truth. Another thing that can happen is: S(X) might have a strong association with /​ implication of some other thought /​ concept C. In that case, we’ll often think of S(X), then think of C, back and forth in rapid succession. And then by TD learning, some of the valence of C will splash onto S(X) (and vice-versa).
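
Here’s a minimal sketch of that “splashing” dynamic, as a TD(0)-style update with made-up numbers. It’s just to show the qualitative behavior, not a model of the actual learning rule in the brain:

```python
# Two associated thoughts alternate in rapid succession; each one's valence is
# nudged toward the valence of whatever thought follows it (TD(0)-style, with
# zero reward and no discounting -- purely illustrative).
alpha = 0.3                              # learning rate
valence = {"S(X)": 0.0, "C": 0.8}        # C starts strongly positive; S(X) starts neutral

def td_update(current: str, successor: str):
    valence[current] += alpha * (valence[successor] - valence[current])

for _ in range(10):
    td_update("S(X)", "C")   # some of C's valence splashes onto S(X)
    td_update("C", "S(X)")   # ...and vice versa

print({k: round(v, 2) for k, v in valence.items()})
# After a few alternations, S(X) is no longer neutral: it has inherited much of
# C's positive valence. (In this toy version C also gets pulled down a bit; in
# reality C's valence might be re-anchored by ground-truth signals.)
```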

That latter dynamic—valence flowing through salient associations—turns out to have some important implications as I’ll discuss next (and more on that in Post 8).

2.5.1 Positive-valence S(X) models often go with “what my best self would do” (other things equal)

Notice how S(⋯) thoughts are “self-reflective”, in the sense that they involve me and my mind, and not just things in the outside world. This is important because it leads to S(⋯) having strong salient associations with other thoughts C that are also self-reflective. After all, if a self-reflective thought is in your head right now, then it’s much likelier for other self-reflective thoughts to pop into your head immediately afterwards.

As a consequence, here are two common examples of factors that influence the valence of S(X):

  • How does S(X) fit in with my social image? A self-reflective thought like S(homework) ≈ “I’m focusing on my homework” has the salient implication “Other people might know that I’m focusing on my homework”, which in turn might be motivating or demotivating.

  • How does S(X) fit in with the narrative of my life? A self-reflective thought like S(homework) ≈ “I’m focusing on my homework” may have the salient implication “I’m following through on my New Year’s Resolution to focus on my homework”, or “I’m making progress towards my goal of becoming rich and famous”.

Contrast either of those with a non-self-reflective (i.e., object-level) thought related to doing my homework, e.g. “What’s the square root of 121 again?”. If I’m thinking about that, then questions like what other people think about me, and how my life plans are going, are less salient.

There’s a pattern here, which is that self-reflective thoughts are more likely to be positive-valence (motivating) if they involve something that we’re proud of, that we like to remember, that we’d like other people to see, etc.

But that’s not the only factor. The object-level is relevant too:

2.5.2 Positive-valence S(X) models also tend to go with X’s that are object-level motivating (other things equal)

For example, if I’m tired, then I want to go to sleep. Maybe going to sleep right now wouldn’t help my social image, and maybe it’s not appealing in the context of the narrative of my life. More generally, maybe I don’t think that “my best self” would be sleeping now, instead of working more. But nevertheless, the self-reflective thought “I’m gonna go to sleep now” will be highly motivating to me, because of its obvious association with /​ implication of sleep itself.

Maybe I’ll even say “Screw being ‘my best self’, I’m tired, I’m going to sleep”.

What’s going on? It’s the same dynamic as above, but this time the salient association of S(X) is X itself. When I think the self-reflective thought S(go to sleep) ≈ “I’m thinking about going to sleep”, some of the things that it tends to bring to mind are object-level thoughts about going to sleep, e.g. the expectation of feeling the soft pillow on my head. Those thoughts are motivating, since I’m tired. And then by TD learning, S(X) winds up with positive valence too.

(Conversely, just as the valence of X splashes onto S(X), by the same logic, the valence of S(X) splashes onto X. More on that below.)

2.6 S(A) as “the intention to immediately do action A”, and the rapid sequence [S(A) ; A] as the signature of a deliberate action

2.6.1 Clarification: Two ways to “think about an action”

I’ll be arguing shortly that, for a voluntary action A, S(A) is the “intention” to immediately do A. You might find this confusing: “Can’t I think self-reflectively about an action, without intending to do that action??” Yes, but … allow me to clarify.

Put aside self-reflective thoughts for a moment; let’s just start at the object level. If “the idea of standing up is on my mind” at some moment, that might mean either or both of two rather different things:

  • Maybe the sensory and semantic consequences of the standing-up action are in my awareness—including, for example, the expected feeling of bodily motion and exertion, and the idea that I’ll wind up standing, and that my chair will wind up empty, etc.;

  • Maybe the standing-up action program itself—i.e., the pattern of motor-control and attention-control outputs that would collectively make my muscles actually execute the standing-up action—is in my awareness. If it is, and if there’s positive valence keeping it active, then I would find myself immediately actually standing up.

The punchline: When I say “an action A” in this series, it always refers to the second bullet, not the first—an action program, not merely an action idea.

So far, that’s all in the object-level domain. But there’s an analogous distinction in the self-reflective domain: “S(stand up)” is ambiguous as written. It could be the thought “standing up (as a thing that could happen) is the occupant of conscious awareness”—i.e., a veridical model of the first bullet point situation above. Or it could be the thought “standing up (the action program itself) is the occupant of conscious awareness”—i.e., a veridical model of the second bullet point situation above.

And just as above, when I say S(A), I’ll always be talking about the latter, not the former; it’s the latter that (I’ll argue) corresponds to an “intention”.

That said, those two aspects of standing up are obviously strongly associated with each other. They can activate simultaneously. And even if they don’t, each tends to bring the other to mind, such that the valence of one influences the valence of the other.

With that aside, let’s get into the substance of this section!

2.6.2 For any action A where S(A) has positive valence, there’s often a two-step temporal sequence: [S(A) ; A actually happens]

In this section I’ll give a kind of first-principles derivation of something that we should expect to happen in brain algorithms, based on the discussion thus far. Then afterwards, I’ll argue that this phenomenon corresponds to our everyday notion of intentions and actions. Here goes:

  • Ingredient 1: In general, S(X) often summons a follow-on thought of X. As mentioned in §2.5.2 above, there’s a strong, salient association between an object-level thing and the corresponding self-reflective way to think about that same thing—each implies the other. So if we’re thinking of X, S(X) may well immediately pop into our heads, and vice-versa.

  • Ingredient 2: If S(X) is positive valence, that makes it more likely (other things equal) for X to wind up positive valence. Again see §2.5.2 above.

  • Ingredient 3: If a voluntary action program A (attention-control, motor-control, or both) is active in awareness, and has positive valence, then it will immediately actually happen. See §2.6.1 above; this is almost the definition of valence, as I use the term.

Put these together, and we conclude that there ought to be a frequent pattern (see the toy sketch after the two steps below):

  • STEP 1: There’s a self-reflective thought S(A), for some action-program A (motor-control and/​or attention-control), and this thought has positive valence;

  • STEP 2 (a fraction of a second later): The non-self-reflective (a.k.a. object-level) thought A occurs, and this makes the action A actually happen.
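
Here’s that two-step pattern as a toy simulation. The thought names, valences, and transition rule are all made up for illustration; the only point is to show the [S(A) ; A] cascade.

```python
# Toy simulation of the two-step pattern (illustrative only).
valence = {
    "S(wiggle_fingers)": +0.6,   # the self-reflective thought is motivating
    "wiggle_fingers":    +0.2,   # the action program itself, also positive (Ingredient 2)
}
is_action_program = {"S(wiggle_fingers)": False, "wiggle_fingers": True}
object_level_of = {"S(wiggle_fingers)": "wiggle_fingers"}

def step(thought):
    print(f"active thought: {thought}  (valence {valence[thought]:+.1f})")
    if is_action_program[thought] and valence[thought] > 0:
        # Ingredient 3: a positive-valence action program in awareness just happens.
        print(f"  -> action '{thought}' actually executes")
        return None
    if valence[thought] > 0 and thought in object_level_of:
        # Ingredient 1: positive-valence S(A) summons A a fraction of a second later.
        return object_level_of[thought]
    return None

current = "S(wiggle_fingers)"      # STEP 1
while current is not None:
    current = step(current)        # STEP 2 follows automatically
```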

2.6.3 This two-step sequence corresponds to “deliberate” /​ “intentional” actions (as opposed to “spontaneously blurting something out”, “acting on instinct”, etc.)

Here are a few reasons that you might believe me on this:

Evidence from introspection: I’m suggesting that (step 1) you think of yourself sending a command to wiggle your fingers, and you find that thought to be motivating (positive valence), and then (step 2) a fraction of a second later, the command is sent and your fingers are actually wiggling. To me, that feels like a pretty good fit to “intentionally” /​ “deliberately” doing something. Whereas “acting on impulses, instincts, reflexes, etc.” seems to be missing the self-reflective step 1 part.

Evidence from the report of an insight meditator: For what it’s worth, meditation guru Daniel Ingram writes here: “In Mind and Body, the earliest insight stage, those who know what to look for and how to leverage this way of perceiving reality will take the opportunity to notice the intention to breathe that precedes the breath, the intention to move the foot that precedes the foot moving, the intention to think a thought that precedes the thinking of the thought, and even the intention to move attention that precedes attention moving.” I claim that’s a good match to what I wrote—S(A) would be the “intention” to do action A.

Evidence from the systematic differences between deliberate actions and spontaneous actions: Consider spontaneous actions like “blurting out”, also called instinctive, reflexive, unthinking, reactive, etc. According to my story, a key difference between spontaneous actions and deliberate actions is that the valence of S(A) is necessarily positive for deliberate actions, but need not be positive for spontaneous ones. And in §2.5 above, I said that the valence of S(A) is influenced by the valence of A, but also by “what my best self would do”—S(A) tends to be more positive for actions A that would positively impact my social image, fit well into the narrative of my life, and so on. And correspondingly, those are exactly the kinds of actions that are more likely to be “deliberate” than “spontaneous”. Good fit!

2.6.4 The common temporal sequence above—i.e. [S(A) with positive valence ; A actually happens]—is itself incorporated into the intuitive self-model. Call it D(A) for “Deciding to do action A”

The whole point of these intuitive generative models is to observe things that often happen, and then expect them to keep happening in the future. So if the [S(A) with positive valence ; A actually happens] pattern happens regularly, of course the brain will incorporate that as an intuitive concept in its generative models. I’ll call it D(A).

2.6.5 An application: “Illusions of free will”

The stereotypical deliberate-action scenario above is:

  • Step 1: S(A) [with positive valence]

  • Step 2: A

Here’s a different scenario:

  • Step 1’: (…not sure, wasn’t paying attention…)

  • Step 2’: A

Now suppose that Step 1’ was not in fact S(A), but that it could have been S(A)—in the specific sense that the hypothesis “what just happened was [S(A) ; A]” is a priori highly plausible and compatible with everything we know about ourselves and what’s happening.

In that case, we should expect the D(A) generative model to activate. Why? It’s just the cortex doing what it always does: using probabilistic inference to find the best generative model given the limited information available. It’s no different from what happens in visual perception: if I see my friend’s head coming up over the hill, I automatically intuitively interpret it as the head of my friend whose body I can’t see; I do not interpret it as my friend’s severed head. The latter would be a priori less plausible than the former.
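
Here’s the same point as a toy Bayesian calculation. Every number is made up, but it shows how the “deliberate” model can win the competition on priors alone when the evidence is uninformative:

```python
# Toy model competition: did [S(A) ; A] happen (deliberate), or just A (spontaneous)?
priors = {
    "deliberate [S(A); A]": 0.7,   # a priori, actions like A are usually deliberate for me
    "spontaneous A":        0.3,
}
# Likelihood of the evidence -- "A happened, and I wasn't tracking what was in
# awareness just beforehand" -- under each hypothesis.
likelihood = {
    "deliberate [S(A); A]": 0.5,   # quite compatible: S(A) can easily go unnoticed
    "spontaneous A":        0.5,   # equally compatible
}

unnormalized = {h: priors[h] * likelihood[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: round(p / total, 2) for h, p in unnormalized.items()}
print(posterior)
# With the evidence uninformative, the prior carries the day: the D(A) model
# activates, and I sincerely report that I decided to do A.
```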

Anyway, if D(A) activates despite a lack of actual S(A), that would be a (so-called) “illusion of free will”. Examples include the “choice blindness” experiment of Johansson et al. 2005, the “I Spy” and other experiments described in Wegner & Wheatley 1999, some instances of confabulation, and probably some types of “forcing” in stage magic. As another (possible) example, if I’m deeply in a flow state, writing code, and I take an action A = typing a word, then the self-reflective S(A) thought is almost certainly not active (that’s what “flow state” means, see Post 4), but if you ask me after the fact whether I had “decided” to execute action A, I think I would say “yes”.

2.7 Conclusion

I think this is a nice story for how the “conscious awareness” concept comes to exist in our mental worlds, how it relates to other intuitive notions like memory, stream-of-consciousness, intentions, and decisions, and how all these entities in the “map” (intuitive model) relate to corresponding entities in the “territory” (brain algorithms, as designed by the genome).

However, the above story of intentions and decisions is not yet complete! There’s an additional critical ingredient within our intuitive self-models. Not only are there intentions and decisions in our minds, but we also intuitively believe there to be a protagonist—an entity that actively intends our intentions, and decides our decisions, and wills our will! Following Dennett, I’ll call that concept “the homunculus”, and that will be the subject of the next post.

Thanks Thane Ruthenis, lsusr, Seth Herd, Linda Linsefors, and Justis Mills for critical comments on earlier drafts.

  1. ^

    It is, of course, possible to read health insurance documentation and ponder the meaning of life in rapid succession, separated by as little as a fraction of a second. Especially when “pondering the meaning of life” includes nihilism and existential despair! USA readers, you know what I’m talking about.

  2. ^

    It won’t come up again in this series, but I’ll note for completeness that “awareness” is related to the activation state of some parts of the cortex much more than other parts. For example, the primary visual cortex is not interconnected with other parts of the cortex or with long-term memory in the same direct way that many other cortical areas are; hence, you can say that we’re “not directly aware” of what happens in the primary visual cortex. In the lingo, people describe this fact by saying that there’s a “Global Neuronal Workspace” consisting of many (most?) parts of the cortex, but that the primary visual cortex is not one of those parts.

  3. ^

    Relatedly, some bits of text in this post are copied from my earlier post Book Review: Rethinking Consciousness.

  4. ^

    From my perspective, Graziano’s main thesis and my §2.2 are pretty similar in the big picture. I think the biggest difference between his presentation and mine is that we stand at different places on the spectrum from “evolved modularity” to “universal learning machine”. Graziano seems to be more towards the “evolved modularity” end, where he thinks that evolution specifically built “awareness” into the brain to serve as sensory feedback for attention actions, in analogy to how evolution specifically built the somatosensory cortex to serve as sensory feedback for motor actions. By contrast, my belief is much closer to the “universal learning machine” end, where “awareness” (like the rest of the intuitive self-model) comes out of a somewhat generic within-lifetime predictive learning algorithm, involving many of the same brain parts and processes that would create, store, and query an intuitive model of a carburetor.

    Again, that’s all my own understanding. Graziano has not read or endorsed anything in this post.

  5. ^

    I adapted that statement from something Jeff Hawkins said. But tragically, it’s not just a hypothetical: Clive Wearing developed total amnesia 40 years ago, and ever since then “he constantly believes that he has only recently awoken from a comatose state”.