Intuitive Self-Models

This is a rather ambitious series of blog posts, in that I’ll attempt to explain what’s the deal with consciousness, free will, hypnotism, enlightenment, hallucinations, flow states, dissociation, akrasia, delusions, and more.

(I’ll be serializing the series over the next month or two, but you can email or DM me if you want to read the whole thing right now.)

The starting point for this whole journey is very simple:

  • The brain has a predictive (a.k.a. self-supervised) learning algorithm.

  • This algorithm builds generative models (a.k.a. “intuitive models”) that can predict incoming data.

  • It turns out that, in order to predict incoming data, the algorithm winds up building generative models that capture not only properties of trucks and shoes and birds, but also properties of the brain algorithm itself.

Those latter models, which I call “intuitive self-models”, wind up including ingredients like conscious awareness, deliberate actions, and the sense of applying one’s will.
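To make the first two bullets concrete, here is a deliberately tiny toy sketch of predictive (self-supervised) learning: a model whose only training signal is the incoming data stream itself, and which builds a (crude) generative model from it. All the names here (`ToyPredictiveModel`, etc.) are my own invented illustration, not anything from the post, and a first-order Markov model is of course nothing like the brain's actual algorithm:

```python
from collections import defaultdict, Counter

class ToyPredictiveModel:
    """Toy illustration of self-supervised (predictive) learning:
    the model trains itself on the incoming data stream alone,
    building a crude generative model (first-order Markov chain)
    that can predict the next incoming datum."""

    def __init__(self):
        # context -> counts of what came next
        self.counts = defaultdict(Counter)
        self.prev = None

    def observe(self, item):
        """Update the generative model from each incoming datum."""
        if self.prev is not None:
            self.counts[self.prev][item] += 1
        self.prev = item

    def predict(self, context):
        """Most likely next item given the current context, or None."""
        if not self.counts[context]:
            return None
        return self.counts[context].most_common(1)[0][0]

model = ToyPredictiveModel()
for x in "abcabcabc":
    model.observe(x)

print(model.predict("a"))  # -> "b"
```

The third bullet is the interesting leap the series is about: a sufficiently powerful version of this kind of learner, fed data that includes traces of its own processing, would wind up modeling itself too.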

That’s a simple idea, but exploring its consequences will take us to all kinds of strange places—plenty to fill up an eight-post series! Here’s the outline:

  • Post 1 (Preliminaries) gives some background on the brain’s predictive learning algorithm, how to think about the “intuitive models” built by that algorithm, how intuitive self-models come about, and the relation of this whole series to Philosophy Of Mind.

  • Post 2 (Conscious Awareness) proposes that our intuitive self-models include an ingredient called “conscious awareness”, and that this ingredient is built by the predictive learning algorithm to represent a serial aspect of cortex computation. I’ll discuss ways in which this model is veridical (faithful to the algorithmic phenomenon that it’s modeling), and ways that it isn’t. I’ll also talk about how intentions and decisions fit into that framework.

  • Post 3 (The Homunculus) focuses more specifically on the intuitive self-model that almost everyone reading this post is experiencing right now (as opposed to the other possibilities covered later in the series), which I call the Conventional Intuitive Self-Model. In particular, I propose that a key player in that model is a certain entity that’s conceptualized as actively causing acts of free will. Following Dennett, I call this entity “the homunculus”, and relate that to intuitions around free will and sense-of-self.

  • Post 4 (Trance) builds a framework to systematize the various types of trance, from everyday “flow states”, to intense possession rituals with amnesia. I try to explain why these states have the properties they do, and to reverse-engineer the various tricks that people use to induce trance in practice.

  • Post 5 (Dissociative Identity Disorder) (a.k.a. “multiple personality disorder”) is a brief opinionated tour of this controversial psychiatric diagnosis. Is it real? Is it iatrogenic? Why is it related to borderline personality disorder (BPD) and trauma? What do we make of the wild claim that each “alter” can’t remember the lives of the other “alters”?

  • Post 6 (Awakening / Enlightenment / PNSE) concerns a type of intuitive self-model, typically accessed via extensive meditation practice, that’s quite different from the conventional intuitive self-model. I offer a hypothesis about what exactly the difference is, and why that difference has the various downstream effects that it has.

  • Post 7 (Hearing Voices, and Other Hallucinations) talks about factors contributing to hallucinations—although I argue against drawing a deep distinction between hallucinations versus “normal” inner speech and imagination. I discuss both psychological factors, like schizophrenia and BPD, and cultural factors, including some critical discussion of Julian Jaynes’s Origin of Consciousness In The Breakdown Of The Bicameral Mind.

  • Post 8 (De-homunculus-ifying Motivation) suggests that the way people normally talk about motivation and goals is excessively tied to the Conventional Intuitive Self-Model (CISM). This leads to muddled thinking even within the CISM, and leads to straight-up nonsense when applied to other intuitive self-models. I offer an alternative framework that I think works better.

[Intuitive self-models] 1. Preliminaries