This is listed as a mistake. But I don’t know that the alternative is to “not view my life through narratives”.
One alternative model is that humans run on narratives, and as such you need to be good at building good narratives for yourself that cut to what you care about and capture key truths, as opposed to narratives that are based solely on what people will reward you for saying about yourself, or that are otherwise primarily socially mediated rather than mediated by your goals and by how reality works.
Insofar as that model is accurate, I somewhat suspect I will read another post 5 years from now (similar to Hazard’s post “How to Ignore Your Emotions (while also thinking you’re awesome at emotions)”) where you’ll say “I found out the narratives I told myself about myself were hurting me, so I decided to become someone who didn’t believe narratives about himself, and it turns out this just worked to hide narratives from my introspective processes, and I hurt myself by acting according to pretty dumb narratives that I couldn’t introspect on. Now I instead am conscious of the narratives I live by, and work to change them as evidence comes in. I agree there are surely faults with them, but I think that’s the constraint of my computational architecture that I have to work through, not around.”
I’m not sure of this story or prediction. Maybe I’m wrong about the human mind having narratives built-in, though it feels quite a tempting story to me. Maybe you will pull the full transhumanist here and break free from the architecture, but I have a lot of prior probability mass on people making many kinds of “ignore the native architecture” mistake. And of course, maybe you’ll discover this mistake in 3 months and not 5 years! Making mistakes faster is another way to get over them.
I somewhat suspect I will read another post 5 years from now (similar to Hazard’s post “How to Ignore Your Emotions (while also thinking you’re awesome at emotions)”) where you’ll say “I found out the narratives I told myself about myself were hurting me, so I decided to become someone who didn’t believe narratives about himself, and it turns out this just worked to hide narratives from my introspective processes, and I hurt myself by acting according to pretty dumb narratives that I couldn’t introspect on.
Thanks for making this point. FYI approximately this thought has crossed my mind several times. In general, I agree, be careful when messing with illegible parts of your brain which you don’t understand that well. However, I just don’t find myself feeling that worried about this, about decreasing how much I rely on narratives. Maybe I’ll think more and better understand your concerns, and that might change my mind in either direction.
(I could reply with my current best guess at what narratives are, on my model of human intelligence and values, but I feel too tired to do that right now. Maybe another time.)
Maybe I’m wrong about the human mind having narratives built-in, though it feels quite a tempting story to me.
Hm, seems like the kind of thing which might be inaccessible to the genome.

As an aside: I think the “native architecture” frame is wrong. At the very least, that article makes several unsupported inferences and implicit claims, which I think are probably wrong:
“In particular, visualizing things is part of the brain’s native architecture”
Not marked as an inference, just stated as a fact.
But what evidence has pinned down this possible explanation, compared to others? Even if this were true, how would anyone know that?
“The Löb’s Theorem cartoon was drawn on the theory that the brain has native architecture for tracking people’s opinions.”
Implies that people have many such native representations / that this is a commonly correct explanation.

I wrote Human values & biases are inaccessible to the genome in part to correct this kind of mistake, which I think people make all the time.

(Of course, the broader point of “work through problems (like math problems) using familiar representations (like spatial reasoning)” is still good.)
I think there’s an important distinction between “the genome cannot directly specify circuitry for X” and “the human mind cannot have X built-in”. I think there are quite a few things that we can consider to be practically “built-in” that the genome nonetheless could not directly specify.
I can think of several paths for this:
1. The 1984 game Elite contains a world of 2048 star systems. Because specifying that much information beforehand would have taken a prohibitive amount of memory for computers at the time, they were procedurally generated according to the algorithm described here. Everyone who plays the game can find, for instance, that galaxy 3 has a star system called Enata.
Now, the game’s procedural generation code doesn’t contain anything that would directly specify that there should be a system called Enata in galaxy 3: rather there are just some fixed initial seeds and an algorithm for generating letter combinations for planet names based on those seeds. One of the earlier seeds that the designers tried ended up generating a galaxy with a system called Arse. Since they couldn’t directly specify in-code that such a name shouldn’t exist, they switched to a different seed for generating that galaxy, thus throwing away the whole galaxy to get rid of the one offensively-named planet.
But given the fixed seed, system Enata in galaxy 3 is built-in to the game, and everyone who plays has the chance to find it. Similarly, if the human genome has hit upon a specific starting configuration that when iterated upon happens to produce specific kinds of complex circuitry, it can then just continue producing that initial configuration and thus similar end results, even though it can’t actually specify the end result directly.
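To make the general principle concrete, here is a minimal Python sketch (my own toy illustration with made-up syllables, not Elite’s actual algorithm): a fixed seed plus a deterministic generator “builds in” every name, even though no name is specified anywhere in the code.

```python
import random

def generate_galaxy(seed, num_systems=256):
    """Deterministically generate system names from a fixed seed."""
    rng = random.Random(seed)  # same seed -> same galaxy, every run
    syllables = ["en", "a", "ta", "ri", "so", "le", "di", "qu", "ve", "or"]
    names = []
    for _ in range(num_systems):
        name = "".join(rng.choice(syllables) for _ in range(rng.randint(2, 4)))
        names.append(name.capitalize())
    return names

# Every player who "runs" galaxy 3 finds exactly the same names in it,
# even though no name is stored or specified anywhere in the code.
galaxy_3 = generate_galaxy(seed=3)
print(galaxy_3[:5])

# If one of the generated names turned out to be offensive, the only fix at
# this level is a new seed, which regenerates the entire galaxy at once --
# analogous to the Elite designers throwing away a whole galaxy over "Arse".
alternative_galaxy_3 = generate_galaxy(seed=42)
```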
2. As a special case of the above, if the brain is running a particular kind of learning algorithm (that the genome specifies), then there may be learning-theoretical laws that determine what kind of structure that algorithm will end up learning from interacting with the world, regardless of whether that has been directly specified. For instance, vision models seem to develop specific neurons for detecting curves. This is so underspecified by the initial learning algorithm that there’s been some controversy about whether models really even do have curve detectors; it had to be determined via empirical investigation.
Every vision model we’ve explored in detail contains neurons which detect curves. [...] Each curve detector implements a variant of the same algorithm: it responds to a wide variety of curves, preferring curves of a particular orientation and gradually firing less as the orientation changes. Curve neurons are invariant to cosmetic properties such as brightness, texture, and color. [...]
It’s worth stepping back and reflecting on how surprising the existence of seemingly meaningful features like curve detectors is. There’s no explicit incentive for the network to form meaningful neurons. It’s not like we optimized these neurons to be curve detectors! Rather, InceptionV1 is trained to classify images into categories many levels of abstraction removed from curves and somehow curve detectors fell out of gradient descent.
Moreover, detecting curves across a wide variety of natural images is a difficult and arguably unsolved problem in classical computer vision. InceptionV1 seems to learn a flexible and general solution to this problem, implemented using five convolutional layers. We’ll see in the next article that the algorithm used is straightforward and understandable, and we’ve since reimplemented it by hand.
In the case of “narratives”, they look to me to be something like models that a human mind has of itself. As such, they could easily be “built-in” without being directly specified, if the genome implements something like a hierarchical learning system that tries to construct models of any input it receives. The actions that the system itself takes are included in the set of inputs that it receives, so just a general tendency towards model-building could lead to the generation of self-models (narratives).
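As a hedged toy sketch of that idea (hypothetical code, not a claim about how the brain actually implements it): a completely generic model-builder that tallies statistics over its whole input stream ends up with a crude self-model, simply because the agent’s own actions are part of that stream.

```python
import random
from collections import Counter

random.seed(0)

def act():
    # A "policy": this agent disproportionately picks the action "organize".
    return random.choices(["organize", "improvise"], weights=[0.8, 0.2])[0]

# The agent's input stream includes its own actions alongside external events.
stream = []
for _ in range(1000):
    stream.append(("self_action", act()))
    stream.append(("external_event", random.choice(["rain", "sun"])))

# A generic model-builder: tally frequencies for every input channel.
model = {}
for channel, value in stream:
    model.setdefault(channel, Counter())[value] += 1

# Without any dedicated "self-model" machinery, the generic model now contains
# statistics about the agent's own behavior -- a crude self-model
# ("I'm the kind of agent that mostly organizes").
self_counts = model["self_action"]
total = sum(self_counts.values())
for action, count in self_counts.most_common():
    print(f"{action}: {count / total:.2f}")
```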
3. As a special case of the above points, there are probably a lot of things that will tend to be lawfully learned given a “human-typical” environment and which serve as extra inputs on top of what’s specified in the genome. For instance, it seems reasonable enough to say that “speaking a language is built-in to humans”; sometimes this mechanism breaks and in general it’s only true for humans who actually grow up around other humans and have a chance to actually learn something like a language from their environment. Still, as long as they do get exposed to language, the process of learning a language seems to rewire the brain in various ways (e.g. various theories about infantile amnesia being related to memories from a pre-verbal period being in a different format), which can then interact with information specified by the genome, other regularly occurring features in the environment, etc. to lay down circuitry that will then reliably end up developing in the vast majority of humans.
Strong agree that this kind of “built-in” is plausible. In fact, it’s my current top working hypothesis for why people have many regularities (like intuitive reasoning about 3D space, and not 4D space).
Is it a narrative to believe that rocks fall when stationary and unsupported near the Earth’s surface? Is it a narrative to have an urge to fill an empty belly? Is it a narrative to connect these two things and as a result form a plan to drop a rock on a nut? If so then I don’t see what the content of the claim is, and if not then it seems like you could be a successful human without narratives. (This obviously isn’t a complete argument, we’d have to address more abstract, encompassing, and uncertain {beliefs, goals, plans}; but if you’re saying something like “humans have to have stories that they tell themselves” as distinct from “humans have to have long-term plans” or “humans have to know what they can and can’t do” and similar then I don’t think that’s right.)
I think this post has a good example of what might be called a narrative:
I used to think of myself as someone who was very spontaneous and did not like to plan or organize things any more or any sooner than absolutely necessary. I thought that was just the kind of person I am and getting overly organized would just feel wrong.
But I felt a lot of aberrant bouts of anxiety. I probably could have figured out the problem through standard Focusing but I was having trouble with the negative feeling. And I found it easier to focus on positive feelings, so I began to apply Focusing to when I felt happy. And a common trend that emerged from good felt senses was a feeling of being in control of my life. And it turned out that this feeling of being in control came from having planned to do something I wanted to do and having done it. I would not have noticed that experiences of having planned well made me feel so good through normal analysis because that was just completely contrary to my self-image. But by Focusing on what made me have good feelings, I was able to shift my self-image to be more accurate. I like having detailed plans. Who would have thought? Certainly not me.
Once I realized that my self-image of enjoying disorganization was actually the opposite of what actually made me happy I was able to begin methodically organizing and scheduling my life. Since then, those unexplained bouts of anxiety have vanished and I feel happier more of the time.
I’d say that the author had a narrative according to which they were spontaneous and unorganized, and they then based their decisions on that model. More generally, I’d say that part of what a narrative is, is something like your model of yourself, which you then use for guiding your decisions (e.g. you think that you like spontaneity, so you avoid doing any organization, since your narrative implies that you wouldn’t like it). It then establishes a lens that you interpret your experience through; if you have experiences that contradict the lens, they will tend to be dismissed as noise as long as the deviations are small enough.
Then if you decide that you’re a person who doesn’t have narratives, you might adopt a self-model of “the kind of a person who doesn’t have narratives” and interpret all of your experiences through that lens, without noticing that 1) “not having narratives” is by itself a narrative that you are applying, and 2) you might have all kinds of other narratives but fail to notice them, as your dominant interpretation is not having any.
That’s not what models are. Models update. You must be talking about something that isn’t just models.

Models update once the deviation from the expected is sufficiently large that the model can no longer explain it, but if the deviation is small enough, it may get explained away as noise. That’s one of the premises behind the predictive processing model of the human mind; e.g. Scott Alexander explains that in more detail in this article.
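A toy numerical caricature of that thresholding idea (a hedged sketch with made-up numbers, not an implementation of any actual predictive-processing model): small prediction errors are written off as noise, and the prediction only moves once the error is too large to explain away.

```python
expected = 2.0    # predicted hours of enjoyable planning per week (made-up)
noise_sd = 1.0    # how much deviation the model is willing to write off as noise
threshold = 2.0   # update only if |error| exceeds threshold * noise_sd

observations = [2.5, 1.8, 2.9, 6.0, 6.5, 7.0]  # hypothetical weekly data

for obs in observations:
    error = obs - expected
    if abs(error) <= threshold * noise_sd:
        # Small deviation: explained away as noise, the prediction stays put.
        print(f"obs={obs:.1f}  error={error:+.1f}  -> dismissed as noise")
    else:
        # Deviation too large to explain away: the model finally updates.
        expected += 0.5 * error
        print(f"obs={obs:.1f}  error={error:+.1f}  -> updated, expected={expected:.2f}")
```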
The whole point of predictive processing is that it conflates action and modeling. That’s a thing you can do, but it’s not just modeling, and it’s not necessary, or if it is then it would be nice if the reason for that were made clear. Your original comment seems to deny the possibility of simultaneously modeling yourself accurately and also deciding to be a certain way; in particular, you claim that one can’t decide to decouple one’s modeling from one’s decisions because that requires deluding yourself.
I’m not sure what you mean by decoupling one’s modeling from one’s decisions, can you elaborate?

I mean disposing yourself so that incoming information is updated on. To avoid thrashing, you need some way of smoothing things out; some way to make it so that you don’t keep switching contexts (on all scales), and so that switching contexts isn’t so costly. The predictive processing way is to ignore information insofar as you can get away with it. The straw Bayesian way is to just never do anything because it might be the wrong plan and you should think about whether it’s wrong before you do anything. These options are fundamentally flawed and aren’t the only two options, e.g. you can explicitly try to execute your plans in a way that makes it useful to have done the first half of the plan without doing the second (e.g. building skills, gaining general understanding, doing the math, etc.); and e.g. you can make explicit your cruxes for whether this plan is worthwhile so that you can jump on opportunities to get future cruxy information.
I think you are assuming that one is consciously aware of the fact that one is making assumptions, and then choosing a strategy for how to deal with the uncertainty?
I believe that for most of the models/narratives the brain is running, this isn’t the case. Suppose that you’re inside a building and want to go out; you don’t (I assume) ever have the thought “my model of reality says that I can’t walk through walls, but maybe that’s wrong and maybe I should test that”. Rather your brain is (in this case correctly) so convinced about walking-through-walls being an impossibility that it never even occurs to you to consider the possibility. Nor is it immediately apparent that walking-through-walls being an impossibility is something that’s implied by a model of the world that you have. It just appears as a fact about the way the world is, assuming that it even occurs to you to consciously think about it at all.
More social kinds of narratives are similar. Ozy talks about this in Greyed Out Options:
You can go outside in pajamas. It isn’t illegal. No one will stop you. Most of the time, no one will even comment. Sure, you might run into someone you know, but in many cities that’s not going to happen, and anyway they’re likely to assume you have a stomach flu or otherwise have some perfectly good reason for running around in pajamas. You’re unlikely to face any negative consequences whatsoever.
But when I’ve suggested this to people, they tend to object not because they have no particular reason to go places in pajamas (pajamas are very comfortable) but because people don’t do that. It’s just not on the list of available options. If you did, you’d probably feel anxious and maybe even ashamed, because it’s genuinely hard to do something that people don’t do.
To be clear, I’m not suggesting that you should go places wearing pajamas! I don’t. I’m suggesting that you consider thoughtfully which of your options are grayed out and why.
Here are some other grayed-out options I’ve observed among people I’ve met:
Starting a conversation with a stranger.
Asking someone out.
Eating at a restaurant alone.
Walking alone at night (especially if you’re female or were raised female).
Writing a novel or a blog post.
Drawing a picture.
Submitting your writing to a publisher.
Emailing a professor to ask a question about their discipline.
Making a pull request on Github.
Editing Wikipedia.
Writing a computer program to fix a problem you have or automate a piece of work you have to do often.
Starting a recurring event like a bookclub or a meetup.
Throwing a party.
Complaining to customer service.
Opening up a broken machine and poking around in there to see if something obvious is wrong.
Googling your problem.
Negotiating your salary.
Researching a topic of interest on Google Scholar.
Doing parkour on walls etc that you find on your walk.
Planting potato eyes etc and getting food from them.
Talking to a famous person.
Singing in public.
Transitioning.
Dating people of the same gender.
Talking openly with your partner about your relationship needs.
Traveling the world and crashing on various friends’ couches instead of having a house.
Cutting off your family.
Being out about something stigmatized (your disability, your sexual orientation, your religion, your hobbies…).
Asking for your unusual preferences to be accommodated (by service workers or by people you know).
Different people have different grayed-out options, and I think this is actually a really common reason that people behave differently from each other. The reason that I write blog posts and other people don’t is not that I’m good at writing and they’re not; it’s that writing a blog post about something I’m thinking about is on my action menu and it’s not on theirs.
Ozy mentions that these kinds of options may seem unavailable for two reasons. One is that it never occurs to a person that it’d even be possible for them to do such a thing. Or, if the possibility is pointed out to them, it just seems true that they can’t do such a thing, due to a sense of “who does that” or the thought just feeling very overwhelming or something else. (I would add to that list the sense of “I’m not the kind of a person who would/could do that”.)
That’s analogous to the way that the possibility of walking through walls either never occurs to you, or if it does, you’ll (correctly) just feel that it’s true that walking through walls is impossible, so never worth considering. But whereas we can be very sure that walking through walls really is impossible, there are quite a few things that people’s minds automatically dismiss as impossible even if the options are pointed out to them. Not because they really are impossible, but because the people have such a strong narrative/model of themselves saying it’s impossible, and the certainty their brain has in the model makes the model look like reality.
So I’d say that if you are at the point where your brain has tagged something as having sufficient uncertainty that it treats it as an uncertain model, you’re already most of the way there. The vast majority of the narratives anyone has never get tagged as narratives. The predictive processing thing just happens under the hood and the narratives are treated as facts until there’s enough conflicting information that the conflict rises to the level of conscious awareness.
The topic of the conversation is whether or not you can decide to bring things into explicit uncertainty, not whether or not things are already explicitly uncertain. I’m saying that you can decide to, in general, have incoming falsifying information bring uncertainty into explicitness and falsify incorrect models. This is a counterclaim to the version of the claim
if you have experiences that contradict the lens, they will tend to be dismissed as noise
that implies that you can’t decide not to “view your life through narratives”, which you seem to be saying.
(FWIW I’ve done almost all of the things on that list; the ones I haven’t done mostly don’t apply to me (I mean, I’ve explicitly considered them and didn’t feel like doing them).)
Note that the bit you quoted was about something I said might happen, not that it will inevitably happen. I was describing a possible failure mode that one may fall victim to, but I don’t mean to say that it’s the only possible outcome.
I do think that you can reduce the number of narratives that you view your life through, but it’s not something that you can just decide to do. Rather it requires an active and ongoing effort of learning to identify what your narratives are, so that you could become sufficiently conscious of them to question them.
I don’t see a “might” in this paragraph:

More generally, I’d say that part of what a narrative is, is something like your model of yourself, which you then use for guiding your decisions (e.g. you think that you like spontaneity, so you avoid doing any organization, since your narrative implies that you wouldn’t like it). It then establishes a lens that you interpret your experience through; if you have experiences that contradict the lens, they will tend to be dismissed as noise as long as the deviations are small enough.
It says that it (the model? the narrative?) will (definitely?) establish a lens that tends to dismiss incoming information. There’s a “tends” there, but it’s not across populations; it says anyone with a “model” like this will often dismiss incoming information. I’m saying here that models are really quite separate from narratives, and models don’t dismiss incoming information. Not sure whether you see this point, and whether you agree with it.
You say “might” in the next paragraph:
you might adopt a self-model of “the kind of a person who doesn’t have narratives” and interpret all of your experiences through that lens
I’m saying that this is imprecise in an important and confusing way: a thing that you’re “adopting” in this sense can’t be just a model (e.g. a self-model).
Rather it requires an active and ongoing effort of learning to identify what your narratives are, so that you could become sufficiently conscious of them to question them.
So, it’s clear that if your behavior is governed by stories, then in order for your behavior to end up not governed by stories you’d have to go through a process like this. I think that it makes sense for the OP to say that viewing their life through narratives is a mistake; do you agree with that? The word “ongoing” in your statement seems to imply that one’s behavior must be somewhat governed by stories; is that what you think? If so, why do you think that?
Ah sorry, you’re right; the “might” did indeed come later.
I’m saying here that models are really quite separate from narratives, and models don’t dismiss incoming information. Not sure whether you see this point, and whether you agree with it.
Maybe? I do agree that we might use the word “model” for things that don’t necessarily involve narratives or dismissing information; e.g. if I use information gathered from opinion polls to model the results of the upcoming election, then that doesn’t have a particular tendency to dismiss information.
In the context of this discussion, though, I have been talking about “models” in the sense of “the kinds of models that the human brain runs on, which I’m assuming to work roughly the way the human brain is described to work under predictive processing (and thus to have a tendency to sometimes dismiss information)”. And the things that I’m calling “narratives” form a very significant subset of those.
I think that it makes sense for the OP to say that viewing their life through narratives is a mistake; do you agree with that? The word “ongoing” in your statement seems to imply that one’s behavior must be somewhat governed by stories; is that what you think? If so, why do you think that?
I do think that one’s behavior must be somewhat governed by narratives, since I think of narratives as being models, and you need models to base your behavior on. E.g. the person I quoted originally had “I am a disorganized person” as their narrative; then they switched to “I am an organized person” narrative, which produced better results due to being more accurate. What they didn’t do was to stop having any story about their degree of organization in the first place. (These are narratives in the same sense that something being a blegg or a rube is a narrative; whether something is a blegg or a rube is a mind-produced intuition that we mistakenly take as a reflection of how Something Really Is.)
Even something like “I have a self that survives over time” seems to be a story, and one which humans are pretty strongly hardwired to believe in (on the level of some behaviors, if not explicit beliefs). You can come to see through it more and more through something like advanced meditation, but seeing through it entirely seems to be a sufficiently massive undertaking that I’m not clear if it’s practically feasible for most people.
Probably the main reason why I think this is the experience of having done a fair amount of meditation and therapy, and those leading me to notice an increasing number of things about myself or the world that seemed just like facts but were actually stories/models. (Some of the stories are accurate, but they’re still stories.) And this seems to make theoretical sense in light of both what I know about the human brain and the nature of intelligence in general. And it also matches the experiences of other people who have investigated their experience using these kinds of methods.
In this light, “viewing your life through narratives is a mistake” seems something like a category error. A mistake is something that you do, that you could have elected not to do if you’d known better. But if narratives are something that your brain just does by default, it’s not exactly a mistake you’ve made.
That said, one could argue that it’s very valuable to learn to see all the ways in which you really do view your life through narratives, so that you could better question them. And one could say that it’s a mistake not to invest effort in that. I’d be inclined to agree with that form of the claim.
Ok thanks for clarifying. Maybe this thread is quiescable? I’ll respond, but not in a way that adds much, more like just trying to summarize. (I mean feel free to respond; just to say, I’ve gotten my local question answered re/ your beliefs.) In summary, we have a disagreement about what is possible; whether it’s possible to not be a predictive processor. My experience is that I can increase (by detailed effort in various contexts) my general (generalizable to contexts I haven’t specifically made the effort for) tendency to not dismiss incoming information, not require delusion in order to have goals and plans, not behave in a way governed by stories.
if narratives are something that your brain just does by default
Predictive processing may or may not be a good description of low-level brain function, but that doesn’t imply what’s a good idea for us to be and doesn’t imply what we have to be, where what we are is the high-level functioning, the mind / consciousness / agency. Low-level predictive processors are presumably Turing complete and so can be used as substrate for (genuine, updateful, non-action-forcing) models and (genuine, non-delusion-requiring) plans/goals. To the extent we are or can look like that, I do not want to describe us as being relevantly made of predictive processors, like how you can appropriately understand computers as being “at a higher level” than transistors, and how it would be unhelpful to say “computers are fundamentally just transistors”. Like, yes, your computer has a bunch of transistors in it and you have to think about transistors to do some computing tasks and to make modern computers, but, that’s not necessary, and more importantly thinking about transistors is so far from sufficient to understand computation that it’s nearly irrelevant.
one could argue that it’s very valuable to learn to see all the ways in which you really do view your life through narratives, so that you could better question them. And one could say that it’s a mistake not to invest effort in that. I’d be inclined to agree with that form of the claim.
For predictive processors, questioning something is tantamount to somewhat deciding against behaving some way. So it’s not just a question of questioning narratives within the predictive processing architecture (in the sense of comparing/modifying/refactoring/deleting/adopting narratives), it’s also a question of decoupling questioning predictions from changing plans.
Sorry, I meant that humans have narratives they tell about themselves in their action within society. Like, you might want to do fairly abstract ML to build self-driving cars, but you’ll often say sentences like “My job is to build self-driving cars” or “My job is to move humanity to electric vehicles” or whatever it is when someone asks you “What’s your job?” or, more broadly, questions about how to relate to you.
I think I’m still not seeing what you’re saying, though maybe it’s not worth clarifying further. You wrote:
One alternative model is that humans run on narratives, and as such you need to be good at building good narratives for yourself that cut to what you care about and capture key truths, as opposed to narratives that are based solely on what people will reward you for saying about yourself, or that are otherwise primarily socially mediated rather than mediated by your goals and by how reality works.
“[...]and it turns out this just worked to hide narratives from my introspective processes, and I hurt myself by acting according to pretty dumb narratives that I couldn’t introspect on.[...]”
This sounds like your model is something like (at a possibly oversimplified gloss): you have to explain to other people what you’re doing; you’ll act according to what you say to other people that you’re doing; therefore it’s desirable to say to other people descriptions of your behavior that you’d want to act according to. Is that it?
I’m saying one might have an update like “oh wait, I don’t have to act according to the descriptions of my behavior that I give to other people.” That sounds like what TurnTrout described. So the question is whether that’s a possible thing for a human to be like, and I suspect you’re missing a possibility here. You wrote:
Maybe you will pull the full transhumanist here and break free from the architecture, but I have a lot of prior probability mass on people making many kinds of “ignore the native architecture” mistake.
So I was arguing that humans do lots of successful stuff not based on acting according to what they tell other people they’re doing, like figuring out how to drop a rock on a nut, and therefore that one might reasonably hope to live life, or live the part that matters to the one (bringing about the world that one wants), not according to narratives.
I like your paraphrase of my model.

I’m saying one might have an update like “oh wait, I don’t have to act according to the descriptions of my behavior that I give to other people.”
Yes, it’s great to realize this possibility and see the wider space of options available to you; it’s very freeing.
At the same time, I think it’s also just false, in many bigger systems of humans, that I don’t have to act according to the descriptions of my behavior that I give to other people. When you are part of a company, a church, a school, a community club, a country with laws, lots of parts of that system will move according to the narratives you tell them about yourself, and your options will be changed, and constraints added/removed. Naively not playing the part people expect you to play will lead to you being viewed as deceptive, untrustworthy, and a risk to be around.
I agree most parts of reality aren’t big piles of humans doing things, and I agree that as your plans come increasingly to rest on non-narrative parts of reality, they gain great power and don’t involve much of this sort of social cognitive work. But most of my probability mass is currently on the belief that it would be a mistake for someone like TurnTrout to imagine their plans are entirely in one realm and not the other, and that they do not need to carefully process and update the narratives they tell about themselves.
But most of my probability mass is currently on the belief that it would be a mistake for someone like TurnTrout to imagine their plans are entirely in one realm and not the other, and that they do not need to carefully process and update the narratives they tell about themselves.
On priors this seems right, yeah. I’d say that “carefully process and update the narratives they tell about themselves” can and in some cases should include a lot more of “okay, so I was doing that stuff because of this narrative; can I extract the motives behind that narrative, filter the ones that seem actually worthwhile on reflection, and reference my future plans to consequentially fulfilling those motives?”. The answer isn’t always “yes”, but when it is, you can move in the direction of being less controlled by your narratives in general.
Regarding trustworthiness, that seems right, but it can be taken as a recommendation to be more transparently not-to-be-relied-upon-in-this-particular-way, rather than to more strongly regulate your behavior.
ETA: But I mean, this perspective says that it’s sensible to view it as a mistake to be viewing your life primarily through narratives, right? Like, the mistake isn’t “oh I should’ve just dropped all my narratives, there was no good reason I had them in the first place”, but the mistake is “oh there’s much more desirable states, and it’s a mistake to not have been trending towards those”.
I agree, it is a mistake to view the narratives as primary, I think. Sort of a figure ground inversion must come, to be in contact with reality.