This deep psychological need to latch onto some story, any story, to explain what we don’t understand seems to me to tie back into the Bayesian Brain Hypothesis. Basically, our brains are constantly and uncontrollably generating hypotheses to explain the evidence we encounter in the world, and checking which ones predict our experiences with the greatest likelihood (weighted by our biological and cultural priors, of course). These hypotheses come in the form of stories because stories have the minimum level of internal complexity to explain the complex phenomena we experience (which, themselves, we internalize as stories). Choosing the “best” explanation, of course, follows Bayes’ formula:
$$P(h \mid e) = \frac{P(e \mid h)\,P(h)}{\sum_i P(e \mid h_i)\,P(h_i)}$$
A few problems with this:
We might just be terrible at choosing good priors (P(h)). Occam’s Razor / Solomonoff Induction just isn’t that intuitive to most humans. Most people find consciousness (which is familiar) to be simpler than neuroscience (which is alien), so they see no problem hypothesizing disembodied spirits, yet they scoff at the idea of humans being no more than matter. Astrology sounds reasonable when you have no reason to think the stars and planets shouldn’t have personalities and meddle in your personal life the way everyone else does, so long as you don’t try to figure out how that would actually work at a mechanistic level. Statistical modeling, on the other hand, is hard for humans to grasp, and therefore feels much more complicated, and therefore much less likely to have any explanatory power a priori, at least as far as most people are concerned.
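For reference, the Solomonoff-style formalization of Occam’s Razor (stated loosely here, as a sketch) assigns prior probability by description length, so a hypothesis that takes more bits to fully specify pays an exponential penalty:

$$P(h) \propto 2^{-K(h)}$$

where $K(h)$ is the length, in bits, of the shortest program or description that pins down $h$. By that measure, a “simple” hypothesis is one that is cheap to specify, not one that feels familiar.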
Likelihood functions (P(e|h)) can be really hard to figure out. They require coming up with hypotheses that have the same causal structure as the real system they’re trying to predict. When most of our declarative mental models exist at the level of abstraction of human social dynamics, it can be difficult to accurately imagine all the interacting bodily systems and metabolic pathways that make NSAIDs (or any other drugs, to say nothing of whole foods) have the precise effect that they do.
Unfortunately, evolution didn’t equip us with very good priors for how much weight to give to unimagined hypotheses, so we end up normalizing the posterior distribution by only those hypotheses we can think of. That means the denominator in the equation above ($\sum_i P(e \mid h_i)\,P(h_i)$) is often much less than it should be, even if the priors and evidential likelihoods are all correct, because other hypotheses have not had a chance to weigh in. For most people, all future (or as-yet unheard-of) scientific discoveries are effectively given a prior probability of 0, while all the myths passed down from the tribal/religious/political elders seem to explain everything as well as anything they’ve ever heard, and so those stories get all the weight and all the acceptance.
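To make that missing-denominator problem concrete, here is a minimal sketch in Python; the hypothesis names and all the numbers are invented purely for illustration (riffing on the NSAID example above), not drawn from anything real:

```python
def posterior(priors, likelihoods):
    """Bayes' rule over a discrete hypothesis set:
    P(h|e) = P(e|h) P(h) / sum_i P(e|h_i) P(h_i)."""
    joint = {h: likelihoods[h] * priors[h] for h in priors}
    evidence = sum(joint.values())  # the normalizing denominator
    return {h: j / evidence for h, j in joint.items()}

# Hypothetical numbers, purely for illustration.
# Evidence e: "I took willow bark tea and my headache went away."
priors = {
    "spirits_healed_me": 0.30,
    "placebo_effect":    0.30,
    "cox_inhibition":    0.01,   # the mechanistic story almost nobody imagines
    "something_else":    0.39,   # mass reserved for hypotheses not yet thought of
}
likelihoods = {
    "spirits_healed_me": 0.5,
    "placebo_effect":    0.4,
    "cox_inhibition":    0.9,
    "something_else":    0.3,
}

# Normalizing over the full set keeps the familiar story modest...
full = posterior(priors, likelihoods)

# ...but dropping the hypotheses we never imagined inflates everything that remains.
considered = {h: priors[h] for h in ("spirits_healed_me", "placebo_effect")}
restricted = posterior(considered, likelihoods)

print(f"spirits, full hypothesis set:     {full['spirits_healed_me']:.2f}")
print(f"spirits, only the stories I know: {restricted['spirits_healed_me']:.2f}")
```

With the full set, the spirits story ends up below 40%; restrict the normalization to the two stories that came to mind and it climbs above 55%, with no change at all in the evidence.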
It’s unavoidable for us as humans with Bayesian-ish brains to start coming up with stories to explain phenomena, even when evidence is lacking. We just need to be careful to cultivate an awareness of when our priors may be mistaken, of when our stories don’t have sufficiently reductionist internal causal structure to explain what they are meant to explain, and of when we probably haven’t even considered hypotheses that are anywhere close to the true explanation.
I am eager to explore your answer. Why do you think that “stories have the minimum level of internal complexity to explain the complex phenomena we experience”? Is it only because you suppose we internalize phenomena as stories? Do you have any data or studies on that? What’s your understanding of a story? Isn’t a straightforward description even less complex, since you do not need a full-blown plot to depict something like a chair?
I notice that while a lot of the answer is formal and well-grounded, “stories have the minimum level of internal complexity to explain the complex phenomena we experience” is itself a story :)
Personally, I would say that any gears-level model will have gaps in its understanding, and trying to fill those gaps will require extra modeling which also has gaps, and so on forever. My guess is that part of our brain constantly tries to find the answers and fill the holes, like a small child asking “why x? …and why y?”. So when a more practical part of us wants to stop investigating, it plugs the holes with fuzzy stories that sound like understanding.
Obviously, this is also a story, so discount it accordingly...
> I notice that while a lot of the answer is formal and well-grounded, “stories have the minimum level of internal complexity to explain the complex phenomena we experience” is itself a story :)
Yep. That’s just how humans think about it: complex phenomena require complex explanations. I think “emergence,” in the sense of complexity arising from the many simple interactions of many simple components, is a pretty recent concept for humanity. People still find intelligent design more intuitive than evolution, for instance, even though the latter makes astronomically fewer assumptions and should be favored a priori by Occam’s Razor.
By “story,” I mean something like a causal/conceptual map of an event/system/phenomenon, including things like the who, what, when, where, why, and how. At the level of sentences, this would be a map of all the words according to their semantic/syntactic role, like part of speech, with different slots for each role and connections relating them together. At the level of what we would normally call “stories,” such a story map would include slots for things like protagonist, antagonist, quest, conflict, plot points, and archetypes, along with their various interactions.
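As a loose illustration (my own toy construction, not a claim about how the brain actually encodes anything), such a map could be represented as named role slots plus typed relations between them:

```python
from dataclasses import dataclass, field

@dataclass
class StoryMap:
    """A toy slot-and-relation graph: named role slots plus typed links between them."""
    slots: dict = field(default_factory=dict)      # role name -> filler concept
    relations: list = field(default_factory=list)  # (source role, relation type, target role)

    def fill(self, role, concept):
        self.slots[role] = concept

    def relate(self, src, rel, dst):
        self.relations.append((src, rel, dst))

# Filling in the map for a familiar story skeleton.
m = StoryMap()
m.fill("protagonist", "hero")
m.fill("antagonist", "dragon")
m.fill("quest", "rescue the village")
m.relate("antagonist", "threatens", "quest")
m.relate("protagonist", "pursues", "quest")
```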
In the brain, these story maps/graphs could be implemented as regions of the cortex. Just as some cortical regions have retinotopic or somatotopic maps, more abstract regions may contain maps of conceptual space, along with neural connections between subregions that represent causal, structural, semantic, or social relationships between items in the map. Other brain regions may learn how to traverse these maps in systematic ways, giving rise to things like syntax, story structure, and action planning.
I’ve suggested before (https://www.lesswrong.com/posts/KFbGbTEtHiJnXw5sk/?commentId=PHYKtp7ACkoMf6hLe) that I think these sorts of maps may be key to understanding things like language and consciousness. Stories that can be stored in and recalled from long-term memory, or transferred between minds via language, can offer a huge selective advantage, both to individual humans and to groups of humans. I think the recognition, accumulation, and transmission of stories is actually pretty fundamental to how human psychology works.
Thank you for explaining it. I really like this concept of stories because it focuses on the psychological aspect of stories as a way of understanding something, which is sometimes missing in literary perspectives. How would you differentiate between a personal understanding of a definition and a story? Would you?
My main approach to stories is to define them more abstractly as a rhetorical device for representing change. This allows me to differentiate between a story (changes), a description (states), and an argument (logical connections between assertions). I suppose that, in your understanding, all of them would be some kind of story? This differentiation could also be helpful in understanding the process of telling a story versus giving a description.
Unfortunately, you did not explain how your answer relates to “stories have the minimum level of internal complexity to explain the complex phenomena we experience”. In your answer you do not compare stories to other ways of encoding information in the brain. Are there any others, in your opinion?
> Unfortunately, evolution didn’t equip us with very good priors for how much weight to give to unimagined hypotheses, so we end up normalizing the posterior distribution by only those hypotheses we can think of
...and any effort to push against that, to assign more probability to the unknown hypotheses, is an effort in the direction of modest epistemology.
I don’t have anything to add, but this phenomenon was discussed in greater detail in Explain/Worship/Ignore. https://www.lesswrong.com/posts/yxvi9RitzZDpqn6Yh/explain-worship-ignore