A friend recently complained to me about this post: he said that a lot of what people do under the heading “belief” is nonsense, and that this post doesn’t acknowledge this adequately. He might be right!
Given his complaint, perhaps I ought to say clearly:
1) I agree: there is indeed a lot of nonsense out there masquerading as sensible/useful cognitive patterns. Some of it aims to wirehead or mislead the self; some aims to deceive others for local benefit; lots of it is simple error.
2) I agree also that a fair chunk of nonsense adheres to the term “belief” (and the term “believing in”). This is because there’s a real, useful pattern of possible cognition near our concepts of “belief”, and because nonsense (/lies/self-deception/etc) likes to disguise itself as something real.
3) But, to sort sense from nonsense, we need to understand what the real (useful, likely present in the cogsci books of alien intelligences) pattern near our “beliefs” actually is. If we don’t:
a) We’ll miss out on a useful way to think. (This is the biggest one.)
b) The parts of the {real, useful way to think} that fall outside our conception of “beliefs” will get practiced noisily anyway: sometimes in a true fashion, sometimes mixed (intentionally or accidentally) with error or local manipulations. We won’t be able to excise the deceptive versions easily or fully, because it’ll be kinda clear there’s something real nearby that our concept of “beliefs” doesn’t do justice to, and so people (including us) won’t be willing to abandon the so-called “nonsense” (which isn’t entirely nonsense) in favor of our concept of “beliefs”. So it’ll be harder to expel actual error.
4) I’m pretty sure that LessWrong’s traditional concept of “beliefs” as “accurate Bayesian predictions about future events” is only half-right, and that we want the other half too, both for (3a) type reasons, and for (3b) type reasons.
a) “Beliefs” as accurate Bayesian predictions is exactly right for beliefs/predictions about things unaffected by the belief itself — beliefs about tomorrow’s weather, or organic chemistry, or the likely behavior of strangers.
b) But there’s a different “belief-math” (or “believing-in math”) that’s relevant for coordinating pieces of oneself in order to take a complex action, and for coordinating multiple people so as to run a business, community, or other collaborative endeavor. I think I lay it out here (roughly; I don’t have all the math), and I think it matters.
The old LessWrong Sequences-reading crowd *sort of* knew about this: folks talked about how beliefs about matters directly affected by the beliefs could be self-fulfilling or self-undermining prophecies, and how Bayes-math wasn’t well-defined for such cases. But when I read those comments, I thought they were discussing an uninteresting edge case. The idioms by which we organize complex actions (within a person, and between people) are part of the bread and butter of how intelligence works; they are not an uninteresting edge case.
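To make the self-fulfilling/self-undermining distinction concrete, here is a minimal toy sketch (my own illustration, not anything from the post; the response curves are invented for the example): suppose the actual probability of success depends on the confidence p that a team coordinates around, via some curve f. Then a belief that matches reality is a fixed point p* = f(p*), and ordinary conditionalization on fixed evidence doesn’t tell you which p to adopt, because the territory moves with the map.

```python
# Toy model (hypothetical curves, purely for illustration):
# the true success probability depends on the confidence
# the team actually coordinates around.

def f_self_fulfilling(p):
    # More shared confidence -> more effort -> better odds.
    return 0.3 + 0.6 * p  # increasing; fixed point at p = 0.75

def f_self_undermining(p):
    # More confidence -> complacency -> worse odds.
    return 0.9 - 0.6 * p  # decreasing; fixed point at p = 0.5625

def consistent_belief(f, p=0.5, iters=200):
    """Find a consistent belief p* = f(p*) by damped iteration."""
    for _ in range(iters):
        p = 0.5 * p + 0.5 * f(p)
    return p

print(consistent_belief(f_self_fulfilling))   # ~0.75
print(consistent_belief(f_self_undermining))  # ~0.5625
```

In the increasing case, high confidence partly pays for itself; in the decreasing case, it undermines itself; and in both, the “right” belief is a consistency condition rather than a prediction about a world that holds still while you predict it.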
Likewise, people sometimes talked (on LW in the past) about how they were intentionally holding false beliefs about their start-ups’ success odds; they were generally advised not to be clever, and some commenters dissented from this advice. But IMO the “believing in” concept lets us distinguish:
(i) the useful thing such CEOs were up to (holding a target, in detail, that they and others can coordinate action around);
(ii) how to do this without having or requesting false predictions at the same time; and
(iii) how sometimes such action on the part of CEOs/etc is basically “lying” (and “demanding lies”), in the sense that it is designed to extract more work/investment/etc from “allies” than said allies would volunteer if they understood the process generating the CEO’s behavior (and to demand that their team members be similarly deceptive/extractive). And sometimes it’s not. And there are principles for telling the difference.
All of which is sort of to say that I think this model of “believing in” has substance we can use for the normal human business of planning actions together, and isn’t merely propaganda to mislead people into thinking human cognition is less buggy than it is. Also I think it’s as true to the normal English usage of “believing in” as the historical LW usage of “belief” is to the normal English usage of “belief”.
I agree with the sentence you quote from Vervaeke (“[myths] are symbolic stories of perennial patterns that are always with us”) but mostly-disagree with “myths … encapsulate some eternal and valuable truths” (your paraphrase).
As an example, let’s take the story of Cain and Abel. IMO, it is a symbolic story containing many perennial patterns:
When one person is praised, the not-praised will often envy them
Brothers often envy each other
Those who envy often act against those they envy
Those who envy, or do violence, often lie about it (“Am I my brother’s keeper?”)
Those who have done or endured strange things sometimes have a “mark of Cain” that leads others to stay at a distance from them and leave them alone
I suspect this story and its patterns (especially back when there were few stories passed down and held in common) helped many to make conscious sense of what they were seeing, and to share their sense with those around them (“it’s like Cain and Abel”). But this help (if I’m right about it) would’ve been similar to the way words in English (or other natural languages) help people make conscious sense of what they’re seeing and communicate that sense: myths gave people short codes for common patterns, and made those patterns available for inclusion in hypotheses and discussions. But myths didn’t much help with making accurate predictions in one shot, the way “eternal and valuable truths” might suggest.
(You can say that useful words are accurate predictions, a la “cluster structures in thingspace”. And this is technically true, which is why I am only mostly disagreeing with “myths encapsulate some eternal and valuable truths”. But a good word helps differently than a good natural law or something does).
To take a contemporary myth local to our subculture: I think HPMOR is a symbolic story that helps make many useful patterns available to conscious thought/discussion. But it’s richer as a place to see motifs in action (e.g.
the way McGonagall initially acts out the picture of herself that lives in her head; the way she learns to break her own bounds
) than as a source of directly stateable truths.