Slack for your belief system
Follow-up to Zvi’s post on Slack
You can have Slack in your life. But you can also have Slack in your belief system.
Initially, this seems like it might be bad.
Won’t Slack result in a lack of precision? If I give myself Slack to believe in whatever, won’t I just end up with a lot of wrong beliefs? Shouldn’t I always be trying to decrease the amount of Slack in my beliefs, always striving to walk the narrow, true path?
Claims:
1. For some things, the only way to stumble upon the Truth is to have some Slack. In other words, having no Slack in your belief system can result in getting stuck at local optima.
2. Having Slack allows you to use fake frameworks in a way that isn’t epistemically harmful.
3. If you are, in fact, just correct, I guess you should have zero Slack. But—just checking—are you ALSO correct about how you come to Know Things? If your way of coming to conclusions is even a little off, giving yourself zero Slack might be dangerous. (Having zero Slack in your meta process multiplies the problem of no-Slack to all downstream beliefs.)
4. I’m willing to make the more unbacked, harder-to-define claim that there exists no individual human alive who should have zero Slack in their beliefs, on the meta level. (In other words, no human has a truth-seeking process that will reliably get all the right answers.)
[ I want to note that I fully believe I could be wrong about all four claims here, or thinking about this in the entirely wrong way. So fight me. ]
Now, I’m going to specifically discuss Slack in one’s meta process.
So, while I can apply the concept of Slack to individual beliefs themselves (aka “holding beliefs lightly”), I am applying the concept more to the question of “How do I come to know/understand anything, or call a thing true?”
So, I’m not discussing examples of “I believe X, with more or less Slack.” I’m discussing the difference between, “Doing a bunch of studies is the only way to know things” (less Slack) vs. “Doing a bunch of studies is how I currently come to know things, but I’m open to other ways” (more Slack).
The less Slack there is in your process for forming beliefs, the more constraints you have to abide by before being able to claim you’ve come to understand something.
Examples of such constraints include:
I only buy it if it has had at least one peer-reviewed RCT.
This framework seems like it’ll lead to confirmation bias, so I will ignore it.
If it involves politics or tribalism or status, it can’t have any truth to it.
If it’s self-contradictory / paradoxical, it has to be one way or the other.
I can’t imagine this being true or useful because my gut reaction to it is negative.
I don’t feel anything about it, so it must be meaningless.
This doesn’t conform to my narrative or worldview. In fact it’s offensive to consider, so I won’t.
If I thought this, it would likely result in harm to myself or others, so I can’t think it.
It’s only true if I can prove it.
It’s only worth considering if it’s been tested empirically.
I should discard models that aren’t made of gears.
Note that sometimes, it is good to have such constraints, at least for now.
Not everyone can interact with facts, claims, and beliefs without some harm to their epistemics. In fact, most people cannot, I claim. (And further, I believe this to be one of the most important problems in rationality.)
That said, I see a lot of people’s orientations as:
“My belief-forming process says this thing isn’t true, and in fact this entire class of thing is likely false and not worth digging into. You seem to be actively engaging with [class of thing] and claiming there is truth in it. That seems highly dubious—there is something wrong with your belief-forming process.”
This is a reasonable stance to take.
After all, lots of things aren’t worth digging into. And lots of people have bad truth-seeking processes. Theirs may very well be worse than yours; you don’t have to consider something just because it’s in front of you.
But if you notice yourself unwilling to engage with [entire class of thing]… to me this indicates something is suboptimal.
Over time, it seems good to aim for being able to engage with more classes of things, rather than fewer.
If something is politically charged, yes, your beliefs are at risk, and you may be better off avoiding the topic altogether. But—wouldn’t it be nice, if one day, you could wade through the mire of politics and come out the other side, clean? Epistemics intact? Even better, you come out the other side having realized new truths about the world?
I guess if I’m going to be totally honest, the reason I am saying this is because I feel annoyed when people dismiss entire [classes of thing] for reasons like, “That part of the territory is really swampy and dangerous! Going in there is bad, and you’re probably compromised.”
At least some of the time, what’s actually going on is that the person being dismissed has simply figured out how to navigate swamps.
But instead of recognizing that, I feel like the dismisser lacks Slack in their belief-forming process and is also trying to enforce that lack of Slack onto others.
From the inside, I imagine this feels like, “No one can navigate swamps, and anyone who says they are is probably terribly mistaken or naive about how truth-seeking works, so I should inform them of the danger.”
From the inside, Slack will feel incorrect or potentially dangerous. Without constraints, the person may feel like they’ll go off the rails—maybe they’ll even end up believing in *gasp* horoscopes or *gasp* the existence of a Judeo-Christian God.
My greatest fear is not having false beliefs. My greatest fear is getting trapped into a particular definition of truth-seeking, such that I permanently end up with many false beliefs or large gaps in my map.
The two things I do to avoid this are:
a) Learn more skills for navigating tricky territories. For example, one of the skills is noticing a belief that’s in my mind because it would be beneficial for me to believe it, i.e. it makes me feel good in a certain way or I expect good things to happen as a result—say, it’d make a person like me more if I believed it. This likely requires a fair amount of introspective capacity.
b) Be open to the idea that other people have truth-seeking methods that I don’t. That they’re seeing entire swaths of reality I can’t see. Be curious about that, and try to learn more. Develop taste around this. Maintain some Slack, so I don’t become myopic.
Someone mentioned Paul Feyerabend in response to this post. He was in favor of having slack in science, and I resonate strongly with some of these descriptions:
The following is also a nice thing to keep in mind. Although less about slack and more about the natural pull to use tools like science to further political/moral aims.
The following is more controversial, and I don’t fully agree with it. But it contains some interesting thought nuggets.
My more charitable interpretation is that Science is a nicely rigorous method for truth-seeking, but because of its standards for rigor, it ends up missing things (like the ‘ki’ example from In praise of fake frameworks).
Also, I sense that the elitist attitudes within science / rationality / EA are not entirely justified. (Possibly this elitism is even counter to the stated goals of each.) I feel like I often witness ‘science’ or ‘rationality’ getting hijacked for goals unrelated to truth-seeking. And I’m currently a tiny bit skeptical of the confidence of EA’s moral authority.
The opening Feyerabend quote sounds very similar to (Scott’s review of) Kuhn’s Structure of Scientific Revolutions. Related: Jacob’s post on the Copernican revolution from the inside.
Attempt at definition.
If I have less slack in my belief system, that means I have more constraints in what counts as ‘evidence’ for a given statement or more preconceptions about what can count as ‘true’ or ‘real’.
Either, I can be looking for specific signs/evidence/proofs/data (“I will only consider X if you can prove Y.” “I will only consider X if you show me a person who flosses with their shoelace.”).
Or, I can be looking for certain categories or classes of evidence (“I will only consider X if there are studies showing X.” “I will only consider X if your argument takes a certain form.” “I will only consider X if 5 experts agree.” Etc.)
Sometimes, it’s better to have less slack. It makes sense for certain fields of mathematics to have very little slack.
Other times, it hinders progress.
Are you trying to define Slack in Your Belief System as “this is what those words together naturally mean” or are you defining it as “this is a useful concept to think about and this is what I choose to name it”?
Before reading your take, I thought about what ‘slack in your belief system’ would mean to me, and I came up with a lot of different things it could mean. Mostly my System-1 response was that SIYBS links back into Anna’s concept that flinching away from truth is about protecting the epistemology: What beliefs could you change without changing everything? What must you defend lest huge chains of logic unravel and mindsets shift in disruptive ways? But also the simple, ‘how tightly am I holding onto these beliefs’ type of thing, a kind of uncertainty, how much you would update on new evidence in an area at all. That does go hand in hand with what types of evidence you’d update on. Often I think people have places where the ‘wrong’ kind of evidence is allowed, because there isn’t ‘right’ evidence that needs to be dislodged, so you have more slack in those places, and so on. Kind of a levels-of-evidence thing. Also could be thought of as how well your system can adjust to counterfactuals or fake frameworks or ad argumentos, and still give reasonable answers.
I do think there’s a specific kind of lack of Slack where you decide that something is Scientific and therefore you can only change your beliefs based on Proper Scientific Studies, and that this is very easy to take too far (evidence is always evidence). What this is really saying is that your prior is actually damn close to 0 or 1 at this point, so other types of evidence aren’t likely to cut it, and/or that you think people are trying to trick you, so you have to disregard such evidence.
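To make that point about priors concrete, here is a small worked example with hypothetical numbers (my own illustration), using Bayes’ rule in odds form:

posterior odds = prior odds × likelihood ratio

If your prior is 0.999 (odds of 999:1) and you meet evidence carrying a 1:10 likelihood ratio against, the posterior odds are 99.9:1, i.e. a probability of about 0.99. The same evidence applied to a 0.5 prior (odds of 1:1) gives posterior odds of 1:10, i.e. about 0.09. Moderate contrary evidence barely dents a near-certain prior, which is one way “only Proper Scientific Studies will move me” can amount to sitting at a prior very close to 0 or 1.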
Anyway, it’s certainly an interesting thing to think about, and this is already pretty rambly, so I’ll stop here.
Promoted to the frontpage. I like this post, and think it’s an interesting extension and clarification of the Slack concept, though I still didn’t walk away with a very precise sense of what “Slack in one’s belief system” actually means, and I still have a few competing interpretations of the details. But I like the point overall. I would be glad to see people in the comments or in future posts try to give a more rigorous definition of Slack in this context.
I was also somewhat unsure about the Slack-In-Belief-System definition and thought it’d have been nice to open with a sentence or two clarifying that.
It sounded something like a combination of “having some chunk of probability space for ‘things I haven’t thought of’ or ‘I might be wrong’”, as well as something vaguer like “being a bit flexible about how you come to believe things.”
Yes! And it should come right between these two lines
Is it bad? I don’t know. You haven’t told me what you mean by Slack in belief system yet!
This is one of those posts where I wish for three examples of the thing being described, because I see two options:
1. This is a weakman of the position I hold, in which I seek ways to draw a map that corresponds to the territory, have my own estimations of what works and what doesn’t, and disagree with someone about that, and that someone, instead of providing evidence that their method yields good predictions or insights, just says I should have more Slack.
All your descriptions of why to believe in things sound anti-Bayesian. It’s not a boolean believe/disbelieve; update yourself incrementally! If I believe something provides zero evidence, I will not update; if the evidence is dubious, I will update only a little. And then the question is how much credence you assign to what evidence, and what methods you use to find evidence.
2. It’s a different-worlds situation, where the post writer encountered a problem I didn’t.
And I have no way to judge which it is without at least one, and preferably more, actual examples of the interaction, ideally linked to rather than described by the author.