Adding Up To Normality
Related: Leave a Line of Retreat, Living In Many Worlds
“It all adds up to normality.” Greg Egan, Quarantine
You’re on an airplane at 35,000 feet, and you strike up a conversation about aerodynamic lift with the passenger in your row. Things are going along just fine until they point out to you that your understanding of lift is wrong, and that planes couldn’t fly from the effect you thought was responsible.
Should you immediately panic in fear that the plane will plummet out of the sky?
Obviously not; clearly the plane has been flying just fine up until now, and countless other planes have flown as well. There has to be something keeping the plane up, even if it’s not what you thought, and even if you can’t yet figure out what it actually is. Whatever is going on, it all adds up to normality.
Yet I claim that we often do this exact kind of panicked flailing when there’s a challenge to our philosophical or psychological beliefs, and that this panic is entirely preventable.
I’ve experienced and/or seen this particular panic response when I, or others, encounter good arguments for propositions including:
My religion is not true. (“Oh no, then life and morality are meaningless and empty!”)
Many-worlds makes the most sense. (“Oh no, then there are always copies of me doing terrible things, and so none of my choices matter!”)
Many “altruistic” actions actually have hidden selfish motives. (“Oh no, then altruism doesn’t exist and morality is pointless!”)
I don’t have to be the best at something in order for it to be worth doing. (“Oh no, then others won’t value me!”) [Note: this one is from therapy; most people don’t have the same core beliefs they’re stuck on.]
(I promise these are not in fact strawmen. I’m sure you can think of your own examples. Also remember that panicking over an argument in this way is a mistake even if the proposition turns out to be false.)
To illustrate the way out, let’s take the first example. It took me far too long to leave my religion, partly because I was so terrified about becoming a nihilist if I left that I kept flinching away from the evidence. (Of course, the religion proclaimed itself to be the origin of morality, and so it reinforced the notion that anyone else claiming to be moral was just too blind to see that their lack of faith implied nihilism.)
Eventually I did make myself face down, not just the object-level arguments, but the biases that had kept me from looking directly at them. And then I was an atheist, and still I was terrified of becoming a nihilist (especially about morality).
So I did one thing I still think was smart: I promised myself not to change all of my moral rules at once, but to change each one only when (under sober reflection) I decided it was wrong. And in the meantime, I read a lot of moral philosophy.
Over the next few months, I began relaxing the rules that were obviously pointless. And then I had a powerful insight: I was so cautious about changing my rules because I wanted to help people and not slide into hurting them. Regardless of what morality was, in fact, based on, the plane was still flying just fine. And that helped me sort out the good from the bad among the remaining rules, and to stop being so afraid of what arguments I might later encounter.
So in retrospect, the main thing I’d recommend is to promise yourself to keep steering the plane mostly as normal while you think about lift (to stretch the analogy). If you decide that something major is false, it doesn’t mean that everything that follows from it has to be discarded immediately. (False things imply both true and false things!)
You’ll generally find that many important things stand on their own without support from the old belief. (Doing this for the other examples I gave, as well as your own, is left to you.) Other things will collapse, and that’s fine; that which can be destroyed by the truth should be. Just don’t make all of these judgments in one fell swoop.
One last caution: I recommend against changing meta-level rules as a result of changing object-level beliefs. The meta level is how you correct bad decisions on the object level, and it should only be updated by very clear reasoning in a state of equilibrium. Changing your flight destination is perfectly fine, but don’t take apart the wing mid-flight.
Good luck out there, and remember:
It all adds up to normality.
[EDIT 2020-03-25: khafra and Isnasene make good points about not applying this in cases where the plane shows signs of actually dropping and you’re updating on that. (Maybe there’s a new crisis in the external world that contradicts one of your beliefs, or maybe you update to believe that the thing you’re about to do could actually cause a major catastrophe.)
In that case, you can try to land the plane safely: focus on getting to a safer state for yourself and the world, so that you have time to think things over. And if you can’t do that, then you have no choice but to rethink your piloting on the fly, accepting the danger because you can’t escape it. But these experiences will hopefully be very rare for you, current global crisis excepted.]
I like this post.
This is a good, short, memorable proverb to remember the point of the post by.
I think the strongest version of this idea of adding up to normality is “new evidence/knowledge that contradicts previous beliefs does not invalidate previous observations.” Therefore, when one’s actions are contingent on things happening that have already been observed to happen, things add up to normality because it is already known that those things happen, regardless of any new information.

But this strict version of ‘adding up to normality’ does not apply in situations where one’s actions are contingent on unobservables. In cases where new evidence/knowledge may cause someone to dramatically revise the implications of previous observations, things don’t add up to normality. Whether this is the case or not for you as an individual depends on your gears-level understanding of your observations.
I somewhat disagree with this. I think, in these kinds of situations, the recommendation should be more along the lines of “promise yourself to make the best risk/reward trade-off you can given your state of uncertainty.” If you’re flying in a plane that has a good track record of flying, definitely don’t touch anything, because it’s riskier to break something that has evidence of working than it is rewarding to fix things that might not actually work. But if you’re flying in the world’s first plane and realize you don’t understand lift, land it as soon as possible.
Some Reasons Things Add Up to Normality
If you think the thing you don’t understand might be a Chesterton’s Fence, there’s a good chance it will add up to normality.
If you think the thing you don’t understand can be predicted robustly by inductive reasoning, and you only care about being able to accurately predict the thing itself, there’s a good chance it will add up to normality.
Some Examples where Things Don’t Add Up
Example #1 (Moral Revisionism)
You’re an eco-rights activist who has tirelessly worked to make the world a better place by protecting wildlife because you believe animals have the right to live good lives on this planet too. Things are going just fine until your friend claims that R-selection implies most animals live short horrible lives and you realize you have no idea whether animals actually live good lives in the wild. Should you immediately panic in fear that you’re making things worse?
Yes. Whether or not the claim in question is accurate, your general assumption that protecting wildlife implies improved animal welfare was not well-founded enough to address significant moral risk. You should really stop doing wildlife stuff until you get this figured out or you could actually cause bad things to happen.
Example #2 (Prediction Revisionism)
You’ve built an AGI and, with all your newfound free time and wealth, you have a lengthy chat with a mathematician. Things are going along just fine until they point out to you that your understanding of the safety measures used to ensure alignment is wrong, and that the AGI couldn’t be aligned by the safety measures you thought were responsible. Should you immediately panic in fear that the AGI will destroy us all?
Yes. The previous observations are not sufficient to make reliable predictions. But note that a random bystander who is uninvolved with AGI development would be justified in not panicking—their gears-level understanding hinges on believing that the people who created the AGI are competent enough to address safety, not on believing that the specific details designed to make the AGI safe actually work.
I agree that carefully landing the plane is better than maintaining the course if catastrophic outcomes suddenly seem more plausible than before.
Obviously it applies if you’re the lead on a new technological project and suddenly realize a plausible catastrophic risk from it.
I don’t think it applies very strongly in your example about animal welfare, unless the protagonist has unusually high leverage on a big decision about to be made. The cost of continuing to stay in the old job for a few weeks while thinking things over (especially if leaving and then coming back would be infeasible) is plausibly outweighed by the value of information thus gained.
Yeah, but my point is not about catastrophic risk; it’s about the risk/reward trade-off in general. You can have risk>reward in scenarios that aren’t catastrophic. Catastrophic risk is just a good general example of where things don’t add up to normality (catastrophic risks by nature correspond to not-normal scenarios and also coincide with high risk). Don’t promise yourself to steer the plane mostly as normal; promise yourself to pursue the path that reduces risk over all outcomes you’re uncertain about.
Good point; it really depends on the details of the example, but this is just because of the different risk-reward trade-offs, not because you ought to always treat things as adding up to normality. I’ll counter that while you shouldn’t leave the job (high risk, hard to reverse), you should see if you could use your PTO as soon as possible so you can figure things out without potentially causing further negative impact. It all depends on the risk-reward trade-off:
If stopping activism corresponds to something like leaving a job, which is hard to reverse, doing so involves taking on a lot of risk if you’re uncertain and waiting for a bit can reduce that risk.
If stopping activism corresponds to something like shifting your organization’s priorities, and your organization’s path can be reversed, then stopping work (after satisfying all existing contracts, of course) is pretty low risk and you should stop.
If stopping activism corresponds to donating large amounts of money (in earning-to-give contexts), your strategy can easily be reversed and you should stop now.
This is true even if you only have “small” amounts of impact.
Caveat:
People engage in policies for many reasons at once. So if you think the goal of your policy is X, but it’s actually X, Y, and Z, then dramatic actions justified on uncertainty about X alone will probably be harmful due to Y and Z effects, even if it’s the appropriate decision with respect to X. Because it’s easy to notice why a thing might go wrong (like X) and hard to notice why it’s going right (like Y and Z), adding-up-to-normality serves as a way to generally protect Y and Z.
Don’t know if you saw, but I updated the post yesterday because of your (and khafra’s) points.
Also, your caveat is a good reframe of the main mechanism behind the post.
I do still disagree with you somewhat, because I think that people going through a crisis of faith are prone to flailing around and taking naive actions that they would have reconsidered after a week or month of actually thinking through the implications of their new belief. Trying to maximize utility while making a major update is safe for ideal Bayesian reasoners, but it fails badly for actual humans.
In the absence of an external crisis, taking relatively safe actions (and few irreversible actions) is correct in the short term, and the status quo is going to be reasonably safe for most people if you’ve been living it for years. If you can back off from newly-suspected-wrong activities for the time being without doing so irreversibly, then yes that’s better.
Ah, yeah, I agree with this observation, and it could be good to just assume things add up to normality as a general defense against people rapidly taking naive actions. Scarcity bias is a thing, after all, and if you get into a mindset where now is the time to act, it’s really hard to prevent yourself from acting irrationally.
Huzzah, convergence! I appreciate the points you’ve made.
I agree with this, but a counterpoint is that it’s very hard for people to change longstanding habits and behaviors at all, and a major internal update is sometimes the only moment when most people can manage significant behavior changes.
This reminds me of the Discourse on Method.
(This is probably 5% of the text. There is more interesting stuff there, but it’s less relevant to this post.)
This is good, but I’d add a caveat: it works best in a situation where “normal” is obviously not catastrophic. The airplane example is central to this category. However lift works, air travel is the safest method of getting from one continent to another ever devised by humanity. If you take DMT and finally become aware of the machine elves supporting the weight of each wing, you should congratulate them on their diligence and work ethic.
The second example, morality under MWI, veers closer to the edge of “normal is obviously not catastrophic.” MWI says you’re causally disconnected from other branches. If your good and bad actions had morally equivalent effects, you would not anticipate different observations than you would under “normality.”
As lincolnquirk pointed out, Covid and other long tail events are diametrically opposed to the “normal is obviously not catastrophic” category. Instead of the object-level belief being changed by a discussion on aerodynamic theory, it’s being changed by the plane suddenly falling out of the sky, in a way that’s incompatible with our previous model.
So, I’d tweak your adage: “promise yourself to keep steering the plane mostly as normal while you think about lift, as long as you’re in the reference class of events where steering the plane mostly as normal is the correct action.”
I’d modify that, since panic can make you falsely put yourself in weird reference classes in the short run. It’s more reliable IMO to ask whether anything has shifted massively in the external world at the same time as it’s shifted in your model.
How about promise yourself to keep steering the plane mostly as normal while you think about lift, as long as the plane seems to be flying normally?
That seems to me to be a superposition of two different arguments.
There’s a philosophy-of-science claim that any theory that isn’t obviously wrong must be compatible with all observations to date.
And there’s a kind of normative claim that you shouldn’t change your behaviour a lot when you switch from one ontology to another.
The sameness of predicted observations is just the sameness of predicted observations, not of everything else. Interpretations of quantum mechanics, to be taken seriously, must agree on the core set of observations, but they can and do vary in their ontological implications. They have to differ about something, or they wouldn’t be different interpretations.
But it is entirely possible for ethics to vary with ontology. It is uncontroversial that the possibility of free will impacts ethics, at the theoretical level. Why shouldn’t the possibility of many worlds?
It may not be necessarily true, but it is not necessarily false either. It is not absurd; it is a reasonable thing to worry about … at the theoretical level.
But that doesn’t contradict the other version of “it all adds up to normality”, because that claim is a piece of practical advice. Although it seems possible for deep theoretical truths of metaphysics to impact ethics, the connection is too complex and doubtful to be allowed to affect day-to-day behaviour.
I’ve seen this advice / philosophical point a few times (and I mostly agree with it), but I don’t feel like I have a complete understanding of it. Specifically, when does this not apply?
For instance, coronavirus: to me, this doesn’t “add up to normality” and I’m trying to sort out how it’s an exception. As soon as we heard about the coronavirus, the correct action was to take prep advice seriously and go prepare; and governments moved far too slowly on updating their recommendations; etc. Life after coronavirus is super different than life before. If you were reciting “it all adds up to normality” while reading about corona, you’d probably miss some important opportunities to take quick action.
My guess is that the rule is not supposed to apply to coronavirus (perhaps it’s too object-level?) but I don’t exactly understand why not.
I think khafra and Isnasene make good points about not applying this in cases where the plane shows signs of actually dropping and you’re updating on that. (In this case, the signs would be watching people you respect tell you to start prepping immediately: act on the warning lights in the cockpit rather than waiting for the engines to fail.)
The rule might fail the covid test, but still be the correct tradeoff. Also, even though the mainstream moved relatively slowly about covid, you would not reduce your risk that much by being more vigilant than them. They were still pretty fast.
I have to disagree with you there. Thanks to my friends’ knowledge, I stopped my parents from taking a cross-country flight in early March, before much of the media reported that there was any real danger in doing so. You can’t wave off the value of truly thinking through things.
But don’t confuse “my model is changing” with “the world is changing”, even when both are happening simultaneously. That’s my point.
One very common pitfall here that you mention, and that is inherited from Eliezer’s writings, is related to the potential infinite universes and many worlds. “But many worlds implies...” No, it doesn’t. Whether some physical model of the world that is believed to be the one truth by the site founder gets some experimental evidence for or against it some day need not affect your morality here and now. Or ever, for that matter, unless there is some day a proven way to interact with those hypothetical selves.

The effects of your actions are limited to a tiny part of the observable universe, and that is only if you believe that you have free will. Which is another pitfall: “but if I don’t have free will, nothing matters.” Nothing objectively matters anyway; the meaning is inside the algorithm that is your mind. Hopefully that algorithm is robust enough to resist the security holes in it, called here infohazards and such.
It seems implausible that a physical theory of the universe, especially one so fundamental to our understanding of matter, would have literally no practical implications. The geocentric and heliocentric model of the solar system give you the same predictions about where the stars will be in the sky, but the heliocentric model gives some important implications for the ethics of space travel. Other scientific revolutions have similarly had enormous effects on our interpretation of the world.
Can you point to why this physical dispute is different?
What are those implications? I tend to prefer dealing with applications, not implications, so not sure what you mean.
Without heliocentrism (and its extension to other stars), it seems that the entire idea of going to space and colonizing the stars would not be on the table, because we wouldn’t fundamentally even understand what stuff was out there. Since colonizing space is arguably the number one long-term priority for utilitarians, heliocentrism is therefore a groundbreaking theory of immense ethical importance. Without it, we would not have any desire to expand beyond the Earth.
Colonizing the universe is indeed an application.
Well, if you have a space program and you’re dealing with crystal spheres...
Then exploring these crystal spheres without crashing into them might be a thing to do. Applications.