This is good, but I’d add a caveat: it works best in situations where “normal” is obviously not catastrophic. The airplane example sits squarely in this category. However lift works, air travel is the safest method humanity has ever devised for getting from one continent to another. If you take DMT and finally become aware of the machine elves supporting the weight of each wing, you should congratulate them on their diligence and work ethic.
The second example, morality under MWI, veers closer to the edge of “normal is obviously not catastrophic.” MWI says you’re causally disconnected from the other branches, so even if your good and bad actions had morally equivalent effects across them, you would not anticipate observations any different from those under “normality.”
As lincolnquirk pointed out, Covid and other long-tail events are diametrically opposed to the “normal is obviously not catastrophic” category. Instead of the object-level belief being changed by a discussion of aerodynamic theory, it’s being changed by the plane suddenly falling out of the sky in a way that’s incompatible with our previous model.
So, I’d tweak your adage: “promise yourself to keep steering the plane mostly as normal while you think about lift, as long as you’re in the reference class of events where steering the plane mostly as normal is the correct action.”
I’d modify that, since panic can make you falsely put yourself in weird reference classes in the short run. It’s more reliable IMO to ask whether anything has shifted massively in the external world at the same time as it’s shifted in your model.
How about: “promise yourself to keep steering the plane mostly as normal while you think about lift, as long as the plane seems to be flying normally”?