If you can imagine a potential worry, then you can generate that worry. Rationalism is, in part, the skill of never being predictably surprised by things you already foresaw.
It may be that you need to “wear another hat” in order to pull that worry out of your brain, or to model another person advising you to get your thoughts to flow that way, but whatever your process, anything you can generate for yourself is something you can foresee and consider. This aspect of rationalism is the art of “mining out your future cognition,” to exactly the extent that you can foresee it, leaving whatever’s left over as a mystery, to be resolved by updating on new observations.
For a true Bayesian, it is impossible to seek evidence that confirms a theory. There is no possible plan you can devise, no clever strategy, no cunning device, by which you can legitimately expect your confidence in a fixed proposition to be higher (on average) than before. You can only ever seek evidence to test a theory, not to confirm it.
This realization can take quite a load off your mind. You need not worry about how to interpret every possible experimental result to confirm your theory. You needn’t bother planning how to make any given iota of evidence confirm your theory, because you know that for every expectation of evidence, there is an equal and opposite expectation of counterevidence. If you try to weaken the counterevidence of a possible “abnormal” observation, you can only do it by weakening the support of a “normal” observation, to a precisely equal and opposite degree. It is a zero-sum game. No matter how you connive, no matter how you argue, no matter how you strategize, you can’t possibly expect the resulting game plan to shift your beliefs (on average) in a particular direction.
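To see why, write H for the theory and E for the anticipated observation. The law of total probability says

$$P(H) = P(H \mid E)\,P(E) + P(H \mid \lnot E)\,P(\lnot E),$$

and the right-hand side is exactly the average posterior you expect to hold after looking. Rearranging gives the zero-sum form:

$$P(E)\,\bigl(P(H \mid E) - P(H)\bigr) = P(\lnot E)\,\bigl(P(H) - P(H \mid \lnot E)\bigr).$$

The probability-weighted shift upward on a confirming result exactly balances the probability-weighted shift downward on a disconfirming one.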
You might as well sit back and relax while you wait for the evidence to come in.
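To make it concrete with a toy example (the numbers here are invented for illustration, not from the post): suppose a coin is either fair or biased 80% toward heads, and you stand at 50/50 on which. A quick sketch in Python confirms that the outcome-weighted average of your posterior equals your prior, however the single flip comes out:

```python
# A worked check of conservation of expected evidence.
# Hypothetical setup: a coin is either fair or biased 80% toward
# heads, and we start with a 50% prior on "biased".

P_HEADS_IF_BIASED = 0.8
P_HEADS_IF_FAIR = 0.5
PRIOR_BIASED = 0.5


def posterior(prior: float, heads: bool) -> float:
    """Bayes' rule: P(biased | one observed flip)."""
    like_biased = P_HEADS_IF_BIASED if heads else 1 - P_HEADS_IF_BIASED
    like_fair = P_HEADS_IF_FAIR if heads else 1 - P_HEADS_IF_FAIR
    joint_biased = like_biased * prior
    return joint_biased / (joint_biased + like_fair * (1 - prior))


def expected_posterior(prior: float) -> float:
    """Average the posterior over both outcomes, weighted by the
    probability we currently assign to each outcome."""
    p_heads = prior * P_HEADS_IF_BIASED + (1 - prior) * P_HEADS_IF_FAIR
    return (p_heads * posterior(prior, True)
            + (1 - p_heads) * posterior(prior, False))


print(posterior(PRIOR_BIASED, True))     # ~0.615: heads nudges us up
print(posterior(PRIOR_BIASED, False))    # ~0.286: tails drags us down
print(expected_posterior(PRIOR_BIASED))  # 0.5: on average, no movement
```

Heads moves you up to about 0.615 and tails drags you down to about 0.286, but weighted by how likely you currently think each outcome is (0.65 and 0.35), the two shifts cancel to exactly 0.5. That is the zero-sum game: any scheme that inflates the update from one outcome must deflate the update from the other.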
Minor spoilers for mad investor chaos and the woman of asmodeus (planecrash Book 1). The citation link in this post takes you to an NSFW subthread in the story.
“If you know where you’re going, you should already be there.”
…
“It’s the second discipline of speed, which is fourteenth of the twenty-seven virtues, reflecting a shard of the Law of Probability that I’ll no doubt end up explaining later but I’m not trying it here without a whiteboard.”
“As a human discipline, ‘If you know your destination you are already there’ is a self-fulfilling prediction about yourself, that if you can guess what you’re going to realize later, you have already realized it now. The idea in this case would be something like, because mental qualities do not have intrinsic simple inertia in the way that physical objects have inertia, there is the possibility that if we had sufficiently mastered the second layer of the virtue of speed, we would be able to visualize in detail what it would be like to have recovered from our mental shocks, and then just be that. For myself, that’d be visualizing where I’ll already be in half a minute. For yourself, though this would be admittedly harder, it’d be visualizing what it would be like to have recovered from the Worldwound. Maybe we could just immediately rearrange our minds like that, because mental facts don’t have the same kinds of inertia as physical objects, especially if we believe about ourselves that we can move that quickly.”
“I, of course, cannot actually do that, and have to actually take the half a minute. But knowing that I’d be changing faster if I were doing it ideally is something I can stare at mentally and then change faster, because we do have some power to change through imagining other ways we could be, even if not perfectly. Another line of that verse goes, ‘You can move faster if you’re not afraid of speed.’”
…
“Layer three is ‘imaginary intelligence is real intelligence’ and it means that if you can imagine the process that produces a correct answer in enough detail, you can just use the imaginary answer from that in real life, because it doesn’t matter what simulation layer an answer comes from. The classic exercise to develop the virtue is to write a story featuring a character who’s much smarter than you, so you can see what answers your mind produces when you try to imagine what somebody much smarter than you would say. If those answers are actually better, it means that your own model of yourself contains stupidity assertions, places where you believe about yourself that you reason in a way which is incorrect or just think that your brain isn’t supposed to produce good answers; such that when you instead try to write a fictional character much smarter than you, your own actual brain, which is what’s ultimately producing those answers, is able to work unhindered by your usual conceptions of the ways in which you think that you’re a kind of person stupider than that.”