A note on hypotheticals

People frequently describe hypothetical situations on LW. Often, other people respond in ways that suggest they don’t understand the purpose of hypotheticals.

  • When someone puts forth a hypothetical A, it doesn’t mean they believe A is true. They may be trying to show not(A).

  • When someone posits A ⇒ B (A implies B), it doesn’t mean they believe A is true. The proposition A ⇒ B is commonly asserted in order to prove that B is true, or that A is false (see the sketch after this list).

  • A solution to a hypothetical scenario is useful only if, when you map it back into the original domain, it solves the original problem.
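As a minimal sketch in standard notation of the second point: the two inference rules that make an implication useful are modus ponens (establish B) and modus tollens (refute A).

```latex
% Two ways to use the proposition A => B:
% modus ponens: given A => B and A, conclude B
\frac{A \Rightarrow B \qquad A}{B}
% modus tollens: given A => B and not-B, conclude not-A
\frac{A \Rightarrow B \qquad \neg B}{\neg A}
```

In neither case does asserting A ⇒ B commit the speaker to A.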

I’ll expand on the last bullet point. Sorry for being vague; I’m trying not to name names.

When a hypothetical is put forward to test a theory, ignore aspects of the hypothetical scenario that don’t correspond to parts of the theory. Don’t get emotionally involved. Don’t treat the hypothetical as a narrative. A hypothetical about Omega sounds a lot like a story about a genie from a lamp, but you should approach it in a completely different way. Don’t try to outsmart Omega (unless you’re making a point about the impossibility of an Omega who can, e.g., decide undecidable problems). And when you find a loophole in the way the hypothetical is posed that doesn’t exist in the original domain, point it out only in order to improve the phrasing of the hypothetical.

John Searle’s Chinese Room is an example of a hypothetical in which it is important not to get emotionally involved. Searle’s conclusion is that the man in the Chinese room doesn’t understand Chinese; therefore, a computer doesn’t understand Chinese. His model maps the running software onto the complete system of room plus man plus cards; but when he interprets the model, he empathizes with the human on each side of the mapping, and so relocates the locus of consciousness from the running software onto just the man.1

Sometimes it’s difficult to know whether your solution to a hypothetical is exploiting a loophole in the hypothetical, or finding a solution to the original problem. But when the original problem is testing a mathematical model, it’s usually obvious. There are a few general situations where it’s not obvious.

Consciousness often makes it hard to tell whether you are looking at a solution to the original problem or at a loophole in the hypothetical. Sometimes the original problem is a paradox because of consciousness, so you can’t map it away. In the Newcomb paradox, if you replaced the person with a computer program, people would be much quicker to say: you should write a computer program that will one-box. But you can phrase it that way only if you’re sure that the Newcomb paradox isn’t really a question about free will. The “paradox” might be regarded as the assertion that there is no such thing as free will.
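Here is a minimal sketch of that reframing. All names are hypothetical, and Omega’s infallible prediction is modeled by the simplest possible mechanism: re-running the agent’s own code before filling the boxes.

```python
# A toy version of Newcomb's problem with the person replaced by a program.
# Omega's "prediction" is modeled as running the agent's decision procedure
# before filling the boxes -- a perfect predictor by construction.

def omega_fill_boxes(agent):
    """Omega predicts the agent's choice and fills the opaque box accordingly."""
    predicted = agent()
    opaque = 1_000_000 if predicted == "one-box" else 0
    transparent = 1_000
    return opaque, transparent

def payoff(agent):
    """The agent's winnings when it actually plays."""
    opaque, transparent = omega_fill_boxes(agent)
    choice = agent()
    return opaque if choice == "one-box" else opaque + transparent

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"

print(payoff(one_boxer))  # 1000000
print(payoff(two_boxer))  # 1000
```

Once the chooser is a program whose code Omega can simply run, the free-will question drops out, and one-boxing is straightforwardly the better program to write. The open question is whether that mapping was legitimate.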

Another tricky case involves infinities. A paradox of infinities typically involves taking two different infinities but treating them as a single infinity, so that they don’t cancel out the way they should, or do cancel out when they shouldn’t. Zeno’s paradox is an example: the hypothetical doesn’t notice that the infinity of intervals is cancelled out by their ever-shrinking sizes. Eliezer discusses some other cases here.
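A one-line sketch of the cancellation, assuming a runner at constant speed v covering a distance d by repeatedly halving what remains:

```latex
% Infinitely many intervals, but a finite total time:
\sum_{n=1}^{\infty} \frac{d}{2^n v}
  \;=\; \frac{d}{v} \sum_{n=1}^{\infty} 2^{-n}
  \;=\; \frac{d}{v}
```

The infinitely many time intervals sum to exactly the finite time you would have computed without subdividing at all.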

Another category of tricky cases is when the hypothetical involves impossibilities. It’s possible to accidentally construct a hypothetical that makes an assumption that isn’t valid in our universe. (I think these paradoxes were unknown before the 20th century, but there may be a math example.) They crop up frequently in modern physics. The ultraviolet catastrophe may be the first such paradox discovered. The hypothetical in which a massive black hole suddenly appears one light-minute away from you, and you want to know how you can be influenced by its gravity before gravitational waves have time to reach you, might be an example. The aspect of the Newcomb paradox that allows Omega to predict what you will do without fail may be such a flawed hypothetical.

If you are giving a solution to a hypothetical scenario that tests a mathematical model, and your response doesn’t use math, and doesn’t hinge on a consciousness, an infinity, or an impossibility from the original problem domain, your response is likely irrelevant.

1 He makes other errors as well. It’s a curious case in which amassing a large number of errors in a model makes it harder to rebut, because it’s harder to figure out what the model is. This is a clever way of exploiting the scientific process. Searle takes on challengers one at a time. Each challenger, being a scientist, singles out one error in Searle’s reasoning. Searle uses the other errors in his reasoning to construct a workaround; then he may move on to another challenger, and use the error the first challenger focused on to work around the error the second challenger focused on. This sort of trickery can be detected by looking at all of someone’s counterarguments en masse, and checking whether they all define the same terms the same way, and agree on which statements are assumptions and which are conclusions.