I wonder if the mere use of Omega is tripping you up regarding this, or if perhaps it’s the abstraction of “truth” vs “lie” rather than any concrete example.
So here’s an example, straight from a spy-sf thriller of your choice. You’re a secret agent, conscripted against your will by a tyrannical dystopian government. Your agency frequently mind-scans you to see if you have revealed your true occupation to anyone, and then kills those people to protect the secrecy of your work. They also kill anyone to whom you say that you can’t reveal your true occupation, lest they become suspicious—the only allowed course of action is to lie plausibly.
Your dear old mom asks, “What kind of job did they assign you to, Richard?” Now, motivated purely by concern for her benefit, do you:
a) tell her the truth, condemning her to die.
b) tell her a plausible lie, ensuring her continuing survival.
I just don’t find these extreme thought experiments useful. Parfit’s Hitchhiker is a practical problem. Newcomb’s Problem is an interesting conundrum. Omega experiments beyond that amount to saying “Suppose I push harder on this pan of the scales than you can push on the other. Which way will they go?” The question is trite and the answers nugatory. People talk about avoiding the hypothesis, but in these cases the hypothesis should be avoided. To accept it is already to have gone wrong.
If you’re trying to find the important decision point in real situations, it can often be helpful to go to extremes just to establish that something is possible at all. I.e., if the best lie is preferred to the worst truth, that implies that some truths are worse than some lies, and you can start talking about how to figure out which. If you just start with the actual question, you get people who say “No, the truth is most important.”
Considering the extreme is only useful if the extreme is a realistic one—if it is the least convenient possible world. (The meaning of “possible” in this sentence does not include “probability 1/3^^^3”.) With extreme Omega-scenarios, the argument is nothing more than an outcome pump: you nail the conclusion you want to the ceiling of p=1 and confabulate whatever scenario is produced by conditioning on that hypothesis. The underlying structure is “Suppose X were the right thing to do—would X be the right thing to do?”, and the elaborate story is just a conjurer’s misdirection.
That’s one problem. A second problem with hypothetical scenarios, even realistic ones, is that they’re a standard dark arts tool. A would-be burner of Dawkins’ oeuvre presents a hypothetical scenario where suppressing a work would be the right thing to do, and triumphantly crows after you agree to it, “So, you do believe in censorship, we’re just quibbling over which books to burn!” In real life, there’s a good reason to be wary of hypotheticals: if you take them at face value, you’re letting your opponent write the script, and you will never be the hero in it.
People talk about avoiding the hypothesis, but in these cases the hypothesis should be avoided.
Harmful truths are not extreme hypotheticals; they’re a commonly recognized part of everyday existence.
You don’t show photographs of your poop to a person who is eating—it would be harmful to the eater’s appetite.
You don’t repeat to your children every little grievance you ever had with their other parent—it might be harmful to them.
You don’t need to tell your child that they’re NOT your favourite child either.
Knowledge tends to be useful, but there’s no law in the universe that forces it always to be beneficial to you. You’ve not indicated any reason why it is obliged to be so in every single scenario.
Since I have not claimed it to be so, it is completely appropriate that I have given no reasons for it to be so.