I wonder if the mere use of Omega is tripping you up regarding this, or if perhaps it’s the abstraction of “truth” vs “lie” rather than any concrete example.
So here’s an example, straight from a spy-sf thriller of your choice. You’re a secret agent, conscripted against your will by a tyrannical dystopian government. Your agency frequently mind-scans you to see if you have revealed your true occupation to anyone, and then kills those people to protect the secrecy of your work. They also kill anyone to whom you say that you can’t reveal your true occupation, lest they become suspicious—the only allowed course of action is to lie plausibly.
Your dear old mom asks “What kind of job did they assign you to, Richard?” Now, motivated purely by concern for her benefit, do you:
a) tell her the truth, condemning her to die.
b) tell her a plausible lie, ensuring her continuing survival.
I just don’t find these extreme thought experiments useful. Parfit’s Hitchhiker is a practical problem. Newcomb’s Problem is an interesting conundrum. Omega experiments beyond that amount to saying “Suppose I push harder on this pan of the scales than you can push on the other. Which way will they go?” The question is trite and the answers nugatory. People talk about avoiding the hypothesis, but in these cases the hypothesis should be avoided. To accept it is already to have gone wrong.
If you’re trying to find the important decision point in real situations, it can often be helpful to go to extremes to admit that things are possible. I.e., if the best lie is preferred to the worst truth, that implies that some truths are worse than some lies, and you can start talking about how to figure this out. If you just start with the actual question, you get people who say “No, the truth is most important.”
Considering the extreme is only useful if the extreme is a realistic one—if it is the least convenient possible world. (The meaning of “possible” in this sentence does not include “probability 1/3^^^3”.) With extreme Omega-scenarios, the argument is nothing more than an outcome pump: you nail the conclusion you want to the ceiling of p=1 and confabulate whatever scenario is produced by conditioning on that hypothesis. The underlying structure is “Suppose X was the right thing to do—would X be the right thing to do?”, and the elaborate story is just a conjurer’s misdirection.
That’s one problem. A second problem with hypothetical scenarios, even realistic ones, is that they’re a standard dark arts tool. A would-be burner of Dawkins’ oeuvre presents a hypothetical scenario where suppressing a work would be the right thing to do, and triumphantly crows after you agree to it, “So, you do believe in censorship, we’re just quibbling over which books to burn!” In real life, there’s a good reason to be wary of hypotheticals: if you take them at face value, you’re letting your opponent write the script, and you will never be the hero in it.
People talk about avoiding the hypothesis, but in these cases the hypothesis should be avoided.
Harmful truths are not extreme hypotheticals, they’re a commonly recognized part of everyday existence.
You don’t show photographs of your poop to a person who is eating—it would be harmful to the eater’s appetite.
You don’t repeat to your children every little grievance you ever had with their other parent—it might be harmful to them.
You don’t need to tell your child that they’re NOT your favourite child either.
Knowledge tends to be useful, but there’s no law in the universe that forces it to be always beneficial to you. You’ve not indicated any reason that it is so obliged to be in every single scenario.
Since I have not claimed it to be so, it is completely appropriate that I have given no reasons for it to be so.
Then you lose. You also maintain your ideological purity with respect to epistemic rationality. Well, perhaps not. You will likely end up with an overall worse map of the territory (given that the drastic loss of instrumental resources probably kills you outright rather than letting you seek out other “truth” indefinitely). We can instead say that at least you refrained from committing a deontological violation against an epistemic-rationality-based moral system.
Even if it’s a basilisk? Omega says: “Surprise! You’re in a simulation run by what you might as well consider evil demons, and anyone who learns of their existence will be tortured horrifically for 3^^^3 subjective years. Oh, and by the way, the falsehood was that the simulation is run by a dude named Kevin who will offer 3^^^3 years of eutopian bliss to anyone who believes he exists. I would have used outside-of-the-Matrix magic to make you believe that was true. The demons were presented with elaborate thought experiments when they studied philosophy in college, so they think it’s funny to inflict these dilemmas on simulated creatures. Well, enjoy!”
If you want to say this is ridiculously silly and has no bearing on applied rationality, well, I agree. But that response pretty clearly meets the conditions of the original hypothetical, which is why I would trust Omega. If I somehow learned that knowing the truth could cause so much disutility, I would significantly revise upward my estimate that we live in a Lovecraftian horror-verse with basilisks floating around everywhere.
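3^^^3 units of simulated Kratom?
Oops, meant to say “years.” Fixed now. Thanks!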
I honestly didn’t notice the missing word. I seem to have just read “units” as a default. My reference was to the long-time user by that name who does, in fact, deal in bliss of a certain kind.
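Omega could create the demons when you open the box, or if that’s too truth-twisting, before asking you.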
If you want to say this is ridiculously silly and has no bearing on applied rationality, well, I agree.
That’s the problem. The question is the rationalist equivalent of asking “Suppose God said he wanted you to kidnap children and torture them?” I’m telling Omega to just piss off.
The bearing this has on applied rationality is that this problem serves as a least convenient possible world for strict attachment to a model of epistemic rationality. Where the two conflict, you should probably prefer to do what is instrumentally rational over what is epistemically rational, because it’s rational to win, not to complain that you’re being punished for making the “right” choice. As with Newcomb’s Problem, if you can predict in advance that the choice you’ve labelled “right” has less utility than a “wrong” choice, that implies that you have made an error in assessing the relative utilities of the two choices. Sure, Omega’s being a jerk. It does that. But that doesn’t change the situation: you are being asked to choose between two outcomes of differing utility, and you are being trapped into the option of lesser utility (indeed, vastly lesser utility) by nothing but your own “rationality”. This implies a flaw in your system of rationality.
The bearing this has on applied rationality is that this problem serves as a least convenient possible world
When the least convenient possible world is also the most impossible possible world, I find the exercise less than useful. It’s like Pascal’s Mugging. Sure, there can be things you’re better off not knowing, but the thing to do is to level up your ability to handle it. The fact that however powerful you imagine yourself, you can imagine a more powerful Omega is like asking whether God can make a rock so heavy he can’t lift it.
When the least convenient possible world is also the most impossible possible world, I find the exercise less than useful. It’s like Pascal’s Mugging. Sure, there can be things you’re better off not knowing, but the thing to do is to level up your ability to handle it.
Leveling up is great, but I’m still not going to try to beat up an entire street gang just to steal their bling. I don’t have that level of combat prowess right now, even though it is entirely possible to level up enough for that kind of activity to be safe. It so happens that neither I nor any non-fictional human is at that level or likely to be soon. In the same way, there is a huge space of possible agents that would be able to calculate true information that it would be detrimental for me to have. For most humans, just another particularly manipulative human would be enough, and for all the rest any old superintelligence would do.
The fact that however powerful you imagine yourself, you can imagine a more powerful Omega is like asking whether God can make a rock so heavy he can’t lift it.
No, this is a cop-out. Humans do encounter agents more powerful than themselves, including agents that are more intelligent and better able to exploit human weaknesses. Just imagining yourself to be more powerful and more able to “handle the truth” isn’t especially useful, and trying to dismiss all such scenarios as being like God combatting his own omnipotence would be irresponsible.
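Omega isn’t showing up right now.
No non-fictional Omega is at that level either.
Then it would seem you need to delegate your decision-theoretic considerations to those better suited to abstract analysis.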
I choose the truth.
Omega’s assurances imply that I will not be in the valley of bad rationality mentioned later.
Out of curiosity, I also ask Omega to show me the falsehood, without the brain alteration, so I can see what I might have ended up believing.