If you want to say this is ridiculously silly and has no bearing on applied rationality, well, I agree.
That’s the problem. The question is the rationalist equivalent of asking “Suppose God said he wanted you to kidnap children and torture them?” I’m telling Omega to just piss off.
The bearing this has on applied rationality is that this problem serves as a least convenient possible world for strict attachment to a model of epistemic rationality. Where the two conflict, you should probably prefer to do what is instrumentally rational over what is epistemically rational, because it’s rational to win, not to complain that you’re being punished for making the “right” choice. As with Newcomb’s Problem, if you can predict in advance that the choice you’ve labelled “right” has less utility than a “wrong” choice, that implies that you have made an error in assessing the relative utilities of the two choices. Sure, Omega’s being a jerk. It does that. But that doesn’t change the situation, which is that you are being asked to choose between two options of differing utility, and are being trapped into the option of lesser utility (indeed, vastly lesser utility) by nothing but your own “rationality”. That implies a flaw in your system of rationality.
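To make that concrete, here is a rough sketch of the expected-utility comparison in Newcomb’s Problem, using the usual hypothetical payoffs and a predictor accuracy I’m picking purely for illustration:

```python
# Toy expected-utility comparison for Newcomb's Problem.
# Payoffs are the standard hypothetical ones ($1,000,000 in the opaque box,
# $1,000 in the transparent box); predictor accuracy p is an assumed figure.

def expected_utility(one_box: bool, p: float = 0.99) -> float:
    big, small = 1_000_000, 1_000
    if one_box:
        # With probability p the predictor foresaw one-boxing and filled the big box.
        return p * big + (1 - p) * 0
    # With probability p the predictor foresaw two-boxing and left the big box empty.
    return p * small + (1 - p) * (big + small)

print("one-box:", expected_utility(True))    # ~990,000
print("two-box:", expected_utility(False))   # ~11,000
```

If your decision procedure reliably lands you on the ~$11,000 branch, the numbers themselves are telling you something is wrong with the procedure, not with the world.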
The bearing this has on applied rationality is that this problem serves as a least convenient possible world
When the least convenient possible world is also the most impossible possible world, I find the exercise less than useful. It’s like Pascal’s Mugging. Sure, there can be things you’re better off not knowing, but the thing to do is to level up your ability to handle it. The fact that however powerful you imagine yourself, you can imagine a more powerful Omega is like asking whether God can make a rock so heavy he can’t lift it.
When the least convenient possible world is also the most impossible possible world, I find the exercise less than useful. It’s like Pascal’s Mugging. Sure, there can be things you’re better off not knowing, but the thing to do is to level up your ability to handle it.
Leveling up is great, but I’m still not going to try to beat up an entire street gang just to steal their bling. I don’t have that level of combat prowess right now, even though it is entirely possible to level up enough for that kind of activity to be possible and safe. It so happens that neither I nor any non-fictional human is at that level or likely to be soon. In the same way, there is a huge space of possible agents that would be able to calculate true information that it would be detrimental for me to have. For most humans, just another particularly manipulative human would be enough, and for all the rest any old superintelligence would do.
The fact that however powerful you imagine yourself, you can imagine a more powerful Omega is like asking whether God can make a rock so heavy he can’t lift it.
No, this is a cop-out. Humans do encounter agents more powerful than themselves, including agents that are more intelligent and better able to exploit human weaknesses. Just imagining yourself to be more powerful and more able to “handle the truth” isn’t especially useful, and dismissing all such scenarios as akin to God combatting his own omnipotence would be irresponsible.
Omega isn’t showing up right now.
No non-fictional Omega is at that level either.
Then it would seem you need to delegate your decision-theoretic considerations to those better suited to abstract analysis.