First, note that the setup is incompatible with Omega being a perfect predictor (you cannot possibly do the opposite of what the perfect predictor knows you will do). Thus calling your sadistic jailor (SJ) Omega is misleading, so I won’t.
Second, given that SJ is not Omega, your problem is underspecified, and I will try to steelman it a bit, though, honestly, that should have been your job.
What other information, not given in the setup, is relevant to making a decision? For example, do you know of any prior events of this kind conducted by SJ? What were the statistical odds of survival? Is there something special about the reference class of survivors and/or the reference class of victims? What happened to the cheaters who tried to escape the box? How trustworthy is SJ?
Suppose, for example, that SJ is very accurate. First, how would you know that? Maybe there is a TV camera in the box and other people get to watch you, after SJ made its prediction known to the outside world but not to you. In this situation, as others suggested, you ought to get something like 50/50 odds by simply flipping a coin.
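A minimal simulation of why the coin works, assuming SJ must commit to its prediction before your flip (hypothetical code, not part of the original setup): against any prediction strategy whatsoever, a fair coin mismatches it about half the time.

```python
import random

def play_round(sj_predict):
    """One round of the box game: you go free iff your action
    differs from SJ's prediction (True = press the button)."""
    prediction = sj_predict()        # SJ commits to its prediction first
    action = random.random() < 0.5   # you press iff a fair coin lands heads
    return action != prediction      # mismatch => the box opens

# Whatever SJ does -- always True, always False, or randomized --
# a fair coin mismatches it about half the time.
n = 100_000
for sj in (lambda: True, lambda: False, lambda: random.random() < 0.7):
    rate = sum(play_round(sj) for _ in range(n)) / n
    print(f"survival rate: {rate:.3f}")   # ~0.500 in each case
```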
Now, if you consider the subset of all prior subjects who flipped a coin, or made some other ostensibly unpredictable choice, what is their survival rate? If it’s not close to 50%, then SJ can predict the outcome of a random event better than chance (if it were worse than chance, SJ would simply learn after a few tries and flip its prediction, assuming it wants to guess right to begin with).
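A sketch of that reference-class check, with made-up records (the strategy labels and outcomes are purely illustrative):

```python
# Hypothetical records of prior subjects: (strategy, survived).
records = [
    ("coin", True), ("coin", False), ("coin", True), ("coin", False),
    ("deliberate", True), ("deliberate", True), ("deliberate", False),
]

def survival_rate(records, strategy):
    outcomes = [survived for strat, survived in records if strat == strategy]
    return sum(outcomes) / len(outcomes)

print(survival_rate(records, "coin"))        # ~0.5 => SJ can't beat a fair coin
print(survival_rate(records, "deliberate"))  # anything better is diagnostic (see below)
```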
So the only interesting case we have to deal with is when the subjects who do not choose at random have a higher survival rate than those who do. How can this happen? First, if the randoms’ survival rate is below 50%, and assuming the choice is truly random, SJ likely knows more about the world than our current best physical models (which cannot predict the outcome of a quantum coin flip), in which case it is simply screwing around with you. Second, if the randoms’ survival rate is about 50% but the non-randoms fare better, even though they are more predictable, it means that SJ favors non-randoms instead of doing its best predicting. So, again, it is screwing around with you, punishing the process, not the decision.
So this analysis means that, unless randoms survive at about 50% and non-randoms do worse, you are dealing with an adversarial opponent, and your best chance of survival is to study and mimic whatever the best non-randoms do.
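Condensing the case analysis into one hypothetical decision rule (the 0.05 tolerance is an arbitrary stand-in for “close to 50%”):

```python
def best_strategy(random_rate, nonrandom_rate, tol=0.05):
    """Hypothetical decision rule distilled from the analysis above."""
    honest = abs(random_rate - 0.5) <= tol and nonrandom_rate <= random_rate
    if honest:
        # SJ predicts as well as physics allows and no better:
        # a fair coin locks in your 50%.
        return "flip a fair coin"
    # Otherwise SJ either outpredicts a quantum coin or rewards
    # predictability -- an adversarial opponent either way.
    return "study and mimic whatever the best non-randoms do"
```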
First, note that the setup is incompatible with Omega being a perfect predictor (you cannot possibly do the opposite of what the perfect predictor knows you will do).
This is false. The setup is not incompatible with Omega being a perfect predictor. The fact that you cannot do the opposite of what the perfect predictor knows does not make the scenario with Omega incoherent, because the scenario does not require that this happen (or even that it could). Examining the scenario:
An evil Omega has locked you in a box. Inside, there is a bomb and a button. Omega informs you that in an hour the bomb will explode, unless you do the opposite of what Omega predicted you will do. Namely, press the button if it predicted you won’t or vice versa. In that case, the bomb won’t explode and the box will open, letting you free.
We have an assertion “X unless Y”. Due to the information we have available about Y (the nature of Omega, etc.), we can reason that Y is false. We then have “X unless false”, which carries the same information as the bare assertion “X”. Similar reasoning applies to anything of the form “IF false THEN Z”: Z merely becomes irrelevant.
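Spelled out formally, with $X$ = “the bomb explodes in an hour” and $Y$ = “you do the opposite of what Omega predicted”:

$$\text{“}X\text{ unless }Y\text{”} \;\equiv\; \lnot Y \to X, \qquad (\lnot Y \to X) \land \lnot Y \;\vdash\; X \quad \text{(modus ponens)}$$

Omega’s perfection gives us $\lnot Y$ outright, so the assertion collapses to plain $X$.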
The scenario with Omega is not incoherent. It is merely trivial, inane, and pointless. In fact, the first postscript (“PS. You have no chance to survive make your time.”) more or less does all the (minimal) work of reasoning out the implications of the scenario for us.
Thus calling your sadistic jailor (SJ) Omega is misleading, so I won’t.
I’m still wary of calling the Sadistic Jailor Omega even though the perfect prediction part works fine, because Omega is conventionally supposed to be at least minimally benevolent, not pointlessly sadistic. When people make hypotheticals which require a superintelligence that is a dick, they sometimes refer to “Omega’s cousin X” or similar, a practice that appeals to me.