The truth and falsehood themselves are irrelevant to the actual outcomes, since another superintelligence (or maybe even Omega itself) is conditioning directly on your learning of these “facts” in order to alter the universe into its worst and best possible configurations, respectively.
Good edge case.
These seem to be absolute optimums as far as I can tell.
Close to it. The only obvious deviations from the optimums are centered on the possible inherent disutility of a universe in which you decided to hold false beliefs and then did in fact hold false beliefs for some time, and on the possibly reduced utility assigned to universes in which you are granted a favourable universe rather than creating it yourself.
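To make those deviations a bit more concrete, here is a minimal toy sketch (purely illustrative; the penalty terms and all of the numbers are my own assumptions, not anything specified by the thought experiment) of how the false-belief outcome could land close to, but not exactly at, the optimum:

```python
# Purely illustrative toy model; the penalty terms and numbers are assumptions.

U_BEST = 1.0  # utility of the best possible universe configuration

# Deviation 1: inherent disutility of having decided to hold false beliefs
# and then actually holding them for some interval.
FALSE_BELIEF_PENALTY = 0.02

# Deviation 2: reduced utility assigned to a favourable universe you were
# granted rather than one you created yourself.
GRANTED_NOT_CREATED_PENALTY = 0.05

u_false_belief_box = U_BEST - FALSE_BELIEF_PENALTY - GRANTED_NOT_CREATED_PENALTY
print(u_false_belief_box)  # 0.93: close to the optimum, but not exactly it
```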
If we posit that Omega has actual influential power over the universe and is dynamically attempting to create those optimal information boxes, then this seems like the only possible resulting scenario. If Omega is sufficiently superintelligent and the laws of entropy hold, then this also seems like the only possible resulting scenario for most conceivable minds, even if Omega’s only means of affecting the universe is the information contained in the boxes.
This seems right, and the easiest exceptions to conceive of here are minds whose utility functions place an unusually high weight on the events immediately surrounding the decision itself (i.e. the “other edge” case).
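A similarly hedged sketch of that kind of exception, assuming (my assumption, not anything in the original setup) a utility function with an explicit, outsized weight on how the decision moment itself goes:

```python
# Purely illustrative; the weight and the scores below are assumptions.

def utility(universe_score: float, decision_event_score: float,
            decision_weight: float = 10.0) -> float:
    """Utility for a mind that puts heavy explicit weight on the events
    immediately surrounding the decision itself."""
    return universe_score + decision_weight * decision_event_score

# For such a mind, a worse overall universe with a 'better' decision moment
# can outrank a better universe with a 'worse' one:
print(utility(universe_score=1.0, decision_event_score=-0.2))  # -1.0
print(utility(universe_score=0.8, decision_event_score=0.1))   # 1.8
```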
Well, when I said “alter the universe into its worst and best possible configurations”, I had in mind a literal rewrite of the universe’s total state, such that the resulting universe’s computable past was also the best/worst possible past (or something similarly inconceivable to us that a superintelligence could come up with in order to produce the absolute best/worst possible universes); for example, modifying that past so that you had taken the other box and it had the same effect as the one you actually picked.
However, upon further thought, that feels very much like cheating and arguing by definition.
Also, for the “opposite/other edge”, I had considered minds with utility functions centered on the decision itself, with conditionals against reality-alteration, spacetime rewrites, and so on; but those all seem to amount to “break the premises and Omega’s predictions by begging the question!”, similar to the above, so they’re fun to think about but useless in other respects.