These seem to be absolute optimums as far as I can tell.
Close to it. The only obvious deviations from the optimums center on two things: the possible inherent disutility of a universe in which you made the decision to hold false beliefs and then in fact held false beliefs for some time, and the possibly reduced utility assigned to universes in which you are granted a favourable universe rather than creating it yourself.
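A rough way to picture those two deviation terms (purely an illustrative sketch of my own; the penalty weights and indicator term are assumptions, not part of the problem):

$$U(h) = U_{\text{outcome}}(h) \;-\; c_1 \cdot t_{\text{false}}(h) \;-\; c_2 \cdot \mathbb{1}[\text{favourable universe granted rather than self-created in } h]$$

where $t_{\text{false}}(h)$ is the time spent holding deliberately adopted false beliefs in history $h$, and $c_1, c_2 \ge 0$ measure how much the agent cares about each deviation; with $c_1 = c_2 = 0$ you recover the absolute optimums.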
Well, when I said “alter the universe into its worst and best possible configurations”, I had in mind a literal rewrite of the universe’s absolute total state, such that the resulting then-universe’s computable past is also the best/worst possible past (or something similarly inconceivable to us that a superintelligence could come up with in order to produce the absolute best/worst possible universes). For example, the then-universe’s past might be modified so that you had taken the other box and that box had the same effect as the one you actually picked.
However, upon further thought, that feels incredibly like cheating and arguing by definition.
Also, for the “opposite/other edge”, I had considered minds with utility functions centered on the decision itself, with conditionals against reality-alteration, spacetime rewrites, and so on. But those all seem to amount to “break the premises and Omega’s predictions by begging the question!”, similar to the above, so they’re fun to think about but useless in other respects.