That does make it somewhat more useful, if that’s the constraint under which it’s operating. It still strikes me as probable that, insofar as decision theory A+ makes decisions that theory A- does not, there must be some way to reward A- and punish A+. I may well be wrong about this. The other flaw, namely that actual decision makers never encounter omniscient, unwaveringly honest entities with entirely inscrutable motives, still seems to render the pursuit futile. It’s decidedly less futile if Omega is constrained to outcome-based reward/punishment.
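To make the claim concrete: a minimal sketch of the "some way to reward A- and punish A+" argument, assuming the two theories are just functions from situations to choices. The theory names and the particular decision rules here are invented for illustration; the only point is that any extensional disagreement gives an adversary a wedge.

```python
def theory_plus(situation):
    # hypothetical "A+": one-boxes only on even-numbered situations
    return situation % 2 == 0

def theory_minus(situation):
    # hypothetical "A-": always one-boxes
    return True

def adversarial_payoff(favoured, disfavoured, situations):
    """Find a situation where the theories disagree, then build a payoff
    that pays +1 for the favoured theory's choice and -1 for the other.
    Returns None if the theories never disagree on these situations."""
    for s in situations:
        if favoured(s) != disfavoured(s):
            winning_choice = favoured(s)
            def payoff(choice):
                return 1 if choice == winning_choice else -1
            return s, payoff
    return None  # extensionally identical theories: no wedge exists

s, payoff = adversarial_payoff(theory_minus, theory_plus, range(10))
# At situation s the two theories choose differently, so the constructed
# payoff rewards A- and punishes A+ there.
assert payoff(theory_minus(s)) == 1
assert payoff(theory_plus(s)) == -1
```

Note the converse of the caveat above: if the two theories agree on every situation they ever face, no such payoff exists, which is why the argument only bites "insofar as" A+ and A- actually diverge.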