Not a decision theorist, but my intuition on the first example with the bomb also says “take the bomb”. I don’t think it’s obvious or universal that one should choose to avoid burning slowly to death; the example may make more sense if one optimizes over “agents like me who encounter the box” rather than “the specific agent who sees a bomb”, i.e. acting behind a Rawlsian veil. The standard argument is that if you could commit yourself in advance to slowly burning to death whenever you see a bomb, you would certainly do so: the commitment all but guarantees the situation never arises. For another example, “maximize payoff for whatever situation you find yourself in” refuses to launch a second strike in global thermonuclear war; a known unwillingness to retaliate undermines MAD deterrence and invites a first strike, leading to the extinction of humanity. (This is not dissimilar to slowly burning to death.) So I think your “guaranteed payoff” rule is contradicted in practice; one might argue it does little more than judge FDT by CDT’s standards.
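To make the “optimize over agents like me” framing concrete, here is a rough ex-ante expected-cost sketch in Python. All the numbers (the predictor’s error rate, the fee for avoiding the bomb box, the disutility of burning) are placeholder assumptions I picked for illustration, not the values from the original scenario:

```python
# Ex-ante expected cost of two policies in a bomb-style predictor problem.
# All constants below are assumed placeholders, not the scenario's actual values.

EPS = 1e-24          # assumed probability the predictor errs
AVOID_COST = 100.0   # assumed fee for taking the bomb-free box
BURN_COST = 1e9      # assumed (finite) disutility of burning to death

# Policy A: "always avoid the bomb box."
# The predictor foresees this, so you never face a bomb, but you always pay the fee.
ev_avoid = AVOID_COST

# Policy B: "commit in advance to take the bomb box even if a bomb is visible."
# The predictor foresees the commitment, so a bomb appears only when it errs.
ev_commit = EPS * BURN_COST

print(f"expected cost, always avoid: {ev_avoid:.6g}")
print(f"expected cost, commit      : {ev_commit:.6g}")
# Under these assumed numbers the commitment policy is far cheaper ex ante,
# which is the sense in which the commitment all but guarantees the bomb never happens.
```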