Re: the edit. Two-boxing is strictly better from the causal decision theorist's point of view, but that is equally true here and in the original Newcomb problem.
From a sensible point of view, though, rather than the causal theorist's, one-boxing is better, because you get the million, both here and in the original Newcomb, just as in the AI case I posted in another comment.
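For concreteness, here is a minimal sketch of that payoff comparison, assuming the standard Newcomb amounts ($1,000,000 in the opaque box, $1,000 in the transparent one) and an illustrative predictor accuracy of 0.99; both figures are assumptions for the sketch, not something from the original post:

```python
# Expected payoffs in Newcomb's problem, under assumed standard amounts
# and an illustrative predictor accuracy (both are assumptions for this
# sketch, not stated in the comment above).

MILLION = 1_000_000   # opaque box contents if one-boxing was predicted
THOUSAND = 1_000      # transparent box contents
ACCURACY = 0.99       # assumed probability the predictor is correct

# One-boxing: you get the million whenever the predictor correctly
# foresaw that you would one-box.
ev_one_box = ACCURACY * MILLION

# Two-boxing: you always get the thousand, plus the million only in the
# rare case the predictor wrongly expected you to one-box.
ev_two_box = THOUSAND + (1 - ACCURACY) * MILLION

print(f"one-box: {ev_one_box:>12,.0f}")   # 990,000
print(f"two-box: {ev_two_box:>12,.0f}")   #  11,000
```

For any fixed contents of the boxes, two-boxing is of course $1,000 better, which is the causal dominance argument; the expected values above are just the arithmetic behind the point that the one-boxer is the one who walks away with the million.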