It also helps that most Bayesian decision algorithms actually take on the arg max_a E[U | a] reasoning of Evidential Decision Theory, i.e. maximizing expected utility with the probabilities conditioned on the action, which means that whenever you invoke your self-image as a capital-B Bayesian you are semi-consciously invoking Evidential Decision Theory, which does get the right answer on Newcomb's problem, even if it messes up on other problems.
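For concreteness, here is a minimal sketch of that arg max rule applied to Newcomb's problem; the 99% predictor accuracy and the $1,000 / $1,000,000 payoffs are my own illustrative assumptions, not anything claimed above:

```python
# A minimal sketch of the EDT rule: arg max_a E[U | a], applied to Newcomb's problem.
# Predictor accuracy (0.99) and payoffs are illustrative assumptions.

ACTIONS = ["one-box", "two-box"]

# P(opaque box is full | action), encoding the predictor's assumed 99% accuracy.
p_full_given = {"one-box": 0.99, "two-box": 0.01}

def utility(action, box_full):
    # $1,000,000 in the opaque box if full; the transparent box always holds $1,000.
    payout = 1_000_000 if box_full else 0
    if action == "two-box":
        payout += 1_000
    return payout

def edt_expected_utility(action):
    # Expected utility with outcome probabilities conditioned on the action taken.
    p = p_full_given[action]
    return p * utility(action, True) + (1 - p) * utility(action, False)

best = max(ACTIONS, key=edt_expected_utility)
for a in ACTIONS:
    print(f"E[U | {a}] = {edt_expected_utility(a):,.0f}")
print("EDT chooses:", best)  # one-box
```

Under these assumptions the conditional expectations come out to about 990,000 for one-boxing versus 11,000 for two-boxing, so the EDT rule one-boxes.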
(Commenting because I got here while looking for citations for my WIP post about another way to handle Newcomb-like problems.)