Not the Doomsday Argument, but self-locating probabilities can certainly be useful in decision making, as Caspar Oesterheld and I argue, for example, here: http://www.cs.cmu.edu/~conitzer/FOCALAAAI23.pdf, and as we show can be done consistently in various ways here: https://www.andrew.cmu.edu/user/coesterh/DeSeVsExAnte.pdf
Let’s take the AI driving problem in your paper as an example. The better strategy is taken to be the one that gives the better overall reward across all drivers. Whether the rewards from the two instances of a bad driver should be counted cumulatively or only once is what divides halfers and thirders. Once that is settled, the optimal decision can be calculated from the relative fractions of good/bad drivers (or instances), as in the sketch below. It does not involve taking the AI’s perspective in a particular instance and choosing the best action for that particular instance, which is what would require self-locating probability. The “right decision” is justified by averaging over all drivers/instances, which does not depend on the particularity of self and now.
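To make the two counting rules concrete, here is a minimal Python sketch. It is my own illustration, not code from the paper: the `strategy_value` function, both strategies, and all the payoff numbers are hypothetical assumptions, chosen only so that the ranking of two strategies flips between the halfer (per-driver) and thirder (per-instance) conventions.

```python
# Illustrative sketch only -- not code from the linked paper. The payoff
# numbers and both strategies are hypothetical, chosen so the ranking
# flips between the two counting conventions.

def strategy_value(p_good, reward_good, reward_bad, per_instance):
    """Average reward of a strategy over the population.

    p_good       -- fraction of drivers that are good (one instance each)
    reward_good  -- reward earned on a good driver's single instance
    reward_bad   -- reward earned on EACH of a bad driver's two instances
    per_instance -- True: count a bad driver's two instances cumulatively
                    (thirder-style); False: count each driver once
                    (halfer-style)
    """
    p_bad = 1.0 - p_good
    bad_weight = 2.0 if per_instance else 1.0
    total = p_good * reward_good + p_bad * bad_weight * reward_bad
    # Normalizing by the number of drivers (or instances) gives an average;
    # it does not change which strategy ranks higher under a fixed rule.
    return total / (p_good + p_bad * bad_weight)

# Two made-up strategies: "bold" does better on good drivers,
# "cautious" does better on bad drivers' instances.
def cautious(per_instance):
    return strategy_value(0.5, reward_good=1.0, reward_bad=0.8,
                          per_instance=per_instance)

def bold(per_instance):
    return strategy_value(0.5, reward_good=1.5, reward_bad=0.4,
                          per_instance=per_instance)

for rule, per_inst in [("halfer (per driver)", False),
                       ("thirder (per instance)", True)]:
    print(f"{rule}: cautious={cautious(per_inst):.3f}, "
          f"bold={bold(per_inst):.3f}")
```

With these made-up numbers, bold wins under per-driver counting (0.950 vs 0.900) while cautious wins under per-instance counting (0.867 vs 0.767). That is the point: the halfer/thirder choice must be settled before “the optimal decision” is even well defined, and nothing in the calculation requires a probability about which instance “I” am in.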
Self-locating probability would be useful for decision-making if the decision were evaluated by its effect on the self, not by its collective effect on a reference class. But no rational strategy exists for that goal.