Just taking the question at face value, I would choose to lift weights, for policy-selection reasons. If I eat chocolate, the non-Boltzmann-brain versions of me will eat it too, and I care a lot more about those versions. I'm not sure how to square that mathematically with infinitely many versions of me existing, but I was already confused about that.
The theme here seems similar to Stuart's past writing arguing that a lot of anthropic problems implicitly turn on preference. The answer to your decision problem seems to depend heavily on how much you care about the Boltzmann-brain versions of yourself.