Consequentialist value systems are a huge class; of course not all consequentialist value systems are praiseworthy! But there are terrible agent-neutral value systems, too, including conventional value systems with an extra minus sign, Clippy values, and plenty of others.
Yeah, the objection wasn’t supposed to be that the definition of “consequentialism” is bad because some implausible view counts as consequentialist on it. The objection was that pretty much any maximizing view could count as consequentialist, so the distinction isn’t really worth making.
Is there a principled reason to worry about being in a simulation but not worry about being a Boltzmann brain?
Here are very similar arguments:
If posthumans run ancestor simulations, most of the people in the actual world with your subjective experiences will be sims.
If two beings exist in one world and have the same subjective experiences, your probability that you are one should equal your probability that you are the other.
Therefore, if posthumans run ancestor simulations, you are probably a sim.
vs.
If our current model of cosmology is correct, most of the beings in the history of the universe with your subjective experiences will be Boltzmann brains.
If two beings exist in one world and have the same subjective experiences, your probability that you are one should equal your probability that you are the other. (This premise is sketched a bit more concretely after the comparison.)
Therefore, if our current model of cosmology is correct, you are probably a Boltzmann brain.
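To make the shared second premise concrete, here’s a rough sketch of how it could be cashed out (the notation is mine, not something either argument commits to). Let H be the hypothesis in question (posthumans run ancestor simulations / our current cosmological model is correct) and let E be your total evidence. Suppose that, conditional on H, there are N beings whose subjective experiences match E, of whom k are sims (or Boltzmann brains). The indifference premise says

P(I am being i | H & E) = 1/N for each of the N beings whose experiences match E,

so summing over the k sims (or Boltzmann brains),

P(I am a sim (or Boltzmann brain) | H & E) = k/N,

which is close to 1 whenever almost all of the beings with your experiences are sims (or Boltzmann brains). That’s all each conclusion needs.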
Expanding your evidence from your present experiences to all the experiences you’ve had doesn’t help. There will still be lots more Boltzmann brains that last for as long as you’ve been having experiences and whose experiences are just like yours. Most plausible ways of expanding your evidence have similar effects.
I suppose you could try arguing that the Boltzmann brain scenario, but not the simulation scenario, is self-defeating. In the Boltzmann scenario, your reasons for accepting the theory (results of various experiments, etc.) are no good, since none of it really happened. In the simulation scenario, you really did see those results; they were just realized in a funny sort of way that you didn’t expect. It would be nice if the relevance of this argument were better spelled out and cashed out in a plausible Bayesian principle.