In situations with multiple copies of an agent, or under some other form of anthropic uncertainty, asking for the probability of being in a certain situation is misleading. The morally real thing is the probability of those situations/worlds themselves (which acts as a degree of caring), not the probability of being in them. And even that probability depends on your decisions, to the point where your decisions in some situations can make those situations impossible.
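For concreteness, a minimal Sleeping Beauty-style sketch of the distinction (the setup and numbers here are illustrative assumptions, not part of the original claim): a fair coin produces one awakening on Heads and two on Tails. The world probabilities are fixed by the coin,

\[
P(W_H) = P(W_T) = \tfrac{1}{2},
\]

whereas "the probability of being in a given awakening" requires an extra anthropic convention: SIA-style counting gives 1/3 per awakening, while SSA-style counting gives 1/2, 1/4, 1/4, so that quantity is convention-dependent. A bet paying u per awakening, by contrast, can be evaluated from the world weights alone,

\[
\mathbb{E}[\text{payout}] = P(W_H)\cdot u + P(W_T)\cdot 2u = \tfrac{3}{2}u,
\]

treating P(W) as a degree of caring about each world, with no appeal to the probability of being any particular copy.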
That’s a really useful distinction. Thank you for taking the time to make it! I also think I made it sound like “simulator” worlds allow for objective morality. In actuality, I think a supra-universal reality might allow for simulator worlds, and it might also allow for objective morality (by some definitions of it), but the simulator worlds and the objective morality aren’t directly related in their own right.
The best argument that we are in a simulation is, I think, mine: https://link.springer.com/article/10.1007/s00146-015-0620-9