I think according to UDT, it doesn’t make a difference whether or not you’re in a me-simulation, because a priori you don’t know which person it is that has consciousness, and you’re not allowed to update once you find out. Presumably the UDT algorithm’s output will be logically correlated with your choices regardless of whether or not you are actually conscious.
In more detail: UDT has to output either “Michaël Trazzi should be an EA” or “Michaël Trazzi should be an ethical egoist”. To compute the value of these two outputs, it takes the expected value over all possible ways the world could be, which include not just Michaël Trazzi being in a me-simulation but also each of the other 7 billion people being in such a simulation. In those latter worlds the EV is clearly greater if Michaël Trazzi is an EA (his egoism would produce nothing for whoever is actually conscious), so that is what UDT would output.
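As a minimal toy sketch of that comparison (my own illustration, not anything from the post): assume a uniform prior over which of the ~7 billion people is the one who is conscious, and made-up payoffs on which egoism only pays off in the world where Michaël Trazzi himself is the conscious one. All the numbers and payoff structure below are hypothetical assumptions, just to show why the "someone else is conscious" worlds dominate the expected value.

```python
# Toy illustration of the UDT-style EV comparison described above.
# All payoffs and priors are made-up assumptions for illustration only.

N_PEOPLE = 7_000_000_000  # hypotheses: "person i is the one who is conscious"

# Assumed toy payoffs, measured as value to whoever is actually conscious:
# if it is Michaël Trazzi himself, egoism pays off more for him;
# if it is anyone else, only his being an EA produces value for them.
VALUE_IF_EA = {"self_conscious": 1.0, "other_conscious": 1.0}
VALUE_IF_EGOIST = {"self_conscious": 2.0, "other_conscious": 0.0}

def expected_value(payoffs):
    """Expected payoff over all equally weighted hypotheses about who is conscious."""
    p_self = 1 / N_PEOPLE   # prior that Michaël Trazzi is the conscious one
    p_other = 1 - p_self    # prior that someone else is
    return p_self * payoffs["self_conscious"] + p_other * payoffs["other_conscious"]

ev_ea = expected_value(VALUE_IF_EA)
ev_egoist = expected_value(VALUE_IF_EGOIST)

# UDT picks whichever output has higher EV across all hypotheses,
# without updating on which person turns out to be conscious.
if ev_ea > ev_egoist:
    print("UDT output: Michaël Trazzi should be an EA")
else:
    print("UDT output: Michaël Trazzi should be an ethical egoist")
```

With a uniform prior, the worlds where someone else is the conscious one carry almost all the weight, so under these toy numbers the EA output wins by roughly 1.0 to ~0.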
I guess this doesn’t matter if your solipsism is just a cover for regular egoism (which it sounds like is the case from your other comments).