Yes, if you are so selfish that you don’t care about other instances of yourself, then you have a problem.
If there is no objective fact that simulations of you actually are you, and you subjectively don’t care about your simulations, where is the error? Rationality doesn’t require you to be unselfish... indeed, decision theory is about being effectively selfish.
In fact, almost all humans don’t care equally about all instances of themselves. Currently, the only dimension we have is time, but there’s no reason to think that copies, especially non-interactive copies (with no continuity of future experiences), would be MORE important than 50-year-hence instances.
I’d expect this to be the common reaction: individuals care a lot about their instance, and kind of abstractly care about their other instances, but mostly in far-mode, and are probably not willing to sacrifice very much to improve that never-observable instance’s experience.
Note that this is DIFFERENT from committing to a policy that affects some potential instances and not others, without knowing which one will obtain.
If there is no objective fact that simulations of you actually are you, and you subjectively don’t care about your simulations, where is the error?
I meant “if you are so selfish that your simulations/models of you don’t care about the real you”.
Rationality doesn’t require you to be unselfish... indeed, decision theory is about being effectively selfish.
Sometimes selfish rational policy requires you to become less selfish in your actions.
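A minimal sketch of that last point, in Python (the payoffs, the “share”/“hoard” policies, and the 50/50 credence are all hypothetical numbers, not anything from the thread): when a purely selfish agent commits to a policy before knowing which instance it will turn out to be, the selfish expected-value calculation can favor the option that benefits the other instance.

```python
# Toy illustration: ex ante policy choice under uncertainty about
# which instance you will turn out to be. All numbers are made up.

# Utility to whichever instance you end up being:
#   "share": the original pays a small cost so the copy avoids a big loss
#   "hoard": the original keeps everything; the copy eats the loss
payoffs = {
    "share": {"original": 8, "copy": 7},
    "hoard": {"original": 10, "copy": 0},
}

p_original = 0.5  # credence that you will turn out to be the original


def ex_ante_value(policy: str) -> float:
    """Expected utility of committing to `policy` before knowing which instance obtains."""
    return (p_original * payoffs[policy]["original"]
            + (1 - p_original) * payoffs[policy]["copy"])


# Ex ante, the selfish agent prefers "share" (7.5 > 5.0), even though,
# once it knows it is the original, it would prefer "hoard" (10 > 8).
best = max(payoffs, key=ex_ante_value)
print({p: ex_ante_value(p) for p in payoffs}, "->", best)
```

Under these assumed numbers, the “less selfish” action falls out of a purely selfish policy chosen before the indexical uncertainty is resolved, which is exactly the distinction drawn above between policy-level commitment and simply caring about one’s copies.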