Two possible counterarguments about blackmail scenario:
A perfectly rational policy and perfectly rational actions aren’t compatible in some scenarios. Sometimes the rational decision now is to turn yourself into a less rational agent in the future. You can’t have your cake and eat it too.
If there is an (almost) perfect predictor in the scenario, you can’t be sure whether you are the real you or the model of you inside the predictor. Any argument in favor of you being the real you should work equally well for the model of you; otherwise it would be a bad model. Yes, if you are so selfish that you don’t care about the other instance of yourself, then you have a problem.
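For concreteness, here is a minimal expected-value sketch of the first counterargument. All the numbers (costs, predictor accuracy) are made up for illustration, and the setup is just one simple way to model a blackmailer that is also a near-perfect predictor and only blackmails agents it predicts will pay.

```python
# Toy model (hypothetical numbers): the blackmailer is a near-perfect predictor
# and only blackmails agents it predicts will give in.
COST_PAY = 10             # cost of paying once blackmailed
COST_REFUSE = 100         # cost of the threat being carried out
P_PREDICTOR_RIGHT = 0.99  # accuracy of the blackmailer's model of you

def expected_cost(policy_pays: bool) -> float:
    """Expected cost of committing to a policy *before* any blackmail happens."""
    if policy_pays:
        # You are (almost always, correctly) predicted to pay,
        # so you get blackmailed and pay.
        return P_PREDICTOR_RIGHT * COST_PAY
    else:
        # You are only blackmailed on a prediction error, in which case your
        # policy says to refuse and the threat is carried out.
        return (1 - P_PREDICTOR_RIGHT) * COST_REFUSE

print(expected_cost(policy_pays=True))   # ~9.9
print(expected_cost(policy_pays=False))  # ~1.0
```

The “refuse” policy wins ex ante (~1 vs ~9.9), even though once you are actually blackmailed, paying (10) looks better than refusing (100): the ex-ante-rational policy and the in-the-moment-rational action come apart.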
Yes, if you are so selfish that you don’t care about the other instance of yourself, then you have a problem.
If there is no objective fact that simulations of you actually are you, and you subjectively don’t care about your simulations, where is the error? Rationality doesn’t require you to be unselfish... indeed, decision theory is about being effectively selfish.
In fact, almost all humans don’t care equally about all instances of themselves. Currently, the only dimension we have is time, but there’s no reason to think that copies, especially non-interactive copies (with no continuity of future experiences), would be MORE important than 50-year-hence instances (a toy weighting sketch follows below).
I’d expect this to be the common reaction: individuals care a lot about their instance, and kind of abstractly care about their other instances, but mostly in far-mode, and are probably not willing to sacrifice very much to improve that never-observable instance’s experience.
Note that this is DIFFERENT from committing to a policy that affects some potential instances and not others, without knowing which one will obtain.
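To make the weighting idea concrete, here is a toy sketch (my own framing, with made-up numbers): instances are valued with an exponential time discount, and a non-interactive copy is capped at no more weight than the 50-year-hence self.

```python
# Toy instance-weighting sketch (hypothetical function and numbers).
def instance_weight(years_ahead: float, is_noninteractive_copy: bool,
                    annual_discount: float = 0.97) -> float:
    """Weight placed on an instance's experiences, relative to 'me, now' = 1.0."""
    w = annual_discount ** years_ahead
    if is_noninteractive_copy:
        # Assumption for illustration: a copy with no continuity of future
        # experience counts for no more than a 50-year-hence self.
        w = min(w, annual_discount ** 50)
    return w

print(instance_weight(0, False))   # 1.0   : me, now
print(instance_weight(50, False))  # ~0.22 : me, in 50 years
print(instance_weight(0, True))    # ~0.22 : a never-observable copy, capped
```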
If there is no objective fact that simulations of you actually are you, and you subjectively don’t care about your simulations, where is the error?
I meant “if you are so selfish that your simulations/models of you don’t care about the real you”.
Rationality doesn’t require you to be unselfish... indeed, decision theory is about being effectively selfish.
Sometimes a selfish rational policy requires you to become less selfish in your actions.