But sometimes, you’ll find a better solution than if you only lived in the moment.
Yes, I see that your decision theory (is it the same as Eliezer’s?) gives better solutions in the following circumstances:
dealing with Omega
dealing with copies of oneself (see the sketch below)
cooperating with a counterpart in another possible world
Do you think it gives better solutions in the case of AIs (who don’t initially think they’re copies of each other) trying to cooperate? If so, can you give a specific scenario and show how the solution is derived?
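For concreteness, here is the kind of derivation I have in mind, using the "copies of oneself" case that I already grant. It is only a minimal sketch: the Prisoner's Dilemma payoff numbers and the function name are my own illustration, and it assumes both players are known to run this exact procedure.

```python
# Minimal sketch of the "copies of oneself" case: a one-shot Prisoner's
# Dilemma where both players are known to run this exact decision procedure.
# The payoff numbers are illustrative assumptions, not taken from anyone's writeup.

PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def copy_decision():
    """Choose an action, given that an exact copy of this procedure
    will necessarily output the same action as we do."""
    best_action, best_payoff = None, float("-inf")
    for action in ("C", "D"):
        # Because the opponent is a copy, its action is logically tied to
        # ours, so only the diagonal outcomes are reachable.
        my_payoff, _ = PAYOFFS[(action, action)]
        if my_payoff > best_payoff:
            best_action, best_payoff = action, my_payoff
    return best_action

print(copy_decision())  # -> "C": both copies cooperate and get 3 each
```

What I don't see is how to derive the analogue of the "only the diagonal outcomes are reachable" step when the two AIs are not literal copies and don't initially believe they are. That is the step I'd like to see worked out for a specific scenario.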