> The compromise strategy just has to have average utility > x/n
I'm still not sure this is right. You have to consider not just f_i(S_i) but all the f_i(S_j)'s as well, i.e. how well each strategy scores under the other planets' utility functions. So I think the relevant cutoff here is 1.9: a compromise strategy that does better than that under everyone's utility function would be a win-win-win. The number of possible utility functions isn't important, just their relative probabilities.
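The criterion above can be made concrete with a toy example. The payoff numbers here are invented for illustration (only the 1.9 cutoff comes from the discussion): each planet i has a utility function f_i, and a candidate compromise counts as a win-win-win only if it clears the cutoff under *every* f_i, not just on average.

```python
CUTOFF = 1.9  # the cutoff proposed in the comment

# payoffs[i][j] = f_i(S_j): planet i's utility for strategy S_j.
# Columns 0-2 are each planet's own preferred strategy; column 3 is a
# hypothetical compromise strategy. All numbers are made up.
payoffs = [
    [3.0, 1.0, 1.0, 2.1],  # f_1
    [1.0, 3.0, 1.0, 2.0],  # f_2
    [1.0, 1.0, 3.0, 2.2],  # f_3
]

def is_win_for_all(j, payoffs, cutoff=CUTOFF):
    """True iff strategy j beats the cutoff under every planet's utility function."""
    return all(row[j] > cutoff for row in payoffs)

# Each planet's own favourite scores 3.0 for itself but only 1.0 for the
# others, so it fails; the compromise clears the bar for everyone.
print([is_win_for_all(j, payoffs) for j in range(4)])  # → [False, False, False, True]
```

Note that strategy 3's *average* utility (2.1) is lower than each favourite's own-planet payoff of 3.0; what matters is that its minimum across the f_i's stays above the cutoff.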
You're right that it's far from obvious that such a compromise strategy would exist in real life. It's worth considering that the utility functions might not be completely arbitrary: we might expect some of them to result from systematizing evolved social norms. We can also exclude UFAI disasters from our reference class, since we can choose who we want to play PD with, as long as we expect them to choose the same way.