An interesting idea, but I’m afraid it is little more than interesting. Given all your premises, it does follow that compromise would be the optimal strategy, but I find some of those premises unlikely:
- That there is a small, easily computable number of potential utility functions, like 10 as opposed to 10^(2^100).
- I have qualms with the assumption that these candidate utility functions are simply added together. I would more readily accept them being mutually exclusive (e.g. one potential utility function is “absorb all other worlds”, another is “defect in all inter-species prisoner’s dilemmas for deontological reasons”); see the toy sketch after this list.
- Though I won’t give it a probability estimate, I consider “humanity has worked out that it’s very likely that a lot of alien worlds exist” to be a potential defeater.
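To make that second complaint concrete, here is a toy sketch. All of the numbers, policy names, and candidate utility functions below are invented purely for illustration; the point is only that the additive reading and a mutually-exclusive reading pull in opposite directions.

```python
# Toy illustration: whether "compromise" comes out on top depends on how the
# candidate utility functions are aggregated. All values are made up.

# Credence assigned to each hypothetical candidate utility function.
credence = {"absorb_all_worlds": 0.5, "defect_in_all_games": 0.5}

# Additive reading (as I understand the post): a compromise policy scores
# moderately well with every candidate at once.
additive_payoffs = {
    "compromise": {"absorb_all_worlds": 6, "defect_in_all_games": 6},
    "pure":       {"absorb_all_worlds": 10, "defect_in_all_games": 0},
}

# Mutually-exclusive reading: each candidate demands something a compromise
# policy cannot do (absorbing everyone, or always defecting), so compromise
# scores roughly zero with both of them.
exclusive_payoffs = {
    "compromise": {"absorb_all_worlds": 0, "defect_in_all_games": 0},
    "pure":       {"absorb_all_worlds": 10, "defect_in_all_games": 0},
}

def expected_score(payoffs, policy):
    """Credence-weighted expected score of a policy."""
    return sum(credence[u] * payoffs[policy][u] for u in credence)

for label, table in [("additive", additive_payoffs), ("exclusive", exclusive_payoffs)]:
    for policy in table:
        print(label, policy, expected_score(table, policy))
```

Under the additive reading, compromise beats the pure policy (6.0 vs 5.0); under the mutually-exclusive reading it loses (0.0 vs 5.0). That is why I don’t think the conclusion survives dropping the additivity assumption.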
Even if none of those complaints holds up, I still see no reason to be worried about the result. Why worry about getting a higher score?