A Friendly AI would have to be able to aggregate each person’s preferences into one utility function. The most straightforward and obvious way to do this is to agree on some way to normalize each individual’s utility function, and then add them up. But many people don’t like this, usually for reasons involving utility monsters.
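To make "normalize each individual's utility function, and then add them up" concrete, here is a minimal sketch in Python. The data and names are hypothetical, and range normalization to [0, 1] is just one of many possible (and equally arbitrary) normalization choices:

```python
def range_normalize(utils):
    """Rescale one person's utilities over outcomes to the [0, 1] range."""
    lo, hi = min(utils.values()), max(utils.values())
    if hi == lo:  # indifferent between all outcomes
        return {o: 0.0 for o in utils}
    return {o: (u - lo) / (hi - lo) for o, u in utils.items()}

def aggregate(people):
    """Sum the normalized utilities of every person, outcome by outcome."""
    total = {}
    for utils in people:
        for outcome, u in range_normalize(utils).items():
            total[outcome] = total.get(outcome, 0.0) + u
    return total

# Hypothetical example: two people, three candidate futures.
alice = {"A": 10, "B": 0, "C": 5}
bob   = {"A": 0,  "B": 1, "C": 0.9}
print(aggregate([alice, bob]))  # the AI would then pick the argmax outcome
```

The utility-monster worry shows up in how sensitive the sum is to the normalization: whoever's scale ends up effectively larger gets correspondingly more say in the argmax.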
I should think most of those who don’t like it do so because their values would be better represented by other approaches. A lot of those involved in the issue think they deserve more than a one-in-seven-billionth share of the future—and so pursue approaches that will help to deliver them that. This probably includes most of those with the skills to create such a future, and most of those with the resources to help fund them.
They could just insist on a normalization scheme that is blatantly biased in favor of their utility function. In a theoretical sense, this doesn’t cause a problem, since there is no objective way to define an unbiased normalization anyway. (Of course, if everyone insisted on biasing the normalization in their favor, there would be a problem.)
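A "blatantly biased" normalization is easy to state in the same sketch: reusing the hypothetical range_normalize above, scaling one person's normalized utilities by a large factor is itself a perfectly valid normalization choice, and it lets that person's preferences dominate the sum.

```python
def biased_aggregate(people, favored_index=0, scale=1000.0):
    """Like aggregate(), but one person's utilities are scaled up heavily."""
    total = {}
    for i, utils in enumerate(people):
        weight = scale if i == favored_index else 1.0
        for outcome, u in range_normalize(utils).items():
            total[outcome] = total.get(outcome, 0.0) + weight * u
    return total

print(biased_aggregate([alice, bob]))  # Alice's favorite outcome now dominates
```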
I think most of those involved realise that such projects tend to be team efforts—and therefore some compromises over values will be necessary. Anyway, I think this is the main difficulty for utilitarians: most people are not remotely like utilitarians—and so don’t buy into their bizarre ideas about what the future should be like.