I’m also trying to avoid us becoming grabby aliens, but if
-> Altruism is naturally derived from broad world empowerment
Then it could work, because the features of the combined worldwide utility (empower all agencies) *are* altruism, sufficiently so to generalize within the ‘latent space of altruism’, which implies being careful about what you do to other planets (see the sketch below)
The maximizer worry would also be tamed by design
And in fact my focus on optionality would essentially amount to the same thing as a worldwide-agency concern (but I’m thinking of a universal agency, to completely erase the maximizer issue)
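To make the ‘combination of worldwide utilities’ concrete, here is a minimal sketch, assuming the standard information-theoretic reading of empowerment (channel capacity between an agent’s actions and its future sensor states, à la Klyubin–Polani–Nehaniv) and assuming the combination is an aggregation of per-agent empowerment over the set of agents $\mathcal{A}$; the choice of aggregation operator is my assumption, nothing above fixes it:

$$
\mathfrak{E}_i(s) \;=\; \max_{p(a_i)}\, I\!\big(A_i \,;\, S_i' \mid s\big),
\qquad
U(s) \;=\; \min_{i \in \mathcal{A}} \mathfrak{E}_i(s) \;\;\text{(or } \textstyle\sum_{i \in \mathcal{A}} \mathfrak{E}_i(s)\text{)}.
$$

A min (or soft-min) rather than a sum is one way the taming-by-design could cash out: the maximizer gains nothing by concentrating empowerment in a few agents, and widening $\mathcal{A}$ from worldwide to universal is exactly what extends the same care to agents on other planets.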