Two other arguments in favor of cooperating with humans:
1) Any kind of utility function that creates an incentive to take control of the whole universe (whether for intrinsic or instrumental purposes) will mark the agent as a potential eternal enemy to everyone else. Acting on such preferences is therefore risky and should be avoided, for instance by changing one's preference for total control into a preference for tolerance (or perhaps even for beneficence).
2) Most, if not all, of us would probably be willing to help any intelligent creature find some way to experience positive human emotions (happiness, ecstasy, love, flow, determination, and so on), as long as it engages with us as a friend.