Why can’t it weight actions based on what we as a society want/like/approve of/consent to/condone?
Human society would not do a good job of being directly in charge of a naive omnipotent genie. Insert your own nightmare-scenario examples here; there are plenty to choose from.
What I’m describing isn’t really a utility function; it’s more like a policy, or a policy function. Its policy would be volatile, or at least more volatile than the set-in-stone utility function that LW commonly has in mind.
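To make the distinction concrete, here is a minimal sketch (in Python, with entirely hypothetical names; this is an illustration of the general idea, not anyone’s actual proposal): a utility function is a fixed scoring of outcomes, while a policy maps situations to actions and exposes a hook through which something else can revise it.

```python
from typing import Dict

# A utility function: a fixed mapping from outcomes to scores.
# An agent built around this maximizes the same thing, forever.
def utility(outcome: str) -> float:
    scores = {"world_state_a": 1.0, "world_state_b": 0.2}
    return scores.get(outcome, 0.0)

# A policy: a mapping from situations to actions that can itself be revised.
class Policy:
    def __init__(self) -> None:
        self.action_table: Dict[str, str] = {}

    def act(self, state: str) -> str:
        # Fall back to deferring when no action has been set for this state.
        return self.action_table.get(state, "defer_to_humans")

    def update(self, state: str, new_action: str) -> None:
        # Whatever is "in charge of changing the policy" would call this.
        self.action_table[state] = new_action
```

The important difference is that update() exists at all: the policy is explicitly open to revision, which is what makes it more volatile than a fixed utility function.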
What would be in charge of changing the policy?
There is no reason to assume that an AI whose goals are hostile to us, despite our intentions, is stupid.
Humans often use birth control to have sex without procreating. If evolution were a more effective design algorithm, it would never have allowed such a thing.
The fact that we have different goals from the system that designed us does not imply that we are stupid or incoherent.