(examples chosen because they sit at different points on the spectrum between the two options, not because they are likely)
Moral Universalism could be true in some sense without being automatically compelling; the AI would still need to be programmed to find and/or follow it.
There could be a uniquely specified human morality that fulfills much of the same purpose Moral Universalism does for humans.
It might be possible to specify what we want in a more dynamic way than by freezing in current customs.
This is either wrong (the utility functions of the people involved aren’t queried in the dust speck problem) or so generic as to be encompassed in the concept of “utility calculation”.
Aggregating utility functions across different people is an unsolved problem, but not necessarily an unsolvable one. One way of avoiding utility monsters would be to normalize utility functions. The obvious way to do that leads to problems such as arachnophobes getting less cake even if they like cake just as much, but IMO that’s better than utility monsters.
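To make that concrete, here’s a minimal sketch of the normalization idea (the agents, outcomes, and the particular range-normalization scheme are purely illustrative assumptions, not a worked-out proposal): each person’s utilities are rescaled so their worst outcome maps to 0 and their best to 1 before summing. This neutralizes the utility monster, but also produces the arachnophobe-and-cake effect mentioned above.

```python
# Illustrative sketch: range-normalize each person's utilities before summing.
# Names and numbers are hypothetical.

def normalize(utilities: dict[str, float]) -> dict[str, float]:
    """Rescale one person's utilities so their worst outcome is 0 and best is 1."""
    lo, hi = min(utilities.values()), max(utilities.values())
    if hi == lo:  # indifferent between all outcomes
        return {o: 0.0 for o in utilities}
    return {o: (u - lo) / (hi - lo) for o, u in utilities.items()}

def aggregate(people: dict[str, dict[str, float]]) -> dict[str, float]:
    """Sum normalized utilities across people for each outcome."""
    totals: dict[str, float] = {}
    for utilities in people.values():
        for outcome, u in normalize(utilities).items():
            totals[outcome] = totals.get(outcome, 0.0) + u
    return totals

# The "utility monster" reports enormous raw numbers, but after
# normalization its vote counts no more than anyone else's.
people = {
    "alice":   {"cake": 10, "no cake": 0},
    "bob":     {"cake": 10, "no cake": 0},
    "monster": {"cake": 1e9, "no cake": -1e9},
}
print(aggregate(people))  # {'cake': 3.0, 'no cake': 0.0}

# The arachnophobe problem: someone whose utility range is dominated by a
# large fear (spiders) has their cake preference shrunk by the rescaling,
# even though they like cake just as much as anyone else.
arachnophobe = {"cake": 10, "no cake": 0, "spiders": -1000}
n = normalize(arachnophobe)
print(n["cake"] - n["no cake"])  # ~0.0099, versus 1.0 for alice or bob
```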