If you believe that human morality is isomorphic to preference utilitarianism—a claim that I do not endorse, but which is not trivially false—then using preferences from a particular point in time should work fine, assuming those preferences belong to humans. (Presumably humans would not value the creation of minds with other utility functions if this would obligate us to, well, value their preferences.)