The preferences of any actual human seem to form a directed graph, but that graph is incomplete and can contain cycles. Any way of transforming it into a complete acyclic graph (every pair of situations comparable, no preference loops) must differ from the original graph somewhere.
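To make the claim concrete, here is a minimal sketch (the items and edges are hypothetical, chosen only for illustration): a three-element preference cycle, and a check that every complete acyclic ordering of those elements must reverse at least one of the original preference edges.

```python
from itertools import permutations

# Edges mean "prefers left over right". A > B > C > A is a preference cycle.
prefs = {("A", "B"), ("B", "C"), ("C", "A")}
items = {"A", "B", "C"}

def disagreements(order):
    """Count how many preference edges a given total order reverses."""
    rank = {x: i for i, x in enumerate(order)}  # lower rank = more preferred
    return sum(1 for a, b in prefs if rank[a] > rank[b])

# Every total order (complete and acyclic by construction) disagrees with
# the cyclic graph on at least one edge.
min_disagree = min(disagreements(p) for p in permutations(items))
assert min_disagree >= 1
```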
What graph??? An accurate account would have to take care of every detail. I feel you are attacking some simplistic strawman, but I’m not sure of what kind.
Do you agree that it’s possible in principle to implement an artifact behaviorally indistinguishable from a human being that runs on expected utility maximization, with a sufficiently huge “utility function” and some simple prior? The claim seems both trivial and useless, though: it describes a surrogate, not an improvement.
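The sense in which the claim is trivial can be sketched as follows (a toy example with hypothetical names; the policy and observations stand in for arbitrarily complex behavior): any fixed input-to-action mapping can be reproduced by an "expected utility maximizer" whose contrived utility function simply scores the policy's own choices, so behavioral indistinguishability alone establishes very little.

```python
# The behavior to mimic: a fixed deterministic policy (hypothetical example).
policy = {"see_red": "stop", "see_green": "go"}
actions = ["stop", "go"]

def utility(observation, action):
    # Huge lookup-table "utility function": 1 for whatever action the
    # policy would take, 0 otherwise.
    return 1.0 if policy[observation] == action else 0.0

def eu_maximizer(observation):
    # Degenerate "simple prior": the observation is taken at face value,
    # so expected utility collapses to the utility itself.
    return max(actions, key=lambda a: utility(observation, a))

# The maximizer is behaviorally indistinguishable from the original policy.
assert all(eu_maximizer(obs) == policy[obs] for obs in policy)
```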