I’ll break down that point in case it’s non-obvious. Utilons do not exist in the real world—there is no method of measuring utilons.
(There is no such method in the context of this discussion, but figuring out how to “measure utilons” (with respect to humans) is part of the FAI problem. If an agent doesn’t maximize the utility suggested by that agent’s construction (in the same sense in which human preference can hopefully be defined based on humans), that would count as a failure of that agent’s rationality.)