Either procedure can be reframed, without loss, in terms of the other, or at least so it seems to me.
It does not seem so to me, unless you fold the tolerance framework into the utility function, at which point the notion of a utility function has become superfluous.
Still, there’s no reason that I can see why it must be the case that we exclusively weigh options in terms of tolerances and feedback rather than a (flawed) approach to maximizing utility.
The point here isn’t that humans can’t do utility-maximization; it’s merely that we don’t, unless we have made it one of our perceptual-tolerance goals. So, in weighing the two models, we have one that humans can in principle follow (but mostly don’t), and one that models what we mostly do and can also capture the flawed version of the other that we actually do as well.
Seems like a slam dunk to me, at least if you’re looking to understand or model humans’ actual preferences with the simplest possible model.
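To make the contrast concrete, here is a minimal, purely hypothetical sketch of the two decision procedures being compared. The names, thresholds, and toy "temperature" example are my own illustrative assumptions, not anything from the discussion: one agent ranks every option by a scalar utility (presupposing a total order up front), while the other simply acts to keep a perceived variable inside a tolerance band and otherwise does nothing.

```python
# Hypothetical sketch contrasting the two decision models discussed above.
# Names, thresholds, and the toy "options" are illustrative assumptions only.

def utility_maximizer(options, utility):
    """Pick the single option with the highest scalar utility.
    Presupposes a total order over all options up front."""
    return max(options, key=utility)

def tolerance_controller(perceived, setpoint, tolerance, adjust):
    """Act only when a perceived variable drifts outside its tolerance band;
    otherwise do nothing. No global ranking of options is ever computed."""
    error = perceived - setpoint
    if abs(error) <= tolerance:
        return perceived              # within tolerance: no action taken
    return adjust(perceived, error)   # feedback: nudge perception back toward the band

# Toy usage: choosing a room temperature.
options = [17, 19, 21, 23, 25]
best = utility_maximizer(options, utility=lambda t: -(t - 21) ** 2)

temp = 24.0
temp = tolerance_controller(temp, setpoint=21.0, tolerance=2.0,
                            adjust=lambda p, e: p - 0.5 * e)
```

The point of the toy example is only that the controller never needs the total ordering the maximizer presupposes; it just needs to notice when a perception leaves its tolerance band and push it back.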
does not in any way convince me that my attempt to consult my own utility is “illusory.”
The only thing I’m saying is illusory is the idea that utility is context-independent, and totally ordered without reflection.
(One bit of non-“semantic” relevance here is that we don’t know whether it’s even possible for a superintelligence to compute your “utility” for something without actually running a calculation that amounts to simulating your consciousness! There are vast spaces in all our “utility functions” which are indeterminate until we actually do the computations to disambiguate them.)
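As a rough illustration of that last point, here is a hypothetical sketch (the pair names, the cache, and the placeholder rule inside reflect() are all my own assumptions) of a preference relation that is only partially ordered: some comparisons are already settled, and the rest simply do not exist as data until a costly act of reflection is actually carried out.

```python
# Hypothetical illustration of a preference relation that is only partially
# ordered until an (expensive) act of reflection fills in the gap.
from functools import lru_cache

KNOWN = {("coffee", "tea"): "coffee"}   # comparisons already settled by habit

@lru_cache(maxsize=None)
def reflect(a, b):
    """Stand-in for the costly computation (deliberation, simulation of the
    person) that actually determines the preference; the rule used here is
    an arbitrary placeholder."""
    return min(a, b)

def prefer(a, b):
    """Return the preferred item, consulting settled judgments first and
    falling back to explicit reflection only when the pair is undetermined."""
    if (a, b) in KNOWN:
        return KNOWN[(a, b)]
    if (b, a) in KNOWN:
        return KNOWN[(b, a)]
    return reflect(a, b)   # indeterminate until the computation is actually run
```

Nothing hinges on the placeholder rule; the point is only that the preference over an undetermined pair is not stored anywhere, and comes into being (and is then cached) only when the reflection is actually performed.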