I’m not going to respond point for point; my interest in whether we make decisions based on tolerances or utilities is waning, since I believe the distinction is largely one of semantics. You might possibly convince me that more than semantics is at stake, but so far your arguments have been of the wrong kind to do so.
Obviously we aren’t rational utility-maximizers in any straightforward early-20th-century sense; there is a large literature on heuristics and biases, and I don’t dispute its validity. Still, there’s no reason that I can see why it must be the case that we exclusively weigh options in terms of tolerances and feedback rather than a (flawed) approach to maximizing utility. Either procedure can be reframed, without loss, in terms of the other, or at least so it seems to me. Your fluid and persuasive and persistent rephrasing of utility in terms of tolerance does not really change my opinion here.
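To be concrete about what I mean by “reframed without loss,” here’s a minimal sketch, with toy flavors, scores, and thresholds entirely of my own invention: a tolerance rule becomes maximization of a flat-topped indicator utility, and a utility rule becomes a tolerance of “within epsilon of the best available option.”

```python
def tolerance_choose(options, within_tolerance):
    """Tolerance/feedback style: take the first acceptable option."""
    for o in options:
        if within_tolerance(o):
            return o
    return None

def utility_choose(options, utility):
    """Maximizer style: take the option with the highest utility."""
    return max(options, key=utility)

def as_utility(within_tolerance):
    """Reframe a tolerance as a (flat-topped) utility: acceptable -> 1, else 0."""
    return lambda o: 1.0 if within_tolerance(o) else 0.0

def as_tolerance(options, utility, eps=0.0):
    """Reframe a utility as a tolerance: acceptable = within eps of the best."""
    best = max(utility(o) for o in options)
    return lambda o: utility(o) >= best - eps

flavors = ["vanilla", "lychee", "garlic"]
score = {"vanilla": 6, "lychee": 8, "garlic": 1}.get  # toy utility
ok = lambda f: score(f) >= 5                          # toy tolerance

print(tolerance_choose(flavors, ok))                            # vanilla
print(utility_choose(flavors, as_utility(ok)))                  # vanilla
print(utility_choose(flavors, score))                           # lychee
print(tolerance_choose(flavors, as_tolerance(flavors, score)))  # lychee
```

Tie-breaking details aside, each procedure picks from the same acceptable set under the other’s description; that is the sense of “without loss” I intend.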
As for ice cream flavors, I find that the ingenuity of chefs in manufacturing new ice cream flavors generally keeps pace with my ability to conceive of new flavors; I have not had to invent recipes for lychee sorbet or honey-mustard ice cream because there are already people out there trying to sell them to me. I often mix multiple flavors, syrups, and toppings. I would be glad to taste garlic ice cream if it were free, but I expect it would be unpleasant enough that I would have to be paid roughly $5 an ounce to eat it, mainly because I am counting calories and would have to cut out other foods that I enjoy more to make room for the garlic. As I’ve already admitted, though, I am probably not a typical example. The fact that my estimate of $5/oz is almost certainly biased, and is made with so little confidence that a better estimate of what you would have to pay me to eat it might be anywhere from -$0.50/oz to +$30/oz, does not in any way convince me that my attempt to consult my own utility is “illusory.”
Either procedure can be reframed, without loss, in terms of the other, or at least so it seems to me.
It does not seem so to me, unless you recapitulate/encapsulate the tolerance framework into the utility function, at which point the notion of a utility function has become superfluous.
Still, there’s no reason that I can see why it must be the case that we exclusively weigh options in terms of tolerances and feedback rather than a (flawed) approach to maximizing utility.
The point here isn’t that humans can’t do utility-maximization; it’s merely that we don’t, unless we have made it one of our perceptual-tolerance goals. So, weighing the two models, we have one that humans can follow in principle (but mostly don’t), and one that captures what we mostly do and can also model the flawed version of maximizing that we actually perform.
Seems like a slam dunk to me, at least if you’re looking to understand or model humans’ actual preferences with the simplest possible model.
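To make the asymmetry concrete, here’s a toy sketch of the tolerance model (the goals, bands, and numbers are all made up for illustration): the agent acts only when a perception drifts outside its band, and “keep my plan’s estimated utility high” can itself be installed as just another tolerance goal, which is how the model also covers the flawed maximizing we occasionally do.

```python
def tolerance_agent(goals, state):
    """Act only on perceptions that have drifted outside their tolerance band."""
    for name, (lo, hi, act) in goals.items():
        value = state[name]
        if not (lo <= value <= hi):
            act(name, value)  # in-tolerance perceptions trigger nothing at all

def correct(name, value):
    print(f"{name} out of tolerance at {value}; acting to bring it back")

# Made-up perceptions and bands.
state = {"hunger": 7, "plan_utility_estimate": 0.4}
goals = {
    # An ordinary perceptual-tolerance goal: keep hunger within [0, 5].
    "hunger": (0, 5, correct),
    # Utility-maximization installed *as* a tolerance goal: deliberate and
    # re-optimize only when the plan's estimated utility drops below 0.6.
    "plan_utility_estimate": (0.6, 1.0, correct),
}

tolerance_agent(goals, state)
```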
does not in any way convince me that my attempt to consult my own utility is “illusory.”
The only thing I’m calling illusory is the idea that utility is context-independent and totally ordered without reflection.
(One bit of non-“semantic” relevance here is that we don’t know whether it’s even possible for a superintelligence to compute your “utility” for something without actually running a calculation that amounts to simulating your consciousness! There are vast spaces in all our “utility functions” that are indeterminate until we actually do the computations to disambiguate them.)
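As a toy illustration of “indeterminate until computed” (the class and the stand-in deliberation function are my own, purely for illustration): preferences held as a partial order, where a comparison between two options simply does not exist until an explicit reflection step is run.

```python
class LazyPreferences:
    """Preferences as a partial order: pairs are incomparable until reflected on."""

    def __init__(self, reflect):
        self.known = {}         # (a, b) -> bool, comparisons already computed
        self.reflect = reflect  # stand-in for the costly deliberation itself

    def prefers(self, a, b):
        # Before this call there is no stored fact about a vs. b at all;
        # the comparison comes into existence only when the computation runs.
        if (a, b) not in self.known:
            self.known[(a, b)] = self.reflect(a, b)
        return self.known[(a, b)]

# Deterministic toy stand-in for deliberation; imagine something expensive
# and context-dependent here instead.
prefs = LazyPreferences(reflect=lambda a, b: len(a) < len(b))

print(prefs.prefers("lychee sorbet", "garlic ice cream"))  # True
print(len(prefs.known))  # 1: only the pair we actually deliberated about
```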