The obvious way to combine the two systems—tolerance and utility—is to say that stimuli that exceed our tolerances prompt us to ask questions about how to solve a problem, and utility calculations answer those questions. This is not an original idea on my part, but I do not remember where I read about it.
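To make the division of labor concrete, here is a minimal sketch of that control flow. Everything in it — the variable names, thresholds, and utility values — is invented for illustration; it is a toy model of the proposal, not a claim about the real mechanism:

```python
# Hypothetical sketch: a tolerance violation raises the question,
# and a utility calculation answers it. All names and numbers are
# invented for illustration.

def step(variables, tolerances, options, utility):
    """One pass of the combined tolerance/utility system."""
    for name, value in variables.items():
        low, high = tolerances[name]
        if not (low <= value <= high):
            # Tolerance exceeded: "which option best restores this
            # variable?" is answered by the utility calculation.
            return max(options, key=utility)
    return None  # everything within tolerance; nothing prompts action

# Example: ice-cream satisfaction has drifted below tolerance.
variables = {"ice_cream_satisfaction": 0.2}
tolerances = {"ice_cream_satisfaction": (0.5, 1.0)}
options = ["vanilla", "pistachio", "rum raisin"]
utility = {"vanilla": 0.6, "pistachio": 0.9, "rum raisin": 0.7}.get

print(step(variables, tolerances, options, utility))  # -> pistachio
```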
What decision is made when multiple choices all leave the variables within tolerance?
The one that appears to maximize utility after a brief period of analysis. For example, I want ice cream; my ice cream satisfaction index is well below tolerance. Fortunately, I am in an ice cream parlor, which carries several flavors. I will briefly reflect on which variety maximizes my utility, which in this case is mostly defined by price, taste, and nutrition, and then pick a flavor that returns a high (not necessarily optimal) value for that utility.
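For concreteness, one hypothetical way that brief reflection could score flavors; the weights and attribute values are invented, and a real chooser presumably stops at "good enough" rather than proving optimality:

```python
# Hypothetical weighted-utility scoring for the ice-cream example.
# The weights and per-flavor attribute scores are made up.

WEIGHTS = {"price": 0.3, "taste": 0.5, "nutrition": 0.2}

flavors = {
    # attribute scores in [0, 1]; higher is better ("price" = cheapness)
    "vanilla":   {"price": 0.9, "taste": 0.6, "nutrition": 0.5},
    "chocolate": {"price": 0.8, "taste": 0.8, "nutrition": 0.3},
    "pistachio": {"price": 0.5, "taste": 0.9, "nutrition": 0.6},
}

def utility(attrs):
    return sum(WEIGHTS[k] * attrs[k] for k in WEIGHTS)

best = max(flavors, key=lambda f: utility(flavors[f]))
print(best, round(utility(flavors[best]), 2))  # -> pistachio 0.72
```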
What decision is made when none of the available choices leave the variables within tolerance?
A lack of acceptable alternatives leads to stress, which (a) broadens the range of acceptable outcomes, and (b) motivates analysis of how to avoid similar situations in the future. For example, I want ice cream; my ice cream satisfaction index is well below tolerance; unfortunately, I am in the desert. I find this situation unpleasant, and eventually reconcile myself to the fact that my ice cream satisfaction level will remain below what was previously thought of as ‘minimum’ tolerance for some time. However, upon returning to civilization, I will have a lower tolerance for ‘desert-related excursions’ and may attempt to avoid further trips through the desert.
Note that ‘minimum’ tolerance refers to the minimum level that will lead to casual selection of an acceptable alternative, rather than the minimum level that allows my decision system to continue functioning.
For example, I want ice cream; my ice cream satisfaction index is well below tolerance. Fortunately, I am in an ice cream parlor, which carries several flavors. I will briefly reflect on which variety maximizes my utility, which in this case is mostly defined by price, taste, and nutrition, and then pick a flavor that returns a high (not necessarily optimal) value for that utility.
Actually, I’d tend to say that you are not so much maximizing the utility of your ice cream choice, as you are ensuring that your expected satisfaction with your choice is within tolerance.
To put it another way, it’s unlikely that you’ll actually weigh price, nutrition, and taste in some sort of unified scoring system.
Instead, what will happen is that you’ll consider options that aren’t already ruled out by cached memories (e.g., you hate that flavor), and then predict whether each choice will throw any other variables out of tolerance: “this one costs too much… those nuts will give me indigestion… that’s way too big for my appetite… this one would taste good, but it just doesn’t seem like what I really want...”
Yes, some people do search for the “best” choice in certain circumstances, and would need to exhaustively consider the options in those cases. But this is not a matter of maximizing some world-state-utility; it is simply that each choice is also being checked against a “can I be certain I’ve made the best choice yet?” perception.
Even when we heavily engage our logical minds in search of “optimum” solutions, this cognition is still primarily guided by these kinds of asynchronous perceptual checks, just ones like “Is this formula really as elegant as I want it to be?” instead.
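A sketch of this alternative, with the cached rule-outs and perceptual checks entirely invented; the point is only that the first option tripping no alarm wins, with no unified score computed anywhere:

```python
# Hypothetical sketch of choice by tolerance checks rather than by a
# unified score: each candidate is tested against a set of predicted
# perceptions ("costs too much", "will upset my stomach", ...), and
# the first candidate that trips no alarm is taken.

ruled_out_by_memory = {"rum raisin"}  # cached "I hate that flavor"

# Predicted consequences per flavor; every value here is invented.
predicted = {
    "pistachio":  {"cost": 6.0, "indigestion": 0.0, "size": 1.0},
    "rocky road": {"cost": 4.0, "indigestion": 0.8, "size": 1.0},
    "vanilla":    {"cost": 3.0, "indigestion": 0.0, "size": 1.0},
}

def within_tolerance(p):
    return (p["cost"] <= 5.0             # "this one costs too much"
            and p["indigestion"] <= 0.3  # "those nuts -> indigestion"
            and p["size"] <= 1.5)        # "too big for my appetite"

for flavor in ["rum raisin", "pistachio", "rocky road", "vanilla"]:
    if flavor in ruled_out_by_memory:
        continue
    if within_tolerance(predicted[flavor]):
        print("chose:", flavor)  # -> chose: vanilla
        break
```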
Very interesting. There’s a lot of truth in what you say. If anyone reading this can link to experiments or even experimental designs that try to figure out when people typically rely on tolerances vs. utilities, I’d greatly appreciate it.
To put it another way, it’s unlikely that you’ll actually weigh price, nutrition, and taste in some sort of unified scoring system.
Y’know, most people probably don’t, and at times I certainly do take actions based entirely on nested tolerance-satisfaction. When I’m consciously aware that I’m making a decision, though, I tend to weigh the utilities, even for a minor choice like ice cream flavor. This may be part of why I felt estranged enough from modern society in the first place to want to participate in a blog like Less Wrong.
Even when we heavily engage our logical minds in search of “optimum” solutions, … each choice is also being checked against a “can I be certain I’ve made the best choice yet?” perception.
OK, so you’ve hit on the behavioral mechanism that helps me decide how much time I want to spend on a decision... 90 seconds or so is usually the upper bound on how much time I will comfortably and casually spend on selecting an ice cream flavor. If I take too much time to decide, then my “overthinking” tolerance is exceeded and alarm bells go off; if I feel too uncertain about my decision, then my “uncertainty” tolerance is exceeded and alarm bells go off; if neither continuing to think about ice cream nor ending my thoughts about ice cream will silence both alarm bells, then I feel stress, broaden my tolerance, and try to avoid the situation in the future, probably by hiring a really good psychotherapist.
But that’s just the criterion for how long to think... not for what to think about. While I’m thinking about ice cream, I really am trying to maximize my ice-cream-related world-state-utility. I suspect that other people, for somewhat more important decisions, e.g., what car shall I buy, behave the same way—it seems a bit cynical to me to say that people make the decision to buy a car because they’ve concluded that their car-buying analysis is sufficiently elegant; they probably buy the car or walk out of the dealership when they’ve concluded that the action will very probably significantly improve their car-related world-state-utility.
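A sketch of how those two roles could coexist: the inner loop genuinely refines utility estimates, while “overthinking” and “uncertainty” tolerances decide when it stops. The 90-second budget comes from the comment above; everything else is invented:

```python
# Hypothetical: deliberation maximizes estimated utility, but two
# tolerance checks (time, uncertainty) govern when it halts.
import random

TIME_BUDGET = 90.0       # seconds; the "overthinking" tolerance
UNCERTAINTY_LIMIT = 0.1  # tolerated spread in the leading estimate
PASS_COST = 10.0         # invented cost of one round of reflection

def deliberate(options, estimate):
    elapsed, samples = 0.0, {o: [] for o in options}
    while True:
        for o in options:  # refine each option's utility estimate
            samples[o].append(estimate(o))
        elapsed += PASS_COST
        best = max(options, key=lambda o: sum(samples[o]) / len(samples[o]))
        spread = max(samples[best]) - min(samples[best])
        if len(samples[best]) > 1 and spread < UNCERTAINTY_LIMIT:
            return best, "confident"   # uncertainty alarm stays silent
        if elapsed >= TIME_BUDGET:
            return best, "time alarm"  # overthinking alarm rang

random.seed(0)
true_utility = {"vanilla": 0.6, "pistachio": 0.9}
noisy = lambda o: true_utility[o] + random.gauss(0, 0.05)
print(deliberate(["vanilla", "pistachio"], noisy))
```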
I really am trying to maximize my ice-cream-related world-state-utility
And how often, while doing this, do you invent new ice cream options in an effort to increase the utility beyond that offered by the available choices?
How many new ice cream flavors have you invented, or decided to ask for mixed together?
So now you say, “Ah, but it would take too long to do those things.” And I say, “Yep, there goes another asynchronous prediction of an exceeded perceptual tolerance.”
“Okay,” you say, “so, I’m a bounded utility calculator.”
“Really? Okay, what scoring system do you use to arrive at a combined rating on all these criteria that you’re using? Do you even know what criteria you’re using?”
Is this utility fungible? I mean, would you eat garlic ice cream if it were free? Would you eat it if they paid you? How much would they need to pay you?
The experimental data says that when it comes to making these estimates, your brain is massively subject to priming and anchoring effects—so your “utility” being some kind of rational calculation is probably illusory to start with.
It seems a bit cynical to me to say that people make the decision to buy a car because they’ve concluded that their car-buying analysis is sufficiently elegant;
I was referring to the perceptions involved in a task like computer programming, not car-buying.
Part of the point is that every task has its own set of regulating perceptions.
they probably buy the car or walk out of the dealership when they’ve concluded that the action will very probably significantly improve their car-related world-state-utility.
They do it when they find a car that leads to an acceptable “satisfaction” level.
Part of my point about things like time, elegance, “best”-ness, etc., though, is that they ALL factor into what “acceptable” means.
“Satisfaction”, in other words, is a semi-prioritized measurement against tolerances on ALL car-buying-related perceptual predictions that get loaded into a person’s “working memory” during the process.
Is this utility fungible? I mean, would you eat garlic ice cream if it were free? Would you eat it if they paid you? How much would they need to pay you?
Aside: I have partaken of the garlic ice-cream, and lo, it is good.
Are you joking? I’m curious!
I’m not joking, either about its existence or its gustatory virtues. I’m trying to remember where the devil I had it; ah yes, these fine folks served it at Taste of Edmonton (a sort of outdoor food-fair with samples from local restaurants).
Theory: you don’t actually enjoy garlic ice cream. You just pretend to in order to send an expensive signal that you are not a vampire.
If I ever encounter it I shall be sure to have a taste!
I’m not going to respond point for point: my interest in whether we make decisions based on tolerances or utilities is waning, because I believe the distinction is largely one of semantics. You might possibly convince me that more than semantics is at stake, but so far your arguments have been of the wrong kind to do so.
Obviously we aren’t rational utility-maximizers in any straightforward early-20th-century sense; there is a large literature on heuristics and biases, and I don’t dispute its validity. Still, there’s no reason that I can see why it must be the case that we exclusively weigh options in terms of tolerances and feedback rather than a (flawed) approach to maximizing utility. Either procedure can be reframed, without loss, in terms of the other, or at least so it seems to me. Your fluid and persuasive and persistent rephrasing of utility in terms of tolerance does not really change my opinion here.
As for ice cream flavors, I find that the ingenuity of chefs in manufacturing new ice cream flavors generally keeps pace with my ability to conceive of new flavors; I have not had to invent recipes for lychee sorbet or honey-mustard ice cream because there are already people out there trying to sell them to me. I often mix multiple flavors, syrups, and toppings. I would be glad to taste garlic ice cream if it were free, but expect that it would be unpleasant enough that I would have to be paid roughly $5 an ounce to eat it, mainly because I am counting calories and would have to cut out other foods that I enjoy more to make room for the garlic. As I’ve already admitted, though, I am probably not a typical example. The fact that my estimate of $5/oz is almost certainly biased, and is made with so little confidence that a better estimate of what you would have to pay me might run anywhere from −$0.50/oz to $30/oz, does not in any way convince me that my attempt to consult my own utility is “illusory.”
Either procedure can be reframed, without loss, in terms of the other, or at least so it seems to me.
It does not seem so to me, unless you recapitulate/encapsulate the tolerance framework into the utility function, at which point the notion of a utility function has become superfluous.
Still, there’s no reason that I can see why it must be the case that we exclusively weigh options in terms of tolerances and feedback rather than a (flawed) approach to maximizing utility.
The point here isn’t that humans can’t do utility-maximization; it’s merely that we don’t, unless we have made it one of our perceptual-tolerance goals. So, in weighing the two models, we have one model of something humans can do in principle (but mostly don’t), and another that models what we mostly do and can also model the flawed version of utility-maximization that we actually perform.
Seems like a slam dunk to me, at least if you’re looking to understand or model humans’ actual preferences with the simplest possible model.
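One way to picture that subsumption, under the same toy assumptions as before: “maximizing” search is just what falls out when a “can I be certain I’ve found the best?” perception is itself one of the tolerances being satisfied:

```python
# Hypothetical: utility-maximizing search as one more tolerance
# check. The agent keeps searching while the perception "can I be
# certain I've made the best choice yet?" is out of tolerance; how
# much evidence counts as "certain" is the agent's own setting.

def search(options, utility, certainty_after=3):
    best, examined = None, 0
    for o in options:
        examined += 1
        if best is None or utility(o) > utility(best):
            best = o
        # The certainty perception: satisfied after enough options
        # have been examined (a stand-in for whatever evidence this
        # agent demands; len(options) would mean exhaustive search).
        if examined >= certainty_after:
            break
    return best

utilities = {"vanilla": 0.6, "chocolate": 0.7, "pistachio": 0.9, "mint": 0.8}
print(search(list(utilities), utilities.get))  # -> pistachio (mint never examined)
```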
does not in any way convince me that my attempt to consult my own utility is “illusory.”
The only thing I’m saying is illusory is the idea that utility is context-independent, and totally ordered without reflection.
(One bit of non-“semantic” relevance here is that we don’t know whether it’s even possible for a superintelligence to compute your “utility” for something without actually running a calculation that amounts to simulating your consciousness! There are vast spaces in all our “utility functions” which are indeterminate until we actually do the computations to disambiguate them.)