But if you adjust the bets for utility, then, if you’re a perfect utilitarian, you should choose the option with the highest expected utility, regardless of the risk involved. Between being sure of getting 10 utilons and having a 0.1 chance of getting 101 utilons (and a 0.9 chance of getting nothing), you should choose to take the bet. Or you’re not rational, says dvasya.
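For concreteness, here is the arithmetic behind that bet, a minimal sketch using the numbers above:

```python
# The numbers from the bet above: a sure 10 utilons vs. a 0.1 chance of 101.
sure_option = 10.0                       # 10 utilons with certainty
bet_expectation = 0.1 * 101 + 0.9 * 0    # expected utility of the risky bet

print(sure_option)      # 10.0
print(bet_expectation)  # 10.1 -- strictly higher, so a perfect
                        # expected-utility maximizer takes the bet
```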
It’s not “or you’re not rational.” It’s “or you haven’t measured your utility function correctly.” If you don’t pick the option with higher expected utility, it’s not actually utility.
We have limited power for making computations. The first problem with taking a risk is that it makes all further computations much harder.
So put that in your utility function. The certainty effect is not always a bias.
There are two problems with that.

The utility function is supposed to contain only terminal values. You’re not supposed to factor instrumental values into your utility function. It’s your optimization algorithm that is supposed to consider instrumental values insofar as they help to maximize utility, but they shouldn’t be part of the utility function themselves.
What you want to “put in your utility function” is… the effect of choices on your ability to estimate and optimize your utility function. That makes the utility function recursive, building a “strange loop to the meta level” between your utility function and the optimization algorithm that is supposed to maximize it. And I don’t see any reason (though maybe there is one) why that recursion should converge and be computable in finite time.
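A caricature of that regress, purely illustrative (the 0.1 “meta” weight is an assumption, not anything from the thread):

```python
def utility(outcome):
    # Terminal value plus the (made-up) effect of this choice on our
    # ability to keep optimizing -- which is defined in terms of utility again.
    return outcome + 0.1 * utility(outcome)

try:
    utility(10)
except RecursionError:
    print("the strange loop never bottomed out")
```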
But (essentially to repeat a point) it would be a bias, since the adjustment is based on risk, whereas it should (everything else being equal) be based on uncertainty (risk multiplied by the length of time the result is unknown). And even if the adjustment were based on the relevant factor, it would still be a bias, because the adjustment should concern not only the time but also the chances that relevant decisions will be required in the interval.
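A minimal sketch of the adjustment being described here (the penalty function and all the numbers in it are illustrative assumptions, not anything from the thread):

```python
def planning_penalty(risk, time_unknown, p_decision_needed):
    """Illustrative cost of carrying an unresolved gamble.

    risk: probability mass on the uncertain branch
    time_unknown: how long the outcome stays unresolved
    p_decision_needed: chance a relevant decision falls in that interval
    """
    uncertainty = risk * time_unknown       # the proposed measure: risk x duration
    return uncertainty * p_decision_needed  # only matters if decisions arise

# A gamble resolved instantly costs nothing to carry:
print(planning_penalty(risk=0.9, time_unknown=0.0, p_decision_needed=0.5))   # 0.0
# The same gamble left unresolved for 10 days, with decisions likely pending:
print(planning_penalty(risk=0.9, time_unknown=10.0, p_decision_needed=0.5))  # 4.5
```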
A separate point: one topic that should be considered in evaluating the argument further is whether other decision problems introduce the same “strange loops.”
The utility function is supposed to contain only terminal values. You’re not supposed to factor instrumental values into your utility function.
Utility functions are typically defined over expected futures. A feature of that future is how many seconds and calories you spent making decisions (and thus not doing other things). And so if a gamble will give you either zero or a hundred calories, but take fifty calories to recompute all of your plans that depend on whether or not you win the gamble, then it’s actually a gamble between −50 and 50 calories, not 0 and 100.
In short, utility functions should take terminal values as inputs, but those terminal values depend on instrumental values, and your utility function should respond to that dependence.
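As a sketch of that accounting (a toy model; the calorie numbers are the ones from the comment above):

```python
# Net outcomes once the cost of replanning is charged to the gamble itself.
recompute_cost = 50  # calories spent redoing plans that depend on the result

raw_outcomes = {"win": 100, "lose": 0}
net_outcomes = {k: v - recompute_cost for k, v in raw_outcomes.items()}

print(net_outcomes)  # {'win': 50, 'lose': -50} -- really a -50 vs. 50 gamble
```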
If you don’t pick the option with higher expected utility, it’s not actually utility.
The point is that we may have utility functions where u(p₁A + p₂B) ≠ p₁u(A) + p₂u(B). That is, the utility of a bet may not be equal to the expected value of the utility of the outcomes.
The point is that we may have utility functions where u(p₁A + p₂B) ≠ p₁u(A) + p₂u(B).
I am well aware. That’s only the case for linear, i.e. ‘risk neutral’, utility functions.
The thing is, utility is defined as the thing you are risk neutral with respect to. If you’re not risk neutral with respect to it, then it’s not utility.
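A sketch of that definitional point, assuming a concave utility over money, u(x) = √x, purely for illustration:

```python
import math

def u(money):
    # An illustrative concave (risk-averse) utility over money.
    return math.sqrt(money)

# A 50/50 gamble between $0 and $100, versus a sure $50:
expected_money   = 0.5 * 0 + 0.5 * 100        # 50.0
expected_utility = 0.5 * u(0) + 0.5 * u(100)  # 5.0

print(u(expected_money))  # ~7.07: the sure $50 is worth more utility...
print(expected_utility)   # 5.0: ...than the gamble, so you're risk-averse in money.

# But measured in utilons you are risk-neutral by construction: a 50/50
# gamble between 0 and 10 utilons is worth exactly 5 utilons in expectation.
# If you'd still refuse that gamble at 5 utilons for sure, then u wasn't
# really your utility function.
```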