Downvoted. I’m sorry to be so critical, but this is the prototypical LW mischaracterization of utility functions. I’m not sure where this comes from, when the VNM theorem gets so many mentions on LW.
A utility function is, by definition, that which the corresponding rational agent maximizes the expectation of, by choosing among its possible actions. It is not “optimal as the number of bets you take approaches infinity”: first, it is not ‘optimal’ in any reasonable sense of the word, as it is simply an encoding of the actions which a rational agent would take in hypothetical scenarios; and second, it has nothing to do with repeated actions or bets.
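For concreteness, one standard way to write this (lotteries L and M over a set of outcomes O, with ⪰ the agent's preference relation):

```latex
% The VNM representation: preferences between lotteries are recovered by
% comparing expectations of a single function u over outcomes.
\[
  L \succeq M
  \iff
  \mathbb{E}_{o \sim L}\!\bigl[u(o)\bigr] \;\ge\; \mathbb{E}_{o \sim M}\!\bigl[u(o)\bigr],
  \qquad u : O \to \mathbb{R},
\]
\[
  \text{with } u \text{ unique only up to positive affine transformations } u \mapsto a u + b,\ a > 0.
\]
```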
Humans do not have utility functions. We do not exhibit the level of counterfactual self-consistency that is required by a utility function.
The term “utility” used in discussions of utilitarianism is generally vaguely-defined and is almost never equivalent to the “utility” used in game theory and related fields. I suspect that is the source of this never-ending misconception about the nature of utility functions.
Yes, it is common, especially on LW and in discussions of utilitarianism, to use the term “utility” loosely, but don’t conflate that with utility functions by creating a chimera with properties from each. If the “utility” that you want to talk about is vaguely-defined (e.g., if it depends on some account of subjective preferences, rather than on definite actions under counterfactual scenarios), then it probably lacks all of the useful mathematical properties of utility functions, and its expectation is no longer meaningful.
I’m not sure where this comes from, when the VNM theorem gets so many mentions on LW.
I understand the VNM theorem. I’m objecting to it.
A utility function is, by definition, that which the corresponding rational agent maximizes the expectation of
If you want to argue “by definition”, then yes, according to your definition utility functions can’t be used in anything other than expected utility. I’m saying that’s silly.
simply an encoding of the actions which a rational agent would take in hypothetical scenarios
Not all rational agents, as my post demonstrates. An agent that maximizes the median would not be describable by any utility function maximized in expectation. I showed how to generalize this to describe more kinds of rational agents; regular expected utility becomes a special case of this system. I think generalizing existing ideas and mathematics is sometimes a desirable thing.
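A toy sketch of that point (my own made-up lotteries and numbers, not the ones from the post): the two decision rules can disagree on the very same assignment of values, which is the minimal ingredient of the argument.

```python
import statistics

# Two toy lotteries over subjective "value", as (value, probability) pairs.
# Lottery A: a sure 1. Lottery B: 1000 with probability 0.4, else 0.
lottery_a = [(1.0, 1.0)]
lottery_b = [(1000.0, 0.4), (0.0, 0.6)]

def expected_value(lottery):
    return sum(v * p for v, p in lottery)

def median_value(lottery, samples=100001):
    # Crude median: expand the distribution into a weighted list of outcomes.
    expanded = []
    for v, p in lottery:
        expanded += [v] * round(p * samples)
    return statistics.median(expanded)

for name, lot in [("A", lottery_a), ("B", lottery_b)]:
    print(name, "expectation:", expected_value(lot), "median:", median_value(lot))

# An expectation maximizer picks B (400 > 1); a median maximizer picks A (1 > 0).
# The two decision rules come apart on the same assignment of values.
```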
It is not “optimal as the number of bets you take approaches infinity”
Yes, it is. If you assign some subjective “value” to different outcomes, then maximizing its expectation will maximize the total value you accumulate as the number of decisions approaches infinity. For every bet I lose at certain odds, I will gain more from others some predictable percentage of the time. On average it cancels out.
This might not be the standard way of explaining expected utility, but it’s very simple and intuitive, and shows exactly where the problem is. It’s certainly sufficient for the explanation in my post.
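A quick simulation of that repeated-bets intuition, with a made-up bet (win 10 with probability 0.6, lose 10 otherwise; expectation +2):

```python
import random

random.seed(0)

# A made-up bet: win 10 with probability 0.6, lose 10 otherwise.
# Its expectation is 0.6 * 10 + 0.4 * (-10) = 2.
def bet():
    return 10 if random.random() < 0.6 else -10

for n in (10, 1_000, 100_000):
    average = sum(bet() for _ in range(n)) / n
    print(f"average payoff over {n} bets: {average:+.2f}")

# As n grows, the average payoff settles near the expectation of +2: the
# "losses and wins cancel out on average" intuition above. The dispute in
# this thread is about decisions that are not repeated many times, where
# this convergence argument gives no guarantee.
```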
Humans do not have utility functions. We do not exhibit the level of counterfactual self-consistency that is required by a utility function.
That’s quite irrelevant. Sure, humans are irrational and behave inconsistently in counterfactual situations. We should strive to be more consistent, though. We should strive to figure out the utility function that best represents what we want. And if we program an AI, we certainly want it to behave consistently.
Yes, it is common, especially on LW and in discussions of utilitarianism, to use the term “utility” loosely, but don’t conflate that with utility functions by creating a chimera with properties from each. If the “utility” that you want to talk about is vaguely-defined (e.g., if it depends on some account of subjective preferences, rather than on definite actions under counterfactual scenarios), then it probably lacks all of the useful mathematical properties of utility functions, and its expectation is no longer meaningful.
Again, back to arguing by definition. I don’t care what the definition of “utility” is. If it would please you to use a different word, then we can do so. Maybe “value function” or something. I’m trying to come up with a system that will tell us what decisions we should make, or program an AI to make. One that fits our behavior and preferences the best. One that is consistent and converges to some answer given a reasonable prior.
You haven’t made any arguments against my idea or my criticisms of expected utility. It’s just pedantry about the definition of a word, when its meaning in this context is pretty clear.
You’re missing VincentYu’s point, which is also a point I have made to you earlier: the utility function in the conclusion of the VNM theorem is not the same as a utility function that you came up with in a completely different way, like by declaring linearity with respect to number of lives.
If you assign some subjective “value” to different outcomes, then maximizing its expectation will maximize the total value you accumulate as the number of decisions approaches infinity. For every bet I lose at certain odds, I will gain more from others some predictable percentage of the time. On average it cancels out.
This might not be the standard way of explaining expected utility, but it’s very simple and intuitive, and shows exactly where the problem is. It’s certainly sufficient for the explanation in my post.
This is an absurd strawman that has absolutely nothing to do with the motivation for EU maximization.
You’re missing VincentYu’s point, which is also a point I have made to you earlier: the utility function in the conclusion of the VNM theorem is not the same as a utility function that you came up with in a completely different way, like by declaring linearity with respect to number of lives.
I discussed this in my post. I know VNM is indifferent to what utility function you use. I know the utility function doesn’t have to be linear. But I showed that no transformation of it fixes the problems or produces the behavior we want.
This is an absurd strawman that has absolutely nothing to do with the motivation for EU maximization.
It’s not a strawman! I know there are multiple ways of deriving EU. If you derive it a different way, that’s fine. It doesn’t affect any of my arguments whatsoever.
But I showed that no transformation of it fixes the problems or produces the behavior we want.
No, you only tried two: linearity, and a bound that’s way too low.
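To spell out what those two attempts look like, here is a toy comparison (made-up numbers, not the figures from the post): a linear utility over lives and a bounded one, u(x) = 1 − exp(−x/scale), evaluated on a single gamble.

```python
import math

# Two candidate utility functions over a quantity x (think: lives saved):
# a linear one, and a bounded one u(x) = 1 - exp(-x / scale).
def linear(x):
    return x

def bounded(x, scale):
    return 1.0 - math.exp(-x / scale)

def expected_utility(lottery, u):
    return sum(p * u(x) for x, p in lottery)

# Made-up choice: a 50% chance of saving 1e9 lives vs. saving 3e8 for sure.
gamble = [(1e9, 0.5), (0.0, 0.5)]
sure_thing = [(3e8, 1.0)]

candidates = [
    ("linear", linear),
    ("bounded, scale = 1e6", lambda x: bounded(x, 1e6)),
    ("bounded, scale = 1e9", lambda x: bounded(x, 1e9)),
]
for label, u in candidates:
    choice = "gamble" if expected_utility(gamble, u) > expected_utility(sure_thing, u) else "sure thing"
    print(f"{label}: prefers the {choice}")

# Whether the bounded utility endorses the gamble depends entirely on where
# the bound starts to bite, which is what the "bound that's way too low"
# remark is about.
```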
It’s not a strawman! I know there are multiple ways of deriving EU. If you derive it a different way, that’s fine. It doesn’t affect any of my arguments whatsoever.
You picked a possible defense of EU maximization that no one ever uses to defend EU maximization, because it is stupid and therefore easy for you to criticize. That’s what a strawman is. You use your argument against this strawman to criticize EU maximization without addressing the real motivations behind it, so it absolutely does affect your arguments.
“Utility” is a word you use with wild abandon. If you want to communicate (as opposed to just spilling a mind dump onto a page), you should care, because otherwise people will not understand what you are trying to say.
I’m trying to come up with a system that will tell us what decisions we should make
There are a lot of those, starting with WWJD and ending with emulating nature that is red in tooth and claw. The question is on what basis you will prefer one system over another.
“Utility” is a word you use with wild abandon. If you want to communicate (as opposed to just spilling a mind dump onto a page), you should care, because otherwise people will not understand what you are trying to say.
Everyone except VincentYu seems to understand what I’m saying. I do not understand where people are getting confused. The word “utility” has more meanings than “that thing which is produced by the VNM axioms”.
The question is on what basis you will prefer one system over another.
The preference should be to what extent it would make the same decisions you would. This post was to argue that expected utility doesn’t and cannot do that, and to show some alternatives which might.
I do not understand where people are getting confused.
I just told you.
If you want to understand where people are getting confused, perhaps you should listen to them.
The preference should be to what extent it would make the same decisions you would.
Huh? First, why would I need a system to make the same decisions I’m going to make by default? Second, who is that “you”? For particular values of “you”, building a system that replicates the preferences of that specific individual is going to be a really bad idea.
Building a reasonably comprehensible system that replicates the preferences of a specific individual could at least be somewhat enlightening.
Houshalter clearly wants not a descriptive, but a normative system.
You say you are rejecting Von Neumann utility theory. Which axiom are you rejecting?
https://en.wikipedia.org/wiki/Von_Neumann–Morgenstern_utility_theorem#The_axioms
The last time this came up, the answer given was, as pointed out there, not one of the axioms.
The axiom of independence. I did mention this in the post.
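For reference, one common statement of that axiom (for lotteries L, M, N):

```latex
% Independence: mixing both sides with an arbitrary third lottery N, in the
% same proportion, preserves the preference.
\[
  L \succ M
  \;\Longleftrightarrow\;
  pL + (1 - p)N \;\succ\; pM + (1 - p)N
  \qquad \text{for every lottery } N \text{ and every } p \in (0, 1].
\]
```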