Expected performance is what rational agents are actually maximising.
Does that mean that I should mechanically overwrite my beliefs about the chance of a lottery ticket winning, in order to maximize my expectation of the payout? As Nesov says, rationality is about utility, which is why a rational agent in fact maximizes their expectation of utility while trying to maximize utility (not their expectation of utility!).
It may help to understand this and some of the conversations below if you realize that the word “try” behaves a lot like “quotation marks” and that having an extra “pair” of quotation “marks” can really make “your” sentences seem a bit odd.
I’m not sure I get this at all.
I offer you a bet: I’ll toss a coin and give you £100 if it comes up heads; you give me £50 if it comes up tails. Presumably you take the bet, right? Because your expected return is £25 (0.5 × £100 − 0.5 × £50). Surely this is the sense in which rationalists maximise expected utility. We don’t mean “the amount of utility they expect to win”, but expectation in the technical sense, i.e. the sum, over the possible events, of the probability of each event multiplied by the utility of the universe in which that event happens (or, more properly, an integral...).
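A quick sanity check of that arithmetic, sketched in Python (treating utility as simply equal to the cash amounts, which is an assumption the comment doesn’t spell out):

```python
# Expected value of the coin-flip bet: win £100 on heads, lose £50 on tails.
outcomes = [
    (0.5, 100.0),  # heads: you receive £100
    (0.5, -50.0),  # tails: you pay £50
]

# "Expectation in the technical sense": sum of probability * payoff over the outcomes.
expected_return = sum(p * payoff for p, payoff in outcomes)
print(f"Expected return: £{expected_return:.2f}")  # £25.00, so taking the bet looks good
```

For continuous payoffs that finite sum becomes the integral the comment alludes to.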
If you expect to lose £50 and you are wrong, that doesn’t actually say anything about the expectation of your winnings.
It does, however, say something about your expectation of your winnings. Expectation can be very knowledge-dependent. Let’s say someone rolls two six-sided dice, and then offers you a bet where you win $100 if the sum of the dice is less than 5, but lose $10 if the sum is greater than 5. You might perform various calculations to determine the expected value of accepting the bet. But if I happen to peek and see that one of the dice has landed on 6, then I will calculate a different expected value than you will.
So we have different expected values for calculating the bet, because we have different information.
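To make the knowledge-dependence concrete, here is a minimal sketch of that dice bet (assuming a sum of exactly 5 is a push, since the comment doesn’t say what happens in that case):

```python
from fractions import Fraction
from itertools import product

def payoff(total):
    # Win $100 if the sum is under 5, lose $10 if it is over 5.
    # A sum of exactly 5 is treated as a push -- an assumption, since the
    # original comment doesn't specify that case.
    if total < 5:
        return 100
    if total > 5:
        return -10
    return 0

# Your expectation: both dice unknown, 36 equally likely outcomes.
yours = sum(Fraction(1, 36) * payoff(a + b) for a, b in product(range(1, 7), repeat=2))

# My expectation after peeking and seeing one die is a 6: only the other die is
# unknown, and the sum is now at least 7, so I can only lose.
mine = sum(Fraction(1, 6) * payoff(6 + b) for b in range(1, 7))

print(f"Your expected value: ${float(yours):.2f}")  # about $9.44 per bet
print(f"My expected value:   ${float(mine):.2f}")   # a sure loss of $10
```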
So EY’s point is that if a rational agent’s only purpose were to maximize (their) expected utility, they could easily do this by selectively ignoring information, so that their calculations turn out a specific way.
But rational agents are not actually interested in maximizing (their) expected utility. They are interested in maximizing real utility. It’s impossible to do this without perfect information, though, and so what agents end up doing is maximizing expected utility, even while they are trying to maximize real utility.
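A toy numerical illustration of the lottery question at the top of the thread (all numbers here are invented for the example): an agent that overwrites its belief about the odds inflates its expectation of the payout, but the realized average payoff depends only on the real odds.

```python
import random

# Hypothetical numbers, chosen purely for illustration.
TICKET_PRICE = 2.0
JACKPOT = 10_000_000.0
TRUE_WIN_PROB = 1e-7     # the real odds of the ticket winning
DELUDED_WIN_PROB = 0.5   # the belief after "mechanically overwriting" it

def expected_profit(win_prob):
    """Expected profit of one ticket, as computed under a given belief about the odds."""
    return win_prob * JACKPOT - TICKET_PRICE

def average_realized_profit(n_tickets, rng=random.Random(0)):
    """Average profit per ticket when the draws actually happen: reality uses the true odds."""
    winnings = sum(JACKPOT if rng.random() < TRUE_WIN_PROB else 0.0 for _ in range(n_tickets))
    return winnings / n_tickets - TICKET_PRICE

print(expected_profit(TRUE_WIN_PROB))      # -1.0: the honest expectation is negative
print(expected_profit(DELUDED_WIN_PROB))   # 4999998.0: the deluded expectation looks wonderful
print(average_realized_profit(1_000_000))  # roughly -2.0: belief doesn't change what you actually win
```

The deluded agent has maximized its expectation of the payout and gained nothing in real utility, which is the sense in which maximizing “your expectation of utility” can’t be the actual goal.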
It’s like taking a history exam in school. I am trying to achieve 100% on the exam, but end up achieving only 60% because I have imperfect information. My goal wasn’t 60%, it was 100%. But the actual actions I took (the answers I selected) led me to arrive at 60% instead of my true goal.
Rational agents are trying to maximize real utility, but end up maximizing expected utility (by definition), even though that’s not their true goal.
Re: Does that mean that I should mechanically overwrite my beliefs about the chance of a lottery ticket winning, in order to maximize my expectation of the payout?
No, it doesn’t. It means that the process going on in the brains of intelligent agents can be well modelled as calculating expected utilities—and then selecting the action that corresponds to the largest one.
Intelligent agents are better modelled as Expected Utility Maximisers than Utility Maximisers. Whether they actually maximise utility depends on whether they are in an environment where their expectations pan out.
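In code, the kind of process being described might be sketched like this (a minimal toy model, not anyone’s actual decision algorithm; the actions and outcome distributions are hypothetical):

```python
from typing import Dict, List, Tuple

# An action is modelled as the agent's beliefs about what could happen if it takes
# that action: a list of (probability, utility) pairs.
Outcome = Tuple[float, float]

def expected_utility(outcomes: List[Outcome]) -> float:
    return sum(p * u for p, u in outcomes)

def choose_action(actions: Dict[str, List[Outcome]]) -> str:
    """Select the action whose expected utility is largest."""
    return max(actions, key=lambda name: expected_utility(actions[name]))

# The coin-flip bet from earlier in the thread, with utility equal to the cash amounts.
actions = {
    "take the bet":    [(0.5, 100.0), (0.5, -50.0)],  # expected utility +25
    "decline the bet": [(1.0, 0.0)],                   # expected utility 0
}
print(choose_action(actions))  # -> "take the bet"
```

Note that nothing in `choose_action` looks at what actually happens; it only looks at the agent’s own probabilities, which is the sense in which it is an Expected Utility Maximiser rather than a Utility Maximiser.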
Re: Intelligent agents are better modelled as Expected Utility Maximisers than Utility Maximisers.
By definition, intelligent agents want to maximize total utility. In the absence of perfect knowledge, they act on expected utility calculations. Is this not a meaningful distinction?
Re: Does that mean that I should mechanically overwrite my beliefs about the chance of a lottery ticket winning, in order to maximize my expectation of the payout?
No, it doesn’t. It means that the process going on in intelligent agents’ heads can be accurately modelled as calculating expected utilities—and then selecting the action that corresponds to the largest of these.
Agents are better modelled as Expected Utility Maximisers than as Utility Maximisers. Whether an Expected Utility Maximiser actually maximises utility depends on whether it is in an environment where its expectations pan out.