Expected performance is what rational agents are actually maximising.
Whether that corresponds to actual performance depends on what their expectations are. What their expectations are typically depends on their history—and the past is not necessarily a good guide to the future.
Highly rational agents can still lose. Rational actions (those that follow the laws of induction and deduction, applied to the agent’s sense data) are not necessarily the actions that win.
Rational agents try to win—and base their efforts on their expectations. Whether they actually win depends on whether their expectations are correct. In my view, attempts to link rationality directly to “winning” miss the distinction between actual and expected utility.
There are reasons for associations between expected performance and actual performance. Indeed, those associations are why agents have the expectations they do. However, the association is statistical in nature.
Dissect the brain of a rational agent, and it is its expected utility that is being maximised. Its actual utility is usually not something that is completely under its control.
It’s important not to define the “rational action” as “the action that wins”. Whether an action is rational or not should be a function of an agent’s sense data up to that point—and should not vary depending on environmental factors which the agent knows nothing about. Otherwise, the rationality of an action is not properly defined from an agent’s point of view.
I don’t think that the excuses humans use for failures are an issue here.
Behaving rationally is not the only virtue needed for success. For example, you also need to enter situations with appropriate priors.
Only if you want rationality to be the sole virtue should “but I was behaving rationally” be the ultimate defense against an inquisition.
Rationality is good, but to win, you also need effort, persistence, good priors, etc.—and it would be very, very bad form to attempt to bundle all of those into the notion of being “rational”.
Does that mean that I should mechanically overwrite my beliefs about the chance of a lottery ticket winning, in order to maximize my expectation of the payout? As Nesov says, rationality is about utility, which is why a rational agent in fact maximizes their expectation of utility, while trying to maximize utility (not their expectation of utility!).
It may help to understand this and some of the conversations below if you realize that the word “try” behaves a lot like “quotation marks” and that having an extra “pair” of quotation “marks” can really make “your” sentences seem a bit odd.
I’m not sure I get this at all.
I offer you a bet: I’ll toss a coin and give you £100 if it comes up heads; you give me £50 if it comes up tails. Presumably you take the bet, right? Because your expected return is £25. Surely this is the sense in which rationalists maximise expected utility. We don’t mean “the amount of utility they expect to win”, but expectation in the technical sense—i.e., the sum, over the possible events, of each event’s probability multiplied by the utility of the universe in which that event happens (or, more properly, an integral...)
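Spelling out that arithmetic as a probability-weighted sum (a minimal Python sketch; the variable names are just illustrative):

    # Fair coin: win £100 on heads, lose £50 on tails.
    probabilities = {"heads": 0.5, "tails": 0.5}
    payoffs = {"heads": 100, "tails": -50}

    expected_return = sum(probabilities[e] * payoffs[e] for e in payoffs)
    print(expected_return)   # 0.5 * 100 + 0.5 * (-50) = 25.0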
If you expect to lose £50 and you are wrong, that doesn’t actually say anything about the expectation of your winnings.
It does, however, say something about your expectation of your winnings. Expectation can be very knowledge-dependent. Let’s say someone rolls two six-sided dice, and then offers you a bet where you win $100 if the sum of the dice is less than 5, but lose $10 if the sum is greater than 5. You might perform various calculations to determine your expected value of accepting the bet. But if I happen to peek and see that one of the dice has landed on 6, then I will calculate a different expected value than you will.
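For concreteness, a small enumeration of that dice example (a Python sketch; the helper name and the assumption that a sum of exactly 5 pays nothing are mine, since the comment doesn’t specify that case):

    from itertools import product

    def expected_value(rolls):
        """Average payoff over a set of equally likely (die1, die2) outcomes."""
        def payoff(a, b):
            s = a + b
            if s < 5:
                return 100    # win $100 if the sum is less than 5
            if s > 5:
                return -10    # lose $10 if the sum is greater than 5
            return 0          # assumed: a sum of exactly 5 pays nothing
        payoffs = [payoff(a, b) for a, b in rolls]
        return sum(payoffs) / len(payoffs)

    # Your expectation: all 36 rolls are still possible.
    print(expected_value(list(product(range(1, 7), repeat=2))))   # about 9.44, so take the bet
    # My expectation after peeking and seeing one die is a 6: only 6 rolls remain.
    print(expected_value([(6, b) for b in range(1, 7)]))           # -10.0, so decline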
So we have different expected values for the bet, because we have different information.
So EY’s point is that if a rational agent’s only purpose was to maximize (their) expected utility, they could easily do this by selectively ignoring information, so that their calculations turn out a specific way.
But actually rational agents are not interested in maximizing (their) expected utility. They are interested in maximizing real utility. Except it’s impossible to do this without perfect information, and so what agents end up doing is maximizing expected utility, although they are trying to maximize real utility.
It’s like if I’m taking a history exam in school. I am trying to achieve 100% on the exam, but end up instead achieving only 60% because I have imperfect information. My goal wasn’t 60%, it was 100%. But the actual actions I took (the answers I selected) led me to 60% instead of my true goal.
Rational agents are trying to maximize real utility, but end up maximizing expected utility (by definition), even though that’s not their true goal.
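One way to see how “trying to maximize real utility but ending up maximizing expected utility” cashes out is a toy sketch like the following (the actions, beliefs and numbers are invented): the decision rule only ever consults the agent’s own probability estimates, so expected utility is what it literally maximises, while the realised utility is set by an environment it never sees.

    def choose(actions, beliefs, utility):
        """Pick the action with the highest expected utility under the agent's beliefs."""
        def expected_utility(action):
            return sum(p * utility[outcome] for outcome, p in beliefs[action].items())
        return max(actions, key=expected_utility)

    utility = {"win": 1.0, "lose": 0.0}
    beliefs = {                       # the agent's model, which may be miscalibrated
        "bet":  {"win": 0.6, "lose": 0.4},
        "pass": {"win": 0.0, "lose": 1.0},
    }
    true_odds = {"bet": {"win": 0.1, "lose": 0.9}, "pass": {"win": 0.0, "lose": 1.0}}

    action = choose(["bet", "pass"], beliefs, utility)
    print(action)   # "bet": expected utility 0.6 under the agent's own beliefs
    # What the agent actually gets depends on true_odds, which it never sees:
    print(sum(p * utility[o] for o, p in true_odds[action].items()))   # 0.1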
Re: Does that mean that I should mechanically overwrite my beliefs about the chance of a lottery ticket winning, in order to maximize my expectation of the payout?
No, it doesn’t. It means that the process going on in the brains of intelligent agents can be well modelled as calculating expected utilities—and then selecting the action that corresponds to the largest one.
Intelligent agents are better modelled as Expected Utility Maximisers than Utility Maximisers. Whether they actually maximise utility depends on whether they are in an environment where their expectations pan out.
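As a further illustration of why overwriting beliefs wouldn’t help, a toy lottery simulation (all parameters invented): inflating the believed win probability raises the expectation the agent calculates, but not the average payout the environment actually delivers.

    import random

    random.seed(0)
    TICKET_PRICE, PRIZE, TRUE_P_WIN = 1.0, 100.0, 0.001   # invented lottery parameters

    def calculated_expectation(believed_p_win):
        """The expectation the agent computes from its own belief."""
        return believed_p_win * PRIZE - TICKET_PRICE

    def average_realised_gain(trials=100_000):
        """What buying a ticket every time actually returns, on average."""
        total = 0.0
        for _ in range(trials):
            total += (PRIZE if random.random() < TRUE_P_WIN else 0.0) - TICKET_PRICE
        return total / trials

    print(calculated_expectation(0.001))   # -0.9: the honest belief says don't buy
    print(calculated_expectation(0.5))     # 49.0: the overwritten belief "maximises the expectation"
    print(average_realised_gain())         # roughly -0.9, whatever the agent believes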
By definition, intelligent agents want to maximize total utility. In the absence of perfect knowledge, they act on expected utility calculations. Is this not a meaningful distinction?
I am inclined to argue along exactly the same lines as Tim, though I worry there is something I am missing.