I’m not sure I get this at all.
I offer you a bet: I’ll toss a coin, and give you £100 if it comes up heads; you give me £50 if it comes up tails. Presumably you take the bet, right? Because your expected return is £25. Surely this is the sense in which rationalists maximise expected utility. We don’t mean “the amount of utility they expect to win”, but expectation in the technical sense, i.e. the sum over the possible events of each event’s likelihood multiplied by the agent’s utility in the universe in which that event happens (or, probably more properly, an integral...).
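Spelled out, with \(p_i\) the probability of each possible event and \(U(\omega_i)\) the utility of the world in which that event happens (both just illustrative symbols), the technical sense here is the standard probability-weighted sum:

\[
\mathbb{E}[U] \;=\; \sum_i p_i \, U(\omega_i) \qquad \left(\text{or } \int U \,\mathrm{d}P \text{ in the continuous case}\right),
\]

which for the coin toss gives \(\tfrac{1}{2}(+100) + \tfrac{1}{2}(-50) = +25\) pounds.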
If you expect to lose £50 and you are wrong, that doesn’t actually say anything about the expectation of your winnings.
It does, however, say something about your expectation of your winnings. Expectation can be very knowledge-dependent. Let’s say someone rolls two six-sided dice and then offers you a bet where you win $100 if the sum of the dice is less than 5, but lose $10 if the sum is greater than 5. You might perform various calculations to determine your expected value of accepting the bet. But if I happen to peek and see that one of the dice has landed on 6, then I will calculate a different expected value than you will.
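Here’s a minimal sketch of the two calculations, assuming (since the bet as stated doesn’t say) that a sum of exactly 5 pays nothing:

```python
from itertools import product
from fractions import Fraction

def expected_value(outcomes):
    """Average payoff over equally likely (die1, die2) outcomes."""
    def payoff(total):
        if total < 5:
            return 100   # win
        if total > 5:
            return -10   # lose
        return 0         # assumption: a sum of exactly 5 is a push
    return Fraction(sum(payoff(a + b) for a, b in outcomes), len(outcomes))

# Your expectation: both dice unknown, all 36 outcomes equally likely.
all_rolls = list(product(range(1, 7), repeat=2))
print(expected_value(all_rolls))     # 85/9, i.e. about +9.44

# My expectation after peeking and seeing that one die shows a 6:
# the sum is at least 7, so every remaining outcome loses.
peeked_rolls = [(6, b) for b in range(1, 7)]
print(expected_value(peeked_rolls))  # -10
```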
So we calculate different expected values for the bet, because we have different information.
So EY’s point is that if a rational agent’s only purpose were to maximize (their) expected utility, they could easily do so by selectively ignoring information, so that their calculations come out a particular way.
But rational agents are not actually interested in maximizing (their) expected utility; they are interested in maximizing real utility. Except that it’s impossible to do this without perfect information, and so what agents end up doing is maximizing expected utility, even though they are trying to maximize real utility.
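To see why selectively ignoring information doesn’t buy anything real, here is a rough simulation of the dice bet above (same assumed payoff, with a sum of exactly 5 treated as a push). One agent ignores the peek, so its own calculated expectation is always +85/9 and it always accepts; the other uses the peek and only accepts when the conditional expectation is positive:

```python
import random

def payoff(total):
    if total < 5:
        return 100
    if total > 5:
        return -10
    return 0  # assumption: a sum of exactly 5 is a push

random.seed(0)
trials = 100_000
ignores_peek = 0.0
uses_peek = 0.0

for _ in range(trials):
    a, b = random.randint(1, 6), random.randint(1, 6)
    # Agent 1 ignores the peeked die: its calculated expectation stays at
    # +85/9 per bet, so it always accepts -- but reality pays on the real dice.
    ignores_peek += payoff(a + b)
    # Agent 2 peeks at die `a` and only accepts when the conditional
    # expectation is positive (with a >= 4 the sum can't be below 5).
    if a <= 3:
        uses_peek += payoff(a + b)

print(ignores_peek / trials)  # roughly +9.4: matches the true unconditional EV
print(uses_peek / trials)     # roughly +14.2: using the information wins more
```

Under these assumptions the ignoring agent’s calculated expectation never goes down, but its real winnings are no better for it; the peeking agent sometimes calculates a lower expectation (and declines those bets), and ends up with more actual money. That’s the gap between maximizing the number in your head and maximizing real utility.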
It’s like taking a history exam in school. I am trying to achieve 100% on the exam, but I end up achieving only 60% because I have imperfect information. My goal wasn’t 60%, it was 100%. But the actual actions I took (the answers I selected) led me to arrive at 60% instead of my true goal.
Rational agents are trying to maximize real utility, but end up maximizing expected utility (by definition), even though that’s not their true goal.