I would think an ideal rationalist’s mental state would depend on their prior estimate of their most likely grade, and that, on average, actually looking at the grade should not revise that assessment upward or downward.
Suppose I estimate the probability of a good curve at roughly p = 5/50 = 10%. If there’s a curve, I’ll get an A (utility 4); otherwise a C- (utility 1.7). Suppose further that I need a total utility of at least 2 to enjoy the party, and that the party itself is worth 0.2 utility.
My expected utility from not checking the grade is 0.1 x 4 + 0.9 x 1.7 + 0.2 = 2.13. My actual utility once I’ve checked the grade is 1.7 + 0.2 = 1.9.
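For concreteness, here is the same arithmetic as a tiny Python sketch; the probabilities, utilities, and the way the 0.2 party term is added all follow the numbers above (the variable names are just mine):

```python
# Reproducing the comment's numbers (risk-neutral, party term added as above).
p_curve = 5 / 50          # estimated probability of a good curve
u_A, u_Cminus = 4.0, 1.7  # grade utility with and without the curve
u_party = 0.2             # utility attached to the party

eu_unchecked = p_curve * u_A + (1 - p_curve) * u_Cminus + u_party
print(eu_unchecked)       # ~2.13, expected utility from not checking

u_checked = u_Cminus + u_party
print(u_checked)          # ~1.9, realized utility after seeing the C-
```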
If this expected utility estimate is good, then I should be happy in proportion to it (though I’ll acknowledge up front that I haven’t accounted for the difference between expected utility and the utility of the expected outcome, which amounts to assuming I’m risk-neutral).
Rather than there being a discrete point above which you can enjoy the party and below which you cannot, I would expect the amount you enjoy the party to vary with the grade you got, unless the cutoff reflects some additional consequence of scoring below that grade, one that brings its own utility hit. In that case, your prior expected utility would already incorporate the size of that additional hit times the likelihood of it occurring.
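To make the graded alternative concrete (the functional form and the size of the extra hit here are purely hypothetical, not anything from the original setup):

```python
# Hypothetical graded enjoyment: the 0.2 party bonus ramps in smoothly as grade
# utility rises from 1.7 to 4.0, instead of switching on at a hard cutoff of 2.
def party_enjoyment(grade_utility, lo=1.7, hi=4.0, max_bonus=0.2):
    frac = (grade_utility - lo) / (hi - lo)
    return max_bonus * min(max(frac, 0.0), 1.0)

# A hard cutoff only reappears if scoring below some threshold triggers an
# extra consequence with its own utility hit (the hit size is made up here).
def utility_with_extra_hit(grade_utility, threshold=2.0, hit=0.5):
    return grade_utility - (hit if grade_utility < threshold else 0.0)
```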
Anyway, in any specific case your utility may go up or down when you check your grade, but if you have a perfectly accurate assessment of the probability distribution over your grade, then on average your utility should come out the same whether you check or not.
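A quick toy check of that claim, using the 10%/90% numbers from earlier and setting the party term aside: simulate the grade many times, and the average utility realized after checking converges to the expectation you could have computed beforehand.

```python
import random

# Toy check, ignoring the party term: with an accurate distribution over grades,
# the average utility realized after checking matches the prior expectation.
p_curve, u_A, u_Cminus = 0.1, 4.0, 1.7
prior_expectation = p_curve * u_A + (1 - p_curve) * u_Cminus  # 1.93

random.seed(0)
trials = 100_000
realized = [u_A if random.random() < p_curve else u_Cminus for _ in range(trials)]
print(prior_expectation, sum(realized) / trials)  # both come out around 1.93
```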
In this case, the fact that we know the actual grade is liable to mislead us: it makes any probability distribution whose expected grade utility isn’t 1.7 look wrong, even though 1.7 may not have been the expectation warranted by the data available beforehand.
I considered your point at length. To address your comment, I could apply an ignorance prior to my old model, assigning equal probability to every grade utility between 1.7 and 4.0, discretized if need be. I could make “enjoying the party” a binary output, 1 or 0. I could do lots of other tweaks.
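For what it’s worth, that tweak might look something like this (a sketch only; the ten-point discretization and the cutoff of 2 are my guesses at what’s intended):

```python
# Sketch of the tweaked model: an ignorance prior spread evenly over grade
# utilities between 1.7 and 4.0 (discretized into 10 equally likely values),
# with "enjoying the party" as a binary 1/0 outcome gated on reaching utility 2.
grades = [1.7 + i * (4.0 - 1.7) / 9 for i in range(10)]

def enjoys_party(grade_utility, threshold=2.0):
    return 1 if grade_utility >= threshold else 0

expected_enjoyment = sum(enjoys_party(g) for g in grades) / len(grades)
print(expected_enjoyment)  # fraction of equally likely grades above the cutoff
```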
But the problem here is that everything comes down to whether this model (or any other five-minute model) is good enough to explain my non-rationalist gut feeling, especially without an experiment. And, you know, I’m not about to fail an easy exam in a couple of days just to see what my utility function would do.
Conservation of expected evidence means that, ideally, you can’t expect the introduction of new evidence to shift your expected utility in any particular direction. In practice that probably doesn’t hold, but then, humans aren’t even rough approximations of ideal rationalists.
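Stated a bit more formally (this is just the law of total expectation applied to utility; conservation of expected evidence is the same identity applied to beliefs), for a binary piece of evidence E:

$$\mathbb{E}[U] \;=\; P(E)\,\mathbb{E}[U \mid E] \;+\; P(\neg E)\,\mathbb{E}[U \mid \neg E]$$

So before you look, the expectation of what your expected utility will be after looking already equals your current expected utility; you can’t anticipate which direction the update will go.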