Re-evaluate old beliefs
I’ve noticed that, although people can become more rational, they don’t win noticeably more. We usually recalibrate our self-confidence upward, become more stubborn, and make bigger errors.
Is it possible that the benefit from increasing your prediction accuracy is no greater than the loss incurred from taking riskier bets due to greater self-confidence?
Model 1: Static (or, constant re-evaluation of all beliefs)
For any yes/no question q, person i has probability p_i of being right. Suppose person i always knows the true value of p_i.
Suppose, without loss of generality, that we consider only questions where person i decides the answer is “yes”. (The situation is symmetric for cases where they answer “no”, so we will get the same results considering only the “yes” questions.)
Suppose that, for every question q, there are current odds offered by the world, described by society’s accepted value y for P(q). Society sets the odds so that the expected profit is near zero: the bet costs the player C if q is false, and pays out D if q is true, with yD = (1-y)C. Set D = 1-y, C = y.
Person i then takes the bet for each question q for which i thinks q is true and Dp_i > C(1-p_i), i.e. (1-y)p_i > y(1-p_i), i.e. (1-y)/y > (1-p_i)/p_i, i.e. y < p_i. As they are right with probability p_i, they win a fraction p_i of these bets.
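To make the decision rule concrete, here is a minimal Python sketch (the function name take_bet is mine, purely illustrative) checking that the positive-expected-value condition reduces to y < p_i:

```python
def take_bet(p_i, y):
    """Person i bets when the expected value is positive:
    D*p_i > C*(1 - p_i), with payout D = 1 - y and cost C = y."""
    return (1 - y) * p_i > y * (1 - p_i)

# The inequality reduces to the threshold rule y < p_i:
p_i = 0.7
assert all(take_bet(p_i, y) == (y < p_i) for y in [0.1, 0.3, 0.69, 0.71, 0.9])
```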
Suppose y is distributed uniformly from zero to one. This is the most suspicious assumption.
Person i’s profit F(p_i) is their payouts minus their losses. So F(p_i) is the integral, from y = 0 to p_i, of (1-y)p_i - y(1-p_i) = p_i - y. This integral equals p_i^2/2.
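As a sanity check on that integral, here is a quick Monte Carlo sketch (the trial count is arbitrary):

```python
import random

def model1_profit(p_i, trials=200_000):
    """Monte Carlo estimate of F(p_i): society's odds y ~ Uniform(0, 1);
    person i bets when y < p_i and is right with probability p_i,
    winning D = 1 - y or losing C = y."""
    total = 0.0
    for _ in range(trials):
        y = random.random()
        if y < p_i:
            total += (1 - y) if random.random() < p_i else -y
    return total / trials

for p in (0.5, 0.7, 0.9):
    print(p, round(model1_profit(p), 3), p**2 / 2)  # estimate vs. p_i^2/2
```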
Good! Your winnings should be proportional to the square of your accuracy!
But that doesn’t seem to match what I observe.
Model 2: No re-evaluation of old beliefs
Now suppose that a person improves their accuracy p_i over time, and assumes that p_i is the accuracy of all their beliefs, but doesn’t constantly re-evaluate all old beliefs. In this model, accuracy at time t is simply t (a linear increase in accuracy, which may or may not be realistic, but is simple), rising from p_0 at time t = p_0 to p_i at time t = p_i.
At time t, person i will take all bets with y < t, for all their beliefs, even though some of those beliefs were formed earlier, when person i had less accuracy. Their profit F(i, t) is now the integral, from x = p_0 to t (x representing the time a belief was formed, and hence its probability of being correct), of the integral, from y = 0 to t, of (1-y)x - y(1-x) = x - y. The inner integral evaluates to xt - t^2/2; the outer integral, evaluated at t = p_i, gives p_i p_0 (p_i - p_0)/2, and we divide the whole mess by (p_i - p_0) to normalize for the number of chances person i had to place bets.
Now the expected profit is p_i p_0 / 2. This is only linear in person i’s current accuracy. Perhaps more interesting: if we set p_0 = 0, the expected profit is always zero. The new bets taken on new, more accurate beliefs are exactly balanced by the losses from bets taken at ever-worse odds on old, inaccurate beliefs.
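The same kind of simulation reproduces this (a sketch under the model’s own assumptions: belief-formation times x drawn uniformly from [p_0, p_i], matching the linear-accuracy story):

```python
import random

def model2_profit(p_i, p0, trials=200_000):
    """Monte Carlo estimate of Model 2's normalized profit. Each belief
    was formed at time x ~ Uniform(p0, p_i), when accuracy was x, so it
    is correct with probability x; but the betting threshold uses the
    *current* accuracy, taking every bet with y < p_i."""
    total = 0.0
    for _ in range(trials):
        x = random.uniform(p0, p_i)   # accuracy when the belief was formed
        y = random.random()           # society's odds for this question
        if y < p_i:
            total += (1 - y) if random.random() < x else -y
    return total / trials

print(model2_profit(0.9, 0.5), 0.9 * 0.5 / 2)  # estimate vs. p_i*p_0/2
print(model2_profit(0.9, 0.001))               # ~0 when p_0 ~ 0
```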
p_0 ~ 0 is not as unreasonable as it sounds. This is because we are not presented in life with a random sample of all possible questions. We are presented with questions that have been filtered by other people’s uncertainty about them. Questions on which an answer is commonly accepted get that answer worked into the fabric of society (so that the decision is usually made for you), and then get ignored. I expect for that reason that the average person has p_i ~ 0.5, and some people have p_i << 0.5.
This model is no good, because it doesn’t model the different difficulties of different questions, or relate the odds offered to the difficulty of the question. If the payout on a bet is very high, and person i still wants to take the bet, it means most people disagree with i. That should mean that i is more likely to be wrong on that bet.
Can anyone suggest a good way of doing that? I’m thinking along the lines of using some kind of least-squares error to minimize the total combined errors of the person and of society in placing bets. Or, start over: suppose a distribution of question probabilities, and suppose that both society and person i have some error distribution around them.
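Here is a rough sketch of that second idea, under assumptions I’m choosing purely for illustration: true question probabilities uniform on (0, 1), Gaussian observation noise with scales sigma_i and sigma_soc, and estimates clipped into (0, 1). It is the shape of a model, not a worked-out one:

```python
import random

def clip(v, lo=0.001, hi=0.999):
    return min(max(v, lo), hi)

def noisy_model_profit(sigma_i, sigma_soc, trials=200_000):
    """Each question has a true probability p ~ Uniform(0, 1). Society's
    odds y and person i's estimate q are both noisy observations of p.
    Person i answers 'yes' when q > 0.5, and bets when y < q."""
    total, bets = 0.0, 0
    for _ in range(trials):
        p = random.random()
        y = clip(random.gauss(p, sigma_soc))  # society's odds
        q = clip(random.gauss(p, sigma_i))    # person i's belief
        if q > 0.5 and y < q:
            bets += 1
            total += (1 - y) if random.random() < p else -y
    return total / max(bets, 1)               # profit per bet taken

# Here a very cheap bet (small y) that i still wants to take signals a
# sharp disagreement with society, which is also tracking p.
print(noisy_model_profit(sigma_i=0.1, sigma_soc=0.1))
print(noisy_model_profit(sigma_i=0.2, sigma_soc=0.05))
```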
My intuition is that:
Taking this into account will reduce the expected payoff from quadratic to linear, and obliterate it in the “no re-evaluation” case.
i will lose, or at best break even, from increasing rationality if i models their accuracy rate as the same on all questions, instead of adjusting for the information contained in the odds offered.
I also think there’s a problem with having a uniform distribution for y: it means there are few long-odds bets offered.
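To see how much that assumption matters, the Model 1 simulation can be re-run with any sampler for y; Beta(5, 5), which clusters the odds near even money, is just one illustrative alternative:

```python
import random

def model1_profit_dist(p_i, draw_y, trials=200_000):
    """Model 1 profit per question, with society's odds y drawn from an
    arbitrary distribution supplied as a sampler draw_y()."""
    total = 0.0
    for _ in range(trials):
        y = draw_y()
        if y < p_i:
            total += (1 - y) if random.random() < p_i else -y
    return total / trials

p = 0.8
print(model1_profit_dist(p, random.random))                     # uniform: ~p^2/2
print(model1_profit_dist(p, lambda: random.betavariate(5, 5)))  # odds near 0.5
```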