Adams also frequently hedged his bets and even changed his prediction once the odds for Trump appeared too long to overcome. This is pretty much what you would expect from a charlatan.
Updating on new evidence isn’t the sign of a charlatan; it’s the behavior of a good forecaster.
I agree, but that isn’t what Adams did. Adams first claimed Trump is a master persuader who was virtually certain to win. When Trump was way down in the polls with only weeks left, Adams then switched to predicting a Clinton win, using the Trump controversy du jour as a rationale.
Updating on the evidence would have involved conceding that Trump isn’t actually an expert persuader (or conceding that persuasion skills don’t actually carry that much weight). In other words, he would have had to admit he was wrong. Instead, he acted like the Trump controversy of the time was something completely shocking and that was the only reason Trump was going to lose.
I want to be careful in how I talk about Adams. He definitely didn’t follow the basic guidelines of forecasting methodology, such as assigning clear numerical probabilities to his predictions and tracking them with a Brier (or any other chosen) scoring rule.
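For what I mean by tracking, here is a minimal sketch; the probabilities and outcomes are made up for illustration, not Adams’s actual record:

```python
# Score explicit probabilistic predictions with a Brier score.
# These numbers are hypothetical, purely to show the bookkeeping.
forecasts = [0.98, 0.70, 0.30]   # stated probabilities that each event happens
outcomes  = [1, 1, 0]            # 1 if the event happened, 0 otherwise

brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f}")  # 0 is perfect; 0.25 is what always saying 50% earns
```

Without something like this on the record, there is no way to distinguish a lucky loud prediction from genuine forecasting skill.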
As a result I see two main schools of thought on Adams: the first is that he’s a forecasting oracle; the second is that he’s a total charlatan (as far as I can tell this is the rationalist viewpoint; I know SSC took this view).
I think the rationalist viewpoint is close to right. Take the set of all semi-famous people who did (or could have) speculated on the election, Adams included, and imagine we had tracked all of their predictions (we don’t have that data), knowing that after the fact we would forget everyone who was wrong. Against that base rate, Adams doesn’t look significantly better than chance.
But if Adams (or an abstracted version of Adams’s argument) were correct, it would be because, unlike current polling methods, his approach allows really high-dimensional data to be embedded into the forecast. As of now humans seem to be much better than computers at getting a ‘feel’ for a movement, because it requires using vast amounts of unrelated, unstructured data, which we specifically evolved to do* (I know we don’t have great experiments to determine what we did or didn’t specifically evolve for, so ignore this point if you want).
So, to that extent, current purely model-based election forecasts are at risk of having a severe form of omitted variable bias.
As an example, while the polls are fairly stable, Marine Le Pen is currently at a huge disadvantage: “According to a BVA poll carried out between Oct. 14 and Oct. 19, Le Pen would win between 25 percent and 29 percent of the vote in next April’s first round. If she faces Bordeaux mayor Alain Juppe—the favorite to win the Republicans primary—she’d lose the May 7 run-off by more than 30 percentage points. If it’s former President Nicolas Sarkozy, the margin would be 12 points.”*
And yet PredictIt.org has her at ~40%. There is strong prior information from Brexit/Trump that seems important, but is absent from the polls. It’s almost as if we are predicting how people will change their minds when exposed to a ‘treatment effect’ of right-wing nationalism.
*http://www.bloomberg.com/news/articles/2016-11-16/french-pollsters-spooked-by-trump-but-still-don-t-see-le-pen-win
So, to tie this back to the original post: if you have stronger prior information, such as a strong reason to believe races end up close to 50-50, non-uniform priors, or reason to think omitted variable bias exists, it would make sense to impose structure on the time-variation of the poll-based forecast. I think this set of reasons is why it feels wrong to us when we see predictions swinging so much this far out from an election.
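A minimal sketch of the kind of structure I have in mind; the exponential weighting scheme and all the numbers are purely illustrative, not anyone’s actual model:

```python
import math

def blended_forecast(poll_prob: float, days_to_election: int,
                     prior_prob: float = 0.5, scale: float = 200.0) -> float:
    """Shrink a poll-implied win probability toward a 50-50 prior.

    The further out the election is, the less the poll signal is trusted,
    so the prior gets more weight; `scale` controls how fast trust grows.
    """
    trust = math.exp(-days_to_election / scale)   # ~0 far out, -> 1 on election day
    return trust * poll_prob + (1 - trust) * prior_prob

# Hypothetical numbers: a poll-implied 80% chance reads very differently
# 180 days out versus 5 days out.
print(round(blended_forecast(0.80, days_to_election=180), 3))  # pulled well back toward 0.5
print(round(blended_forecast(0.80, days_to_election=5), 3))    # stays close to the poll signal
```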
Don’t read too much into small bets.
PredictIt puts Le Pen at 40% (now down to 34%), but the much larger Betfair puts her at 22%. Generally you should quote Betfair because it is larger, partly because it doesn’t limit how much individuals can bet. The only advantage of PredictIt is that it is open to Americans, but that is probably only relevant to American elections.
Even Betfair’s prices only represent about a million dollars’ worth of betting. $20k of betting after the American election moved Le Pen up to 40%. I don’t know how long it took to correct that, but it was clearly faster on Betfair than on PredictIt. (And I don’t know whether the market changed its mind or incorporated the new information from the center-right primary.)
Thanks for the insight on the difference between PredictIt and Betfair; I wasn’t aware of this liquidity difference. Although, so long as there is a reasonable amount of liquidity on PredictIt, it’s very strange that the two are not in equilibrium (rough arbitrage sketch at the end of this comment). Do you know if there are any open theories as to why this is?
One thing I notice is a lot of commenters on PredictIt are alt-right/NRx. It seems unlikely, but I wonder if different ideological priors are pushing different prediction markets away from a common equilibrium probability.
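To make the strangeness concrete: taking the two prices quoted above at face value, and ignoring fees, currency risk, and deposit/withdrawal frictions, the gap looks like a textbook arbitrage:

```python
# Prices taken from the comments above; fees and frictions are ignored,
# so treat this as a back-of-the-envelope sketch rather than a trading plan.
predictit_yes = 0.40   # PredictIt: "Le Pen wins" priced at 40 cents
betfair_yes   = 0.22   # Betfair: the same outcome priced at 22 cents

# Buy YES where it is cheap (Betfair) and NO where YES is expensive (PredictIt).
cost_per_contract = betfair_yes + (1 - predictit_yes)   # 0.22 + 0.60 = 0.82
payout = 1.0   # exactly one of the two positions pays out, whichever way it goes
print(f"locked-in profit per $1 of payout: {payout - cost_per_contract:.2f}")  # ~0.18
```

If traders could move money freely between the two venues, a gap that size should not persist, which is why liquidity and access limits seem like the natural explanation.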
Maybe there isn’t a reasonable amount of liquidity on Predictit. It is now down to 22%, from 34% when I wrote my comment, maybe an hour ago.
Predictit has a time series, but only daily updates. Betfair has a detailed chart without labels on the time axis.
We have the election estimate F a function of a state variable W, a Wiener process WLOG
It’s just masturbation with math notation.
That doesn’t look like a reasonable starting point to me.
Going back to the OP...
the process by which two candidates interact is highly dynamic and strategic with respect to the election date
Sure, but it’s very difficult to model.
it’s actually remarkable that elections are so incredibly close to 50-50
No, it’s not. In a two-party system each party adjusts its platform until it can capture close to 50% of the vote. There is a feedback loop.
When you read Silver (or your preferred reputable election forecaster, I like Andrew Gelman) post their forecasts prior to the election, do you accept them as equal or better than any estimate you could come up with?
I’m an arrogant git, so I accept them as a bit worse :-P To quote an old expression, (historical-)data-driven models are like driving while looking into a rearview mirror. Things will change. In this particular case, the Brexit vote showed that under the right conditions, people who do not normally vote (and so are ignored by historical-data models) will come out of the woodwork.
to know the true answer
Eh, the existence of a “true answer” is doubtful. If you have a random variable, is each instantiation of it a “true answer”? You end up with a lot of true answers...
That’s fine actually: if you assume your forecasts are continuous in time, then they’re continuous martingales and thus equivalent to some time-changed Wiener process. (EDIT: your forecasts need not be continuous, my bad.) The problem is that he doesn’t take the time transformation into account when he claims that you need to weight your signal by 1/sqrt(t).
He also has a typo in his statement of Ito’s Lemma which might affect his derivation. I’ll check his math later.
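For reference, the two textbook statements being invoked here, in their standard forms rather than the OP’s notation, so the claimed typo is easy to check against:

```latex
% Dambis–Dubins–Schwarz: a continuous martingale M is a time-changed Brownian
% motion, where the clock is its quadratic variation <M>_t (the "time
% transformation" mentioned above):
\[
  M_t = B_{\langle M \rangle_t}.
\]
% Ito's lemma for F(t, W_t), with W a standard Wiener process:
\[
  dF = \left( \frac{\partial F}{\partial t}
        + \frac{1}{2}\,\frac{\partial^2 F}{\partial W^2} \right) dt
      + \frac{\partial F}{\partial W}\, dW_t .
\]
```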