Yes, you’re right that it’s not possible to measure everything, so sampling is often used in lieu of direct measurement. I had mentioned sampling in my earlier post.
consensus forecasts are not notably biased or inefficient.
The “efficiency” criterion is more difficult to define, but here it means roughly “makes use of all the available information”—sort of synonymous with rationality.
The meanings of the terms are of course up for debate, and the different papers don’t quite agree on the right meaning.
In cases where the forecasts miss the mark, the error can usually be attributed to insufficient information or to shocks to the economy.
It’s certainly a flaw that they can’t predict shocks, but to the extent that a few shocks explain most forecasting error, that would have different implications than if the forecasts were wrong in all sorts of small ways.
The “insufficient information” refers to the quality of the existing data they have access to. In some cases, people made wrong forecasts because the data on current indicator values that they were working with contained errors or was incomplete (e.g., a particular indicator’s value for a particular month was missing).
The “efficiency” criterion is more difficult to define, but here it means roughly “makes use of all the available information”
How do you know? Or, more explicitly, on the basis of which evidence are you willing to make the claim that consensus macro forecasts “make use of all the available information”?
Besides, just having information is necessary but not sufficient. You also need models that take this information as input and produce the forecasts, and these models can easily be wrong. Is the correctness of the models used included in your definition of efficiency?
It is difficult to conclusively demonstrate efficiency, but it is easy to rule out specific ways that forecasts could be inefficient. That’s what the papers do.
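To make “ruling out specific ways that forecasts could be inefficient” concrete, here is a toy sketch (my own illustration, not taken from the cited papers) of the two standard checks: whether the mean forecast error is distinguishable from zero (bias), and whether errors are predictable from information available at forecast time, here proxied by the previous period’s error (one specific form of inefficiency). All data are synthetic.

```python
# Toy bias/efficiency checks on synthetic forecast data.
# Assumption: "efficient" forecasts have errors that are mean-zero and
# unpredictable from past errors. This is an illustration, not the papers' code.
import numpy as np

rng = np.random.default_rng(0)

n = 200
actual = rng.normal(2.0, 1.0, n)              # e.g., quarterly growth rates
forecast = actual + rng.normal(0.0, 0.5, n)   # unbiased, noisy forecasts
errors = actual - forecast

# Bias check: t-statistic for the hypothesis that mean error is zero.
t_stat = errors.mean() / (errors.std(ddof=1) / np.sqrt(n))

# Efficiency check (one specific form): regress each error on the previous
# error; a slope near zero means past errors don't predict current ones.
x, y = errors[:-1], errors[1:]
slope = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

print(f"mean error: {errors.mean():.3f}, t-stat: {t_stat:.2f}")
print(f"slope of error on lagged error: {slope:.3f}")
```

Failing either check demonstrates a specific inefficiency; passing both only rules those two out, which is the asymmetry described above.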
I’m using the same definitions as used in the literature. The “bias” concept is discussed in the cited papers, plus in my earlier post http://lesswrong.com/lw/k2a/the_usefulness_of_forecasts_and_the_rationality/