(Forecast Object) What is the object that we want to forecast? Is it a time series, such as sales of a firm recorded over time, or an event, such as devaluation of a currency, or something else? Appropriate forecasting strategies depend on the nature of the object being forecast.
(Information Set) On what information will the forecast be based? In a time series environment, for example, are we forecasting one series, several, or thousands? And what is the quantity and quality of the data? Appropriate forecasting strategies depend on the information set, broadly interpreted to include not only quantitative data but also expert opinion, judgment, and accumulated wisdom.
(Model Uncertainty and Improvement) Does our forecasting model match the true DGP (data-generating process)? Of course not. One must never, ever, be so foolish as to be lulled into such a naive belief. All models are false: they are intentional abstractions of a much more complex reality. A model might be useful for certain purposes and poor for others. Models that once worked well may stop working well. One must continually diagnose and assess both empirical performance and consistency with theory. The key is to work continuously toward model improvement.
(Forecast Horizon) What is the forecast horizon of interest, and what determines it? Are we interested, for example, in forecasting one month ahead, one year ahead, or ten years ahead (called h-step-ahead forecasts, in this case for h = 1, h = 12, and h = 120 months)? Appropriate forecasting strategies likely vary with the horizon.
(Structural Change) Are the approximations to reality that we use for forecasting (i.e., our models) stable over time? Generally not. Things can change for a variety of reasons, gradually or abruptly, with obviously important implications for forecasting. Hence we need methods of detecting and adapting to structural change.
(Forecast Statement) How will our forecasts be stated? If, for example, the object to be forecast is a time series, are we interested in a single “best guess” forecast, a “reasonable range” of possible future values that reflects the underlying uncertainty associated with the forecasting problem, or a full probability distribution of possible future values? What are the associated costs and benefits?
(Forecast Presentation) How best to present forecasts? Except in the simplest cases, like a single h-step-ahead point forecast, graphical methods are valuable, not only for forecast presentation but also for forecast construction and evaluation.
(Decision Environment and Loss Function) What is the decision environment in which the forecast will be used? In particular, what decision will the forecast guide? How do we quantify what we mean by a “good” forecast, and in particular, the cost or loss associated with forecast errors of various signs and sizes?
(Model Complexity and the Parsimony Principle) What sorts of models, in terms of complexity, tend to do best for forecasting in business, finance, economics, and government? The phenomena that we model and forecast are often tremendously complex, but it does not necessarily follow that our forecasting models should be complex. Bigger forecasting models are not necessarily better, and indeed, all else equal, smaller models are generally preferable (the “parsimony principle”).
(Unobserved Components) In the leading case of time series, have we successfully modeled trend? Seasonality? Cycles? Some series have all such components, and some do not. They are driven by very different factors, and each should be given serious attention.
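(Not part of the quote: as a note to myself on the h-step-ahead forecasts mentioned under Forecast Horizon, here is a minimal toy sketch. The AR(1) parameters are made up for illustration and nothing below comes from Diebold's book.)

```python
# Toy illustration of h-step-ahead point forecasts from an AR(1) model:
#   y_t = mu + phi * (y_{t-1} - mu) + e_t,
# so the h-step point forecast from the last observation y_T is
#   mu + phi**h * (y_T - mu).
mu, phi = 100.0, 0.8   # assumed long-run mean and AR(1) coefficient (made up)
y_T = 112.0            # last observed value (made up)

for h in (1, 12, 120):  # one month, one year, ten years ahead
    forecast = mu + phi**h * (y_T - mu)
    print(f"h = {h:3d}: forecast = {forecast:.2f}")
```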
3.
Question: How should I measure the long-term civilizational importance of the subject of a forecasting question?
I’ve used the Metaculus API to collect my predictions on open, closed, and resolved questions.
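For reference, the collection step looks roughly like the sketch below. The endpoint path, query parameters, and token header are my assumptions about the public Metaculus API and may need adjusting; YOUR_API_TOKEN is a placeholder.

```python
import requests

# Assumed endpoint and parameters for the public Metaculus API; adjust as needed.
BASE_URL = "https://www.metaculus.com/api2/questions/"
# The token header is only needed for personal prediction data; the exact
# auth scheme is an assumption here.
HEADERS = {"Authorization": "Token YOUR_API_TOKEN"}

def fetch_questions(status="resolved", limit=100):
    """Fetch one page of questions with the given status (open, closed, or resolved)."""
    params = {"status": status, "limit": limit}
    response = requests.get(BASE_URL, headers=HEADERS, params=params)
    response.raise_for_status()
    return response.json().get("results", [])

if __name__ == "__main__":
    for q in fetch_questions("resolved", limit=5):
        print(q.get("id"), q.get("title"))
```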
I would like to organize these predictions; one way I want to do this is by the “civilizational importance” of the forecasting question’s content.
Right now, I’ve thought to give subjective ratings of importance on a logarithmic scale, but I want a more formal system of measurement.
Another idea is, for each question, to give every category a score of 0 (no relevance), 1 (relevant), or 2 (relevant and important). For example, if my categories are "Biology", "Astronomy", "Space_Industry", and "Sports", then the question "Will SpaceX send people to Mars by 2030?" would have this dictionary: {"Biology": 0, "Space_Industry": 2, "Astronomy": 1, "Sports": 0}. I’m unsure whether this system is helpful.
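A rough sketch of what that scheme could look like in code. The weighted-sum aggregation at the end is only one possibility I'm considering (e.g., logarithmic weights per category), not something the scheme itself specifies.

```python
# Relevance scores per the proposed scheme:
# 0 = no relevance, 1 = relevant, 2 = relevant and important.
CATEGORIES = ["Biology", "Astronomy", "Space_Industry", "Sports"]

question_scores = {
    "Will SpaceX send people to Mars by 2030?": {
        "Biology": 0,
        "Space_Industry": 2,
        "Astronomy": 1,
        "Sports": 0,
    },
}

def importance(scores, weights=None):
    """Collapse per-category scores into a single number.

    With no weights this is a plain sum; per-category weights (e.g., on a
    logarithmic scale) are one way to fold in subjective importance ratings.
    """
    weights = weights or {c: 1 for c in CATEGORIES}
    return sum(weights.get(cat, 1) * score for cat, score in scores.items())

for title, scores in question_scores.items():
    print(title, importance(scores))
```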
Does anyone have any thoughts on this?