Forecasting as a tool for teaching the general public to make better judgements?
Tl;dr: We suspect there may be more value in forecasting as a popularized educational activity than the current consensus suggests. We are looking for explicit opinions and feedback.
We are a team working on popularizing evidence- and rationality-based approaches in the Czech Republic, including forecasting. When reviewing our previous and upcoming projects and consulting with others, we came to realize that too little attention seems to be paid to forecasting as a skill-building exercise.
In this sense, we agree with David Althaus and kokotajlod’s older post regarding the mainstreaming of forecasting. While it’s not society’s biggest issue, we identify especially with the second of their points, which can be expanded as follows:
- Forecasting builds an explicit link from its participants to a ground truth which can (with varying success, depending on the context) be objectively established.
- This link can serve as a grounding rod for rationality-building efforts, which could otherwise be viewed as value-neutral.
What’s more, forecasting principles can be used and trained across various domains, meeting people wherever they are most open to learning, from geopolitical questions to very personal ones. Additionally, by applying probabilistic forecasting to personal questions, some important aspects of rationality can be internalized quite effectively. We point to Julia Galef’s approach of satisficing, using subjective criteria and qualitative probability expressions to get past the initial hurdle of not having any predictions at all.
At the same time, this satisficing need not significantly detract from the benefits that could flow from a wider diffusion of forecasting. Whether it be depolarization in politics or laying the groundwork for an epistemic infrastructure that lets us more consciously build shared mental models, the investment necessary to start reaping rewards could be quite small. Most concretely, this prior is based on the results from Chang et al.: we think positive spillovers could be achieved even with modest interventions. In fact, sticking with the previously referenced post, types 13-16 in Julia Galef’s taxonomy alone could have positive societal externalities if taken up more broadly.
Of course, the answer to the implicit question which seems to be rearing its head:
“Why aren’t we trying to make people ‘smarter’?”
is
“Many people are working on that, but it’s not exactly very easy and/or straightforward!”
which is why it’s not the question actually asked here. Rather, our point relates specifically to signposting forecasting tournaments as sites for skill acquisition by the average person, cultivating forecasting skill as a goal in itself. And with that, of course, come the externalities above.
Work is already being done in this area in some ways: we have been, for example, in touch with the Alliance for Decision Education, who have just recently wrapped up their own Student Forecasting Tournament in cooperation with Good Judgment. And while we’ve also piloted some work on forecasting with middle school and high school students, it may be that the skill-acquisition/self-development aspect is relatively neglected for other age groups, where it might be just as able, if not more so,* to promote society-wide objectives.
In our experience, self-selected forecasters are uniquely responsive to a self-development motivation.
Over the last six months, we ran an EA Infrastructure Fund-supported project intended to chart a way forward on the use of judgmental forecasting in policymaking. While we are still writing up a detailed report, exit interviews are confirming one of our preliminary findings: the intensity of participation in the tournament was heavily influenced by whether participants believed they would build or improve their own skills by getting involved.
Of course, there are many types of forecasting projects, and it would hardly be possible or appropriate for all of them to suddenly change their presentation or user experience in line with the above. However, the obstacles seem quite small, and the potential benefits sizable.
Note that we do not see benefits of the kind discussed here as necessarily at odds with the (implicit) end-goal of (most) current forecasting platforms: raising aggregate forecast accuracy. In any case, extremization and weighting based on track records, a long-standing feature on many platforms,** should ensure that even significant influxes of new forecasters do not decrease accuracy.
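For intuition, here is a minimal sketch in Python of how that mechanism could work. This is our own illustration, not any platform’s actual algorithm: the skill weights and the extremization exponent `a` are assumptions chosen purely for the example, loosely following the common log-odds extremization approach.

```python
import numpy as np

def aggregate(probs, weights, a=2.0):
    """Pool forecasts as a track-record-weighted mean of log-odds,
    then extremize by exponent a (a > 1 pushes the pool away from 0.5)."""
    p = np.clip(np.asarray(probs, dtype=float), 1e-6, 1 - 1e-6)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                          # normalize skill weights
    pooled_log_odds = a * np.dot(w, np.log(p / (1 - p)))
    return 1.0 / (1.0 + np.exp(-pooled_log_odds))

# Three proven forecasters around 80%, plus thirty newcomers at a
# noncommittal 50%. With near-zero track-record weights, the newcomers
# barely move the aggregate (it stays near 0.87).
veterans  = [0.80, 0.85, 0.75]
newcomers = [0.50] * 30
print(aggregate(veterans + newcomers, [1.0] * 3 + [0.05] * 30))
```

The point of the toy example is simply that unproven accounts carry little weight until they earn a track record, so onboarding many beginners need not dilute the aggregate.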
How might we move to better understand the social benefits of improved judgment?
Of course, any such move would require resources, leading to the question of what order of magnitude the benefits could reach. For brevity’s sake, we have referred to all the cognitive skills which may be trained through judgmental forecasting as “better judgment”, though for a proper audit this would likely need to be decomposed into its constituent parts: probability calibration, accountability/track records, a greater tendency to view beliefs as probabilistic, etc.
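To illustrate what auditing one such constituent part could look like, here is a minimal calibration check in Python; the function, the binning scheme, and the toy data are our own illustrative assumptions, not a proposed standard.

```python
import numpy as np

def calibration_table(forecasts, outcomes, n_bins=5):
    """Bin forecasts and compare the mean stated probability in each bin
    to the observed frequency of the outcome. A well-calibrated
    forecaster's '70%' events happen about 70% of the time."""
    forecasts = np.asarray(forecasts, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (forecasts >= lo) & (forecasts < hi)
        if mask.any():
            rows.append((forecasts[mask].mean(),   # mean stated probability
                         outcomes[mask].mean(),    # observed frequency
                         int(mask.sum())))
    return rows

# Toy data for a slightly overconfident forecaster.
preds = [0.9, 0.9, 0.9, 0.7, 0.7, 0.3, 0.1]
obs   = [1,   1,   0,   1,   0,   0,   0]
for mean_p, freq, n in calibration_table(preds, obs):
    print(f"stated {mean_p:.2f} -> observed {freq:.2f} (n={n})")
```

The gap between the two columns is one simple, trackable metric of the kind an audit of “better judgment” might decompose into.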
While there are frameworks such as Open Philanthropy’s “bar”, it’s not quite clear to us how best to evaluate the potential benefits and tradeoffs of such projects, especially given the broad scope of the benefits mentioned above. However, the limited informal feedback we have gathered from forecasting community members so far ranges from very skeptical to quite positive. We would therefore like to gather more opinions on the following questions:
1. Do you think building skills should be considered as important as accuracy in forecasting tournaments? Why or why not?
2. Should (some) resources be explicitly dedicated to diffusing forecasting into a wider population?
3. How can we design a framework that would allow us to compare the benefits of influencing the decisions of small groups with the benefits of upskilling large populations?
—
* This is actually one of the things we are quite uncertain about – what are the benefits of cognitively upskilling adults as compared to teenagers? Should we think more in terms of compounding gains, or put a premium on the greater autonomy and resources available to adults?
** A weak assumption here is that forecasting tournaments, rather than prediction markets, might be more welcoming entry points for new forecasters (especially in terms of how below-average forecasts are handled). However, this also draws on our experience with running forecasting in the Czech Republic and might not be universally valid.