I might disagree with your other points. There’s the idea that forecasting is only valuable if it’s decision-relevant, or action-guiding, and so far no forecasting org has solved this problem. But I think this is the wrong bar to beat. Making something action-guiding is really hard—and lots of things which we do think of as important don’t meet this bar.
For example, think of research. Most people at e.g. FHI don’t set out to write documents that will change how Bostrom makes decisions. Rather, they pursue something they’re curious about, or that seems interesting, or just generally important… and mostly they try to have true beliefs, more than to produce impactful actions. They’re doing research, not making decisions.
Most people think essays are important, and that enabling people to write better essays has high impact. But if you pick a random curated LW post and ask what decision was improved as a result, I think you’ll be disappointed (though not as disappointed as with forecasting questions). And this is fine. Decision-making takes in a large number of inputs, considerations, emotions, etc., which influence it in strange, non-linear ways. That’s mostly a fact about human decision-making being complex, rather than a fact about essays being useless.
So I’m thinking that the evidence that should suggest to us that forecasting is valuable is not hearing an impactful person say “I saw forecast X which caused me to change decision Y”, but rather “I saw forecast X which changed my mind about topic Y”. Then, downstream, there might be all sorts of actions which changed as a result, and the forecast-induced mind-change might be one out of a hundred counterfactually important inputs. Yet we shouldn’t impose an isolated demand for rigor that forecasting solve the credit-assignment problem any better than the other 99 inputs.
It seems true that there are a lot of ways to utilize forecasts. In general, forecasting tends to have an implicit and unstated connection to the decision-making process—I think that has to do with the nature of operationalization (“a forecast needs to be on a very specific thing”) and with the fact that much of the popular literature on forecasting comes from the business world (e.g. How to Measure Anything).
That being said, I think action-guidingness is still the correct bar to meet when evaluating the effect forecasting has on the EA community. I would bite the bullet and say blogs should also be held to this standard, as should the research literature. An important question for an EA blog—say, LW :)—is what positive decisions it’s creating (yes, there are many other good things about having a central hub, but if the quality of intellectual content is part of it, that should be trackable).
If in aggregate many forecasts can produce the same kind of guidance as many good blog posts, or better, that would be really positive.
I wonder whether you have any examples, or concrete case studies, of things that were successfully action-guiding for people or organisations? (Beyond forecasts and blog posts, though those are fine too.)
From a 2-minute brainstorm of “info products” I’d expect to be action-guiding:
- Metrics and dashboards reflecting the current state of the organization
- Vision statements (“what do we as an organization do, and thus what things should we consider as part of our strategy?”)
- Trusted advisors
- Market forces (e.g. prices of goods)
One concrete example is from when I worked in a business intelligence role. What executives wanted were extremely trustworthy, reliable data sources for tracking business performance over time. In a software environment (e.g. all the analytics companies constantly posting to Hacker News) that’s trivial, but in a non-software environment it’s very hard. Being able to see whether your last initiative worked was very action-guiding, because if it did, you could put a lot more money into it and scale it up.
I agree with you about the non-decision value of forecasting. My claim is that the decision value of forecasting is neglected, rather than that decisions are the only value. I strongly feel that neglecting the decisions aspect is leaving money on the table. From Ozzie:
My impression is that some groups have found it useful and a lot of businesses don’t know what to do with those numbers. They get a number like 87% and they don’t have ways to directly make that interact with the rest of their system.
I will make a stronger claim and say that the decisions aspect is the highest-value aspect of forecasting. Take the megaproject-management example: Bent Flyvbjerg (of Reference Class Forecasting fame) estimates that megaprojects account for ~8% of global GDP. Time and budget overruns cause huge amounts of waste, and eyeballing his budget-overrun numbers, it looks to me like ~3% of global GDP is waste. I expect the majority of that could be resolved with good forecasting; by comparison with modelling of a different system that tries to address some of the same problems, I’d say about 2/3 of that waste.
So I currently expect that if good forecasting became the norm only in projects of $1B or more, excluding national defense, it would conservatively be worth ~2% of global GDP.
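The arithmetic behind that ~2% figure is simple enough to spell out. This is only a back-of-envelope sketch using the eyeballed numbers above (Flyvbjerg’s ~8% of GDP in megaprojects, ~3% of GDP in overrun waste, ~2/3 of that resolvable), not precise data:

```python
# Back-of-envelope estimate of overrun waste recoverable via good forecasting.
# All inputs are the rough figures quoted in the text, not measured data.

megaproject_share = 0.08     # megaprojects as a fraction of global GDP (Flyvbjerg)
waste_share = 0.03           # estimated overrun waste as a fraction of global GDP
resolvable_fraction = 2 / 3  # portion of that waste good forecasting might avoid

recoverable = waste_share * resolvable_fraction
print(f"Recoverable waste: ~{recoverable:.0%} of global GDP")  # ~2%
```

Nothing here depends on the exact inputs; the point is that even with conservative rounding, the product lands around 2% of global GDP.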
Looking at the war example, we can consider a single catastrophic decision: disbanding the Iraqi military. I expect reasonable forecasting practices would have suggested that when you stop paying a lot of people who possess virtually all of the weaponry, they will have to find other ways to get by. Selling the weapons and their fighting skills, for example. This decision allowed an insurgency to unfold into a full-blown civil war, costing on the order of 10^5 lives, displacing on the order of 10^6 people, and causing moderately intense infrastructure damage.
Returning to the business example from the write-up, if one or more projects were to succeed in delivering this kind of value, I expect a lot more resources would become available for pursuing the true-beliefs aspect of forecasting. I’d go as far as to say it would be a very strong inducement for people who do not currently care about having true beliefs to start doing so, in the most basic big-pile-of-utility sense.
Glad that you found the write-up useful!