I’m not sure I actually understand the distinction between forecasting and foresight. For me, most of the problems you describe sound either like forecasting questions or AI strategy questions that rely on some forecast.
Your two arguments for why foresight is different from forecasting are:
a) some people think forecasting means only long-term predictions and some people think it means only short-term predictions.
My understanding of forecasting is that it is not time-dependent, e.g. I can make forecasts about an hour from now or about a million years from now. This is also how I perceive the EA/AGI-risk community to use the term.
b) foresight looks at the cone of outcomes, not just one.
My understanding of forecasting is that you would optimally want to predict a distribution of outcomes, i.e. the cone but weighted with probabilities. This seems strictly better than predicting the cone without probabilities, since probabilities allow you to prioritize between scenarios.
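To make that concrete, here is a minimal sketch of what I mean by the weighted cone (the scenario names and probabilities below are entirely made up for illustration):

```python
# A bare "cone" of plausible outcomes: just a set of scenarios, nothing more.
cone = ["slow takeoff", "fast takeoff", "no AGI this century", "AGI winter"]

# The same cone weighted with (made-up) probabilities, i.e. a distribution.
weighted_cone = {
    "slow takeoff": 0.45,
    "fast takeoff": 0.20,
    "no AGI this century": 0.25,
    "AGI winter": 0.10,
}

# With probabilities attached, prioritizing between scenarios is trivial:
# sort by weight and spend attention accordingly.
for scenario, p in sorted(weighted_cone.items(), key=lambda kv: -kv[1]):
    print(f"{scenario}: {p:.0%}")
```

The bare cone only tells you what is on the table; the weights are what let you decide where to look first.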
I understand some of the problems you describe, e.g. that people might be missing parts of the distribution when they make predictions and should spread them wider, but I think you can describe these problems entirely within the language of forecasting, and there is no need to introduce a new term.
LMK if I understood the article and your concerns correctly :)
So, my difficulty is that my experience in government and my experience in EA-adjacent spaces have totally confused my understanding of the jargon. I’ll try to clarify:
In the context of my government experience, forecasting explicitly tries to predict what will happen based on past data. It does not fully account for fundamental assumptions that might break due to advances in a field, changes in geopolitics, etc. Forecasts are typically used to inform a single decision; they do not focus on being robust across potential futures or try to identify opportunities we can take to change the future.
In EA / AGI Risk, it seems that people are using “forecasting” to mean something somewhat like foresight, but not quite? Like, if you go on Metaculus, they are making long-term forecasts in a superforecaster mindset, but are perhaps expecting their long-term forecasts to be as good as their short-term forecasts. I don’t mean to sound harsh; what they are doing is useful and can still feed into a robust plan for different scenarios. However, I’d say what is mentioned in reports typically leans a bit more into (what I’d consider) foresight territory.
My hope: instead of only using “forecasts/foresight” to figure out when AGI will happen, we use them to identify risks for the community, potential yellow/red light signals, and golden opportunities where we can effectively implement policies/regulations. In my opinion, a “strategic foresight” approach makes us a lot more prepared for different scenarios (and might even have identified a risk like SBF much sooner).
My understanding of forecasting is that you would optimally want to predict a distribution of outcomes, i.e. the cone but weighted with probabilities. This seems strictly better than predicting the cone without probabilities, since probabilities allow you to prioritize between scenarios.
Yes, in the end, we still need to prioritize based on the plausibility of a scenario.
I understand some of the problems you describe, e.g. that people might be missing parts of the distribution when they make predictions and should spread them wider, but I think you can describe these problems entirely within the language of forecasting, and there is no need to introduce a new term.
Yeah, I care much less about the term/jargon than about the approach. In other words, what I’m hoping to see more of is coming up with a set of scenarios and forecasting across the cone of plausibility (weighted by probability, impact, etc.) so that we can create a robust plan and identify opportunities that improve our odds of success.
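To sketch what I mean in practice (the scenarios, plans, probabilities, and payoffs below are all invented purely for illustration): forecast a weight for each scenario in the cone, then check how each candidate plan holds up across all of them, rather than only in the single most likely future.

```python
# Hypothetical scenario weights from forecasting across the cone.
scenarios = {
    "slow takeoff": 0.45,
    "fast takeoff": 0.20,
    "no AGI this century": 0.25,
    "AGI winter": 0.10,
}

# payoff[plan][scenario]: how well each candidate plan fares in each
# scenario, on an arbitrary made-up scale (higher is better).
payoff = {
    "bet everything on short timelines": {
        "slow takeoff": 2, "fast takeoff": 9,
        "no AGI this century": -5, "AGI winter": -6,
    },
    "diversified policy portfolio": {
        "slow takeoff": 6, "fast takeoff": 5,
        "no AGI this century": 3, "AGI winter": 2,
    },
}

# A robust plan should look reasonable both in expectation
# (probability-weighted) and in its worst-case scenario.
for plan, outcomes in payoff.items():
    expected = sum(scenarios[s] * v for s, v in outcomes.items())
    worst = min(outcomes.values())
    print(f"{plan}: expected={expected:.2f}, worst case={worst}")
```

The point isn’t the numbers; it’s the shape of the exercise: the weights tell you where to prioritize, and looking at the whole cone tells you which plans fail badly in futures you’d otherwise ignore.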
thanks for clarifying