You could test this to some extent by asking the forecasters to predict more complicated causal questions. If they lose most of their edge, then you may be right.
Speaking as a forecaster, my first instinct when someone does that is to go on Google and look up whatever information may be relevant to the question that I’ve been asked.
If I’m not allowed to do that and the subject is not one I’m expert in, then I anticipate doing quite badly; but if I’m allowed a couple of hours of research and the question is grounded enough, I think most of the edge the expert has over me would disappear. The catch is that an expert is really someone who has done that research for their whole field for thousands of hours, but if your only goal is to answer one very specific question, you can often outsource most of the work (e.g. download Sage and run some code to compute the group of rational points of an elliptic curve, rather than learning the underlying number theory yourself).
So I don’t think what you said really tests my claim, since my argument for why understanding is important when you want to have impact is that in the real world it’s prohibitively expensive to set up conditional prediction markets on all the different actions you could take at a given moment every time you need to make a decision. As davidad puts it, “it’s mostly that conditional prediction markets are too atomized/uncompressed/incoherent”.
I still think your experiment would be interesting to run but depending on how it were set up I can imagine not updating on it much even if forecasters do about as well as experts.
It seems like you’re saying that the practical weakness of forecasters vs experts is their inability to make numerous causal forecasts. Personally, I think the causal issue is the main one, whereas you think it is that the required predictions are so numerous. But they are not always numerous—sometimes you can effect big changes by intervening at a few pivot points, such as elections. And the idea that you can avoid dealing with causal interventions by conditioning on every parent is usually not practical, because conditioning on every parent/confounder means that you have to make too many predictions, whereas you can just run one RCT.