The Evidence for Question Decomposition is Weak
Question decomposition appears to be a relatively common forecasting method (see Allyn-Feuer & Sanders 2023, Silver 2016, Kaufman 2011, and Hanson 2011), but there are conceptual arguments against the technique (see Yudkowsky 2017 and Gwern 2019), both of which argue that it reliably underestimates the probability of events.
What is the empirical evidence for decomposition?
Lawrence et al. 2006 summarize the state of the field:
Decomposition methods are designed to improve accuracy by splitting the judgmental task into a series of smaller and cognitively less demanding tasks, and then combining the resulting judgements. Armstrong (2001) distinguishes between decomposition, where the breakdown of the task is multiplicative (e.g. sales forecast=market size forecast×market share forecast), and segmentation, where it is additive (e.g. sales forecast=Northern region forecast+Western region forecast+Central region forecast), but we will use the term for both approaches here. Surprisingly, there has been relatively little research over the last 25 years into the value of decomposition and the conditions under which it is likely to improve accuracy. In only a few cases has the accuracy of forecasts resulting from decomposition been tested against those of control groups making forecasts holistically. One exception is Edmundson (1990) who found that for a time series extrapolation task, obtaining separate estimates of the trend, seasonal and random components and then combining these to obtain forecasts led to greater accuracy than could be obtained from holistic forecasts. Similarly, Webby, O’Connor and Edmundson (2005) showed that, when a time series was disturbed in some periods by several simultaneous special events, accuracy was greater when forecasters were required to make separate estimates for the effect of each event, rather than estimating the combined effects holistically. Armstrong and Collopy (1993) also constructed more accurate forecasts by structuring the selection and weighting of statistical forecasts around the judge’s knowledge of separate factors that influence the trends in time series (causal forces). 
Many other proposals for decomposition methods have been based on an act of faith that breaking down judgmental tasks is bound to improve accuracy or upon the fact that decomposition yields an audit trail and hence a defensible rationale for the forecasts (Abramson & Finizza, 1991; Bunn & Wright, 1991; Flores, Olson, & Wolfe, 1992; Saaty & Vargas, 1991; Salo & Bunn, 1995; Wolfe & Flores, 1990). Yet, as Goodwin and Wright (1993) point out, decomposition is not guaranteed to improve accuracy and may actually reduce it when the decomposed judgements are psychologically more complex or less familiar than holistic judgements, or where the increased number of judgements required by the decomposition induces fatigue.
(Emphasis mine).
The types of decomposition described here seem quite different from the ones used in the sources above: decomposing a time series into components is quite dissimilar to multiplying probabilities for binary predictions, and in combination with the conceptual counter-arguments, the empirical evidence appears quite weak.
It appears that a small team of dedicated forecasters (say, four) could run an experiment to determine whether multiplicative decomposition is a good method for binary forecasts: for each question, spend 20 minutes making either an explicitly decomposed forecast or a control forecast, with the condition assigned at random (although the exact method for the control condition needs to be elaborated). Working in parallel, making 70 forecasts should take less than 6 hours, though it would be useful to first search for more recent literature on the question.
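As a minimal sketch of the assignment step of such an experiment (question labels, condition names, and the seed are all illustrative placeholders, not part of any actual protocol): each of the 70 binary questions is independently randomized to the decomposed or the holistic control condition.

```python
import random

# Hypothetical sketch: randomly assign each of 70 binary forecasting
# questions to either the decomposed-forecast condition or the
# holistic control condition.
random.seed(42)  # fixed seed so the assignment is reproducible

questions = [f"Q{i}" for i in range(1, 71)]
conditions = {q: random.choice(["decomposed", "holistic"]) for q in questions}

n_decomposed = sum(c == "decomposed" for c in conditions.values())
n_holistic = sum(c == "holistic" for c in conditions.values())
print(n_decomposed, n_holistic)  # the two counts sum to 70
```

A refinement would be to stratify the randomization (e.g. balance conditions within each forecaster) rather than assigning fully independently, which keeps the per-condition sample sizes closer to equal.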
You could consider adding this link to your list of conceptual discussions in the first sentence; I found it helpful.
I think Tetlock and company might have already done some related work?
Question decomposition is part of the superforecasting commandments, though I can’t recall off the top of my head if they were RCT’d individually or just as a whole.
ETA: This is the relevant paper (h/t Misha Yagudin). It was not about the 10 commandments. Apparently those haven’t been RCT’d at all?
I don’t remember anything specific from reading their stuff, but that would of course be useful. Sadly, I haven’t been able to find any more recent investigations into decomposition; e.g. Connected Papers for MacGregor 1999 gives nothing worthwhile after 2006 on a first skim, but perhaps I’ll look into it more.
“What is the empirical evidence for decomposition being a technique that improves forecasts?”
I might be misunderstanding here, but I’m fairly confident that the recent history of predicting sports outcomes and developing live betting odds very strongly supports decomposition as a technique (under some conditions).
It seems like the only rational way of predicting the outcome of a multi-stage sports event (like the FIFA World Cup, for example) is to decompose a team’s chances of winning the World Cup into the chances of them winning each of the preceding games (and then applying a K-factor to adjust for recent results).
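Concretely, that decomposition amounts to multiplying the team’s win probability over each successive knockout round. A minimal sketch, with made-up per-round probabilities rather than real betting odds:

```python
# A team's chance of winning a 4-round knockout tournament,
# decomposed into the chance of winning each successive round.
# The per-round probabilities are illustrative, not real odds.
round_win_probs = [0.70, 0.60, 0.50, 0.45]  # quarterfinal path ... final

p_title = 1.0
for p in round_win_probs:
    p_title *= p

print(round(p_title, 4))  # 0.7 * 0.6 * 0.5 * 0.45 = 0.0945
```

Note this treats each round's win probability as conditional on having reached that round; live odds would also update these numbers as results and opponents become known.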
Maybe to clarify: by question decomposition I mean techniques such as saying “X will happen if and only if Y1, Y2, Y3, … all happen, so we estimate P(Y1), P(Y2|Y1), P(Y3|Y1,Y2), etc., and then multiply them together to estimate P(X)=P(Y1)⋅P(Y2|Y1)⋅P(Y3|Y1,Y2)⋅…”, which is how it is done in the sources I linked.
Do you by chance have links about how this is done in sports betting? I’d be interested in that.
I think this is highly confounded with effort. Asking people to decompose a forecast will, on average, cause them to think more. This further calls into question any positive findings for decomposition.
I find this baffling. It seems like breaking predictions into sub-parts should help. But I haven’t thought about it much :)
One possible counteracting factor is structuring people’s judgments artificially. If asking them to break a prediction into sub-parts makes them factor the problem in different ways than they would in their own thinking, I can see how that would hurt their judgments.
And it could actually cost time. Asking sub-questions could cause people to spend their cognitive time on the particulars of those sub-problems, rather than on sub-problems they thought of themselves, which work naturally with their overall strategy for making the prediction.
This seems like a question one shouldn’t be using statistical evidence to form an opinion about. It seems tractable to just grok (and intuify) the theoretical considerations and thus gain a much better understanding of when versus when not to decompose (and with how much granularity, and by which method). Deferring to statistics on it seems liable to distort the model, such that I don’t think a temporary increase in the accuracy of final-stage judgments would be worth it.