The original quote said to rate each intervention by how much it helped or hurt the situation, i.e. its individual-level causal effect. None of those study designs will help you with that: They may be appropriate if you want to estimate the average effect across multiple similar situations, but that is not what you need here.
Yes, I concede that cross-level inferences between the aggregate (the average across multiple similar situations) and individual-level causes have less predictive power than inferences within the same level. However, I reckon it's the best available means to make such an inference.
This is a serious question. How do you plan to rate the effectiveness of things like the decision to intervene in Libya, or the decision not to intervene in Syria, under profound uncertainty about what would have happened if the alternative decision had been made?
Analysts have tools to model and simulate scenarios. Analysis of Competing Hypotheses is a staple of intelligence methodology. It's also used by earth scientists, but I haven't seen it used elsewhere. Based on this approach, analysts can:
make predictions about outcomes in Libya both with and without intervention
once they choose to intervene or not to intervene, observe the actual outcomes
over the long term, by comparing predicted and actual outcomes, they may decide to re-adjust their predictions post hoc for the counterfactual branch
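The three steps above can be sketched as a simple predict/observe/recalibrate loop. This is only an illustration with hypothetical names and a made-up 0-100 outcome scale; the key simplifying assumption (labelled in the code) is that the forecasting error observed on the chosen branch also applies to the counterfactual branch.

```python
from dataclasses import dataclass

@dataclass
class Forecast:
    """Predicted outcome scores for both decision branches (hypothetical 0-100 scale)."""
    with_intervention: float
    without_intervention: float

def recalibrate(forecast: Forecast, chosen: str, actual: float) -> Forecast:
    """Shift the counterfactual branch by the error observed on the chosen branch.

    ASSUMPTION: the forecasting bias is shared across both branches --
    a strong simplification, used here only to illustrate the loop.
    """
    if chosen == "intervene":
        error = actual - forecast.with_intervention
        return Forecast(actual, forecast.without_intervention + error)
    error = actual - forecast.without_intervention
    return Forecast(forecast.with_intervention + error, actual)

# Step 1: predict outcomes under both branches before the decision.
f = Forecast(with_intervention=60.0, without_intervention=40.0)

# Step 2: intervene, then observe the actual outcome (10 points worse than predicted).
# Step 3: re-adjust the counterfactual branch post hoc by the observed error.
f2 = recalibrate(f, chosen="intervene", actual=50.0)
print(f2.without_intervention)  # counterfactual shifted down by the same 10-point error
```

Whether the shared-bias assumption holds in any real case is exactly the kind of uncertainty the question raises; the sketch only shows that the bookkeeping itself is straightforward.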
I'm not trying to downplay the level of uncertainty. My point is just that the methodological considerations remain constant.