Features that make a report especially helpful to me
Cross-post from EA Forum, follow-up to EA needs consultancies.
Below is a list of features that make a report on some research question more helpful to me, along with a list of examples.
I wrote this post for the benefit of individuals and organizations from whom I might commission reports on specific research questions, but others might find it useful as well. Much of what’s below is probably true for Open Philanthropy in general, but I’ve written it in my own voice so that I don’t need to try to represent Open Philanthropy as a whole.
For many projects, some of the features below are not applicable, or not feasible, or (most often) not worth the cost, especially time-cost. But if present, these features make a report more helpful and action-informing to me:
The strongest forms of evidence available on the question were generated/collected. This is central but often highly constrained, e.g. we generally can’t run randomized trials in geopolitics, and major companies won’t share much proprietary data. But before commissioning a report, I’d typically want to know what the strongest evidence is that could in theory be collected, and how much it might cost to gather or produce.
Thoughtful cost-benefit analysis, where relevant.
Strong reasoning transparency throughout, of this particular type. In most cases this might be the most important feature I’m looking for, especially given that many research questions don’t lend themselves to more than 1-3 types of evidence anyway, and all of them are weak. In many cases, especially when I don’t have much prior context and trust built up with the producers of a report, I would like to pay for a report to be pretty “extreme” about reasoning transparency, e.g. possibly:
a footnote or endnote indicating what kind of support nearly every substantive claim has, including lengthy blockquotes of the relevant passages from primary sources (as in a GiveWell intervention report[1]).
explicit probabilities (from authors, experts, or superforecasters) provided for dozens or hundreds of claims and forecasts throughout the report, to indicate degrees of confidence. (Most people don’t have experience giving plausibly-calibrated explicit probabilities for claims, but I’ll often be willing to provide funding for explicit probabilities about some of a report’s claims to be provided by companies that specialize in doing that, e.g. Good Judgment, Metaculus, or Hypermind.)
lots of appendices that lay out more detailed reasoning and evidence for claims that are argued more briefly in the main text of the report, à la my animal consciousness report, which is 83% appendices and endnotes (by word count).
Authors and other major contributors who have undergone special training in calibration and forecasting,[2] e.g. from Hubbard and Good Judgment. This should help a report’s contributors “speak our language” of calibrated probabilities and general Bayesianism, and perhaps improve the accuracy/calibration of the claims in the report itself. I’m typically happy to pay for this training for people working on a project I’ve commissioned.
External reviews of the ~final report, including possibly from experts with different relevant specializations and differing/opposed object-level views. This should be fairly straightforward with sufficient honoraria for reviewers, and sufficient time spent identifying appropriate experts.
Some of the strongest examples of this type of report that I’ve seen are:
GiveWell’s intervention/program reports[3] and top charity reviews.[4]
David Roodman’s evidence reviews, e.g. on microfinance, alcohol taxes, and the effects of incarceration on crime (most of these were written for Open Philanthropy).
Other examples include:
The most-developed topic articles by Our World in Data, e.g. on COVID-19, population growth, life expectancy, and extreme poverty.
Various Open Philanthropy reports that are more speculative and/or less thorough than e.g. Roodman’s reports, for example our “medium-depth” cause reports here, some reports speculating about the future of AI (e.g. 1, 2, 3, 4, 5), and my report speculating about animal consciousness.
A few reports by Rethink Priorities, e.g. on agricultural land redistribution, charter cities, lead exposure, and risks from nuclear weapons.
Some of the systematic reviews by Cochrane and Campbell.
Notes
[1] E.g. see the footnotes at the bottom of this page.
[2] Of course, training in statistics, causal inference, critical evidence assessment, and the specific domains relevant to the report is also important, but that training is more difficult to acquire rapidly.
[3] Click the “(More)” links in the “Description of program” column here.
[4] Click the “Full Research Report” links here.