It seems to me like most of the incentives for supply-side distortion (me over-promoting my pet x-risk) come from personal affiliation and switching costs. Consider someone who spends a decade studying climate change, alternative energy, and so on, and then comes across an analysis suggesting that this was wasted effort (perhaps something like the Copenhagen Consensus report, which explicitly weighs many different options against each other, or an argument that AI is the most important risk facing humanity). If anyone could freely switch from working on x-risk A to working on x-risk B, it would be better to switch to the most important x-risk than to distort the perceived importance of your own.
I agree with you that demand-side actors (funders of x-risk research, the x-risk commentariat, etc.) need to care strongly about seeing clearly, even in the presence of distortion. It’s hard to set up the appropriate incentives, since most of the tools we use to reduce distortion (like tight feedback loops) aren’t available to x-risk as a field (we have a very sparse and uninformative feedback channel). I suspect progress here looks like better modes of communication and argumentation, such that distortions are easier to spot and discourage.