I really think the “You’re just as likely to get results in the opposite direction” argument is, on priors, overstated for most forms of research. Does Scott think that the work we do today is just as likely to decrease our understanding of P/NP as to increase it? We may be a long way from proving an answer, but that’s not a reason to adopt such a strange prior.
I’m doing some work for MIRI looking at the historical track record of predictions of the future and actions taken based on them, and whether such attempts have systematically done as much harm as good.
To this end, among other things, I’ve been reading Nate Silver’s The Signal and the Noise. In Chapter 5, he discusses how attempts to improve earthquake predictions have consistently yielded worse predictive models than the Gutenberg-Richter law. This has slight relevance.
Such examples notwithstanding, my current prior is that MIRI’s FAI research has positive expected value. I don’t think that the expected value of the research is zero or negative – only that it’s not competitive with the best of the other interventions on the table.