Funding SIAI is optimal only if you think that the pursuit of Friendly AI is by far the most important component of existential risk reduction.
But who does the evaluation? It seems better to let specialists think about whether a given cause is important, and they need funding just to get that effort running. This argues for ensuring minimum funding of organizations that research important uncertainties, even the ones your intuitive judgment says probably lead nowhere. Just as most people shouldn’t research FAI themselves, and should instead fund its research, most people also shouldn’t research the feasibility of FAI research themselves, and should instead fund research into that feasibility.
I think you claim too much. If I decided I couldn’t follow the relevant arguments, and wanted to trust a group to research the important uncertainties of existential risk, I’d trust FHI. (They could always decide to fund or partner with SIAI themselves if its optimality became clear.)
My only worry about funding FHI exclusively is that they are primarily philosophical and academic. I’d worry that the default thing they would do with more money is produce more philosophical papers, rather than, say, doing or funding biological research or programming, if that were what was needed.
But as the incentive structures for x-risk reduction organisations go, those of an academic philosophy department aren’t too bad at this stage.
This seems to work as an argument that ensuring minimal funding has much greater marginal worth: it means there is at least someone researching these uncertainties professionally (an improvement on what people off the street can estimate in their spare time), before we even ask about the value of researching the things those uncertainties concern. (Of course, having the evaluation done by the same organization that directly benefits from a given answer is generally a bad idea, which is why FHI might work well in this case.)
The FHI’s pitch: http://www.fhi.ox.ac.uk/get_involved