Answer: clearly, no. If you know all the ways things can go wrong, but don’t know how to make them go right, then your knowledge is useless for anything except worrying.
Thanks for the comment. I will reply as follows:
1. Knowing how things could go wrong gives useful knowledge about scenarios/pathways to avoid.
2. Our knowledge of how to make things go right is not zero.
My intention with the article is to draw attention to some broader non-technical difficulties in implementing FAI. One worrying theme in the responses I’ve gotten is a conflation between knowledge of AGI risk and the building of an FAI. I think they are separate projects, and that the success of the second relies on comprehensive prior knowledge of the first. Apparently MIRI’s approach doesn’t really treat the two as separate.
May I recommend the concept of risk management to you? It’s very useful.
It’s generally easier to gain knowledge of how to make things go right when your research is anchored by potential problems.