Given enough financial resources to actually endow research chairs and make a credible commitment to researchers, and given good enough researchers, I’d definitely focus SIAI more directly on FAI.
I totally understand holding off on hiring research faculty until having more funding, but what would the researchers hypothetically do in the presence of such funding? Does anyone have any ideas for how to do Friendly AI research?
I think (but am not sure) that I would give top priority to FAI if I had the impression that there are viable paths for research that have yet to be explored (ones systematically more likely to reduce x-risk than to increase it), but I haven’t seen a clear argument that this is the case.