It seems to me that Pei’s point is that it is too early to know whether FAI is possible, and further research is needed to determine this. There may come a day when the only prudent course of action is to stop AI research cold in the name of the survival of the human race, but today is not that day.
Agreed. But he never engages with the idea of pointing the field in a different direction, of prioritising certain types of research. He concludes that the field is fine exactly as it is now, and that the researchers should all be left alone.
I think we can detect a strong hint of status quo bias here. Not enough to dismiss his points, but enough to question them. If he’d concluded “we need less safety research than now” or something, I’d have respected his opinion more.
Why would he? According to his views, the field is still in its infancy, and stifling its natural development in any way would be a bad idea.