That is not an argument against investigating Holden’s idea. It is an explanation of why SIAI had not investigated Holden’s idea before Holden presented it (you can tell because it’s in the section titled “Why we haven’t already discussed Holden’s suggestion”). This explanation was given in response to Holden presenting the idea in the course of criticizing SIAI for not having investigated it.
It’s still irrelevant. Other researchers did not find FAI to be an obvious approach, either. Holden is suggesting that SIAI could investigate tool AI as a possible safer approach. Eliezer’s response would make sense only if Holden had been suggesting SIAI should investigate the dangers of tool AI, in order to warn people against it—which is not what Holden was doing.