It’s not clear in what way powerful humans/narrow AI teams would “make SIAI’s work moot”. Controlling the world doesn’t give you insight into what to do with it, or guard against fatal mistakes.
I think Holden is making the point that the work SIAI is trying to do (i.e. sort out all the issues of how to make FAI) might be so much easier to do in the future with the help of advanced narrow AI that it’s not really worth investing a lot into trying to do it now.
Note: for anyone else who’d been wondering about Eliezer’s position on Oracle AI, see here.