I suspect this is the biggest counter-argument against Tool AI, even bigger than all the technical concerns Eliezer raised in the post. Even if we could build a safe Tool AI, somebody would soon build an agent AI anyway.
But assuming that we could build a safe Tool AI, we could use it to build a safer agent AI than one would otherwise be able to build. This is related to Holden's point:
One possible scenario is that at some point, we develop powerful enough non-AGI tools (particularly specialized AIs) that we vastly improve our abilities to consider and prepare for the eventuality of AGI—to the point where any previous theory developed on the subject becomes useless.