I suspect this is the biggest counter-argument against Tool AI, even bigger than all the technical concerns Eliezer raised in the post. Even if we could build a safe Tool AI, somebody would soon build an agent AI anyway.
Thank you for saying this (and backing it up better than I would have). I think we should concede, however, that a similar threat applies to FAI. An arms-race dynamic may produce uFAI before FAI can be ready, which strikes me as very probable. Alternatively, if AI does not “foom”, uFAI might be created after FAI. (I’m mostly persuaded that it will foom, but I still think it’s useful to map the debate.) The one advantage is that if Friendly Agent AI comes first and fooms, the threat is neutralized, whereas Friendly Tool AI can only advise us on how to stop reckless AI researchers. If reckless agent AIs act more rapidly than we can respond, the Tool AI won’t save us.
> Alternatively, if AI does not “foom”, uFAI might be created after FAI.
If uFAI doesn’t “foom” either, they both capture a good chunk of the expected utility. FAI doesn’t need any particular capability; it only has to be competitive with the other AIs that might be built.