You can define a tool as not-an-agent. Then something that can design a calculator counts as a tool, provided it does nothing unless told to.
The problem with such a definition is that it doesn't tell you much about how to build a system with this property. It seems to me that it's the good old corrigibility problem.
If you want one-shot corrigibility, you already have it, in LLMs. If you want some other kind of corrigibility, that's not how tool AI is defined.