I suspect that “tool” versus “agent” is a magical category, and that its use really tells us more about the person deploying it than about the system it’s applied to.
I think the concepts are clear at the extremes, but they tend to get muddled in the middle.
Do you believe that humans are agents? If so, what would you have to do to a human brain in order to turn a human into the other extreme, a clear tool?
I could ask the same about C. elegans. If C. elegans is not an agent, why not? If it is, then what would have to change in order for it to become a tool?
And if these distinctions don’t make sense for humans or C. elegans, then why do you expect them to make sense for future AI systems?
I’d be especially interested in edge cases. Is e.g. Google’s driverless car closer to being an agent than a calculator? If so, and if intelligence is something independent of goals and agency, would adding a “general intelligence module” make Google’s driverless car dangerous? Would it make your calculator dangerous? If so, why would it suddenly care to e.g. take over the world if intelligence is indeed independent of goals and agency?
A cat’s an agent. It has goals it works towards. I’ve seen cats manifest creativity that surprised me.
Why is that surprising? Does anyone think that “agent” implies human level intelligence?
Both your examples are agents currently. A calculator is a tool.
Anyway, I’ve still got a lot more work to do before I seriously discuss this issue.
A driverless car is firmly on the agent side of the fence, by my definitions. Feel free to state your own, anybody.
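To make the two extremes concrete, here is a minimal sketch in Python. It is a toy illustration under my own assumptions, not anyone’s settled definition, and all the names in it (calculator_tool, ThermostatAgent, etc.) are hypothetical: a calculator-style tool is a stateless mapping from query to answer that acts only when invoked, while even a barely-agentic thermostat runs a closed sense-act loop toward a goal.

```python
# Toy sketch of the tool/agent extremes. Hypothetical names, not a formal definition.

def calculator_tool(expression: str) -> float:
    """Tool extreme: produces an answer when asked, then does nothing further."""
    # Restrict eval to basic arithmetic for safety in this toy example.
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        raise ValueError("only basic arithmetic is supported")
    return float(eval(expression))


class ThermostatAgent:
    """Agent extreme (barely): pursues a goal state via a repeated sense-act loop."""

    def __init__(self, target_temp: float):
        self.target_temp = target_temp

    def sense(self, world: dict) -> float:
        return world["temperature"]

    def act(self, world: dict) -> None:
        # Chooses an action based on the gap between the goal and the observation.
        if self.sense(world) < self.target_temp:
            world["temperature"] += 1.0   # "turn heater on"
        else:
            world["temperature"] -= 0.5   # "let it cool"

    def run(self, world: dict, steps: int = 10) -> None:
        for _ in range(steps):
            self.act(world)


if __name__ == "__main__":
    print(calculator_tool("2 * (3 + 4)"))  # 14.0, and then it stops

    world = {"temperature": 15.0}
    ThermostatAgent(target_temp=21.0).run(world)
    print(world["temperature"])  # hovers near the goal after the loop runs
```

The muddled middle cases (driverless cars, C. elegans) are exactly the ones where it is unclear how much of the second pattern, a self-initiated loop closing on a goal, is present, which is why the binary label does so little work there.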