I’d be especially interested in edge cases. Is, e.g., Google’s driverless car closer to being an agent than a calculator? If so, and if intelligence is independent of goals and agency, would adding a “general intelligence module” make Google’s driverless car dangerous? Would it make your calculator dangerous? If so, why would either suddenly care to, e.g., take over the world, if intelligence is indeed independent of goals and agency?
Of your two examples, the driverless car is already an agent; a calculator is a tool.
Anyway, I’ve still got a lot more work to do before I seriously discuss this issue.
A driverless car is firmly on the agent side of the fence, by my definitions. Feel free to state your own, anybody.
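For concreteness, here is a minimal sketch of one way to draw that fence (my own illustration; the names and the toy goal below are assumptions, not anything anyone in this thread has committed to): a tool is a pure query-to-answer mapping, while an agent runs a perceive-decide-act loop that changes its environment in pursuit of a goal.

```python
# Hypothetical sketch: the function, class, and goal below are my own
# illustration of the tool/agent distinction, not anyone's definition.

def calculator(a: float, b: float) -> float:
    """Tool: a pure query -> answer mapping. No goal, no environment."""
    return a + b

class DriverlessCar:
    """Agent: perceives the world, picks actions that serve a goal, acts."""

    def __init__(self, position: float, destination: float):
        self.position = position
        self.destination = destination  # the goal state

    def perceive(self) -> float:
        # Observe the environment: how far from the destination are we?
        return self.destination - self.position

    def decide(self, gap: float) -> float:
        # Choose a bounded action that reduces the gap to the goal.
        return max(-1.0, min(1.0, gap))

    def step(self) -> None:
        # Act on the world, closing the perceive-decide-act loop.
        self.position += self.decide(self.perceive())

print(calculator(2.0, 2.0))            # 4.0, and then the tool is done

car = DriverlessCar(position=0.0, destination=3.0)
while car.perceive() != 0.0:           # the agent keeps acting until
    car.step()                         # its goal is satisfied
print(car.position)                    # 3.0
```

On this reading the difference lives in the control loop and the goal, not in how clever the computation inside either box is.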