That’s pretty much what I meant: machine intelligence as a correctly functioning tool, rather than as an out-of-control system.
Seems to me that you simply refuse to see an AI as an agent. If an AI and a human conquer the world, the only possible interpretation is that the human used the AI, never that the AI used the human. Even if it was all the AI’s idea, that just means the human used the AI as an idea generator. Even if the AI kills the human afterwards, that would just mean the human used the AI incorrectly and thus killed themselves.
Am I right about this?
Er, no—I consider machines to be agents.