I’m not convinced that human+tool is not comparable to an AGI. In fact I don’t understand why you think that. If we created tool AI that could be slotted directly into our brains as an extension of our pre-existing intelligence, wouldn’t we then be superhuman intelligences ourselves? Of course there’s the speed difference to consider, but alternatively we could use a tool AI to upload ourselves and then do that augmentation. What am I missing?
I totally agree that it’s possible to enhance humans with tools. But AIs can use tools too, and by default they will. (Perhaps we could create, and enforce, some sort of special regulation so that AI agents can’t use the tools but humans can.) My claim is not that AI agents more powerful than humans will inevitably appear no matter what we do; after all, the sort of regulation I mentioned above is a possibility. My claim is instead that by default, if we just keep developing more tools of various kinds and more powerful AI agents of various kinds, people will build AI agents with access to (AI versions of) all the latest tools, and those agents will be more powerful than humans, even upgraded humans.
Analogy: Automobiles eventually replaced horses as the dominant form of transportation. This happened even though horses can be upgraded via breeding, horseshoes, and various other enhancements.
Analogy: In chess there was a period when the best AIs were better than the best humans alone, but worse than the best humans with access to tools (such as analysis software based on those very AIs). But now (or so I hear) the AIs are so good that the humans only get in the way; whenever the human looks at their tool’s recommendation and second-guesses it, the human is more likely than not wrong.