There seems to be a core assumption that General Intelligence == Agency. While agentic behavior can emerge from generally intelligent systems (us), we do not know whether it is actually required.
Yeah, it is—just part of the definition. Intelligence is steering the future—which requires both predicting possible futures and then selecting some over others.
GPT is a very powerful AI system, and yet it appears to be systematically myopic and non-agentic. Still, seemingly non-myopic behavior shows up: it "plans" how words fit into sentences, how sentences fit into paragraphs, and how paragraphs fit into stories. All of this occurs despite it only being "concerned" with predicting the next token.
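To make the "myopic yet coherent" point concrete, here is a minimal sketch (a hand-written toy bigram model, not GPT; the vocabulary and probabilities are invented for illustration). Like GPT, each decoding step only scores the single next token given the current context, yet the output still comes out locally well-formed:

```python
import random

# Toy bigram "model": a next-token distribution per current token.
# Values are made up; a real LM would learn these from data.
BIGRAMS = {
    "<s>":  {"the": 0.7, "a": 0.3},
    "the":  {"cat": 0.5, "dog": 0.5},
    "a":    {"cat": 0.5, "dog": 0.5},
    "cat":  {"sat": 0.6, "ran": 0.4},
    "dog":  {"sat": 0.4, "ran": 0.6},
    "sat":  {"down": 1.0},
    "ran":  {"away": 1.0},
    "down": {"</s>": 1.0},
    "away": {"</s>": 1.0},
}

def generate(seed=0):
    """Myopic decoding: each step consults only the current token,
    with no lookahead past the immediate next-token distribution."""
    rng = random.Random(seed)
    token, out = "<s>", []
    while token != "</s>":
        dist = BIGRAMS[token]
        token = rng.choices(list(dist), weights=list(dist.values()))[0]
        if token != "</s>":
            out.append(token)
    return " ".join(out)

# Produces a grammatical sentence like "the cat sat down", even though
# no step ever "planned" the whole sentence.
print(generate())
```

The apparent planning lives in the learned distributions, not in the sampling loop; the loop itself never optimizes beyond one step.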
GPT by itself is completely non-agentic: it is purely a simulator. However, any simulator of humans will necessarily contain agentic simulacra; that is simply required for accurate simulation, but it doesn't make the simulator itself an agent. Humans could, though, use a powerful simulator to enhance their decision making, at which point the human and the simulator together become a cybernetic agent.
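The simulator/agent split above can be sketched in a few lines (an invented toy, not any real model; the policy table and situations are hypothetical). The "simulator" is a pure function that predicts what some agent would do and never acts; agency only appears when a human acts on the prediction:

```python
# Made-up lookup table standing in for a learned model of an expert.
EXPERT_POLICY = {
    "fire alarm": "evacuate",
    "rain": "take umbrella",
}

def simulate_expert(situation):
    """Non-agentic: returns a prediction of the simulated agent's
    behavior, but takes no action and optimizes for nothing."""
    return EXPERT_POLICY.get(situation, "no prediction")

def cybernetic_agent(situation):
    """Human + simulator: the human queries the simulator and then
    chooses to act on its prediction, closing the loop."""
    prediction = simulate_expert(situation)
    return f"human does: {prediction}"

print(cybernetic_agent("rain"))  # prints "human does: take umbrella"
```

The goal-directedness lives in the simulated policy and in the human who uses it, not in `simulate_expert` itself.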
That seems like a really limiting definition of intelligence. Stephen Hawking, even when he was very disabled, was certainly intelligent. However, his ability to be agentic was only possible thanks to the technology he relied on (his wheelchair and his speaking device). If that had been taken away from him, he would no longer have had any ability to alter the future, but he would certainly still have been just as intelligent.
It’s just the difference between potential and actualized.

Perhaps it’s better to avoid the word intelligence, then; the semantics aren’t really important. What matters is that I can imagine a non-agentic simulator, or some other entity, having severely transformative effects, some of which could be catastrophic.