Deeper also means going from outputting the words “Prevent war” in many appropriate linguistic contexts to preventing war in the actual real world.[1]
If getting good real-world performance means extending present-day AI with new ways of learning (and planning too, but learning is the big one unless we go all the way to model-based RL), then whether current LLMs output “Prevent war” in response to “What would you do?” is only slightly more relevant than whether my spam filter successfully filters out scams.
Without, of course, killing all humans to prevent war, prevent climate issues, decrease poverty, and make sure all living humans have access to education.
Thank you for the explanation.
Would you consider a human working to prevent war fundamentally different from a gpt4-based agent working to prevent war?
Very different in architecture, capabilities, and appearance to an outside observer, certainly. I don’t know what you consider “fundamental.”
The atoms inside the H-100s running gpt4 don’t have little tags on them saying whether it’s “really” trying to prevent war. The difference is something that’s computed by humans as we look at the world. Because it’s sometimes useful for us to apply the intentional stance to gpt4, it’s fine to say that it’s trying to prevent war. But the caveats that come with that are still very large.