“Want” seems ill-defined in this discussion. To the extent it is defined in the OP, it seems to be “able to pursue long-term goals”, at which point tautologies are inevitable. The discussion gives me strong stochastic parrot / “it’s just predicting next tokens, not really thinking” vibes, where want/think are je ne sais quoi words that describe the human experience and provide comfort (or at least a shorthand explanation) for why LLMs aren’t exhibiting advanced human behaviors. I have little doubt that many labs are optimizing for long-term planning and that AI systems will exhibit increasingly better long-term planning capabilities over time, but I have no confidence about whether that will coincide with increases in “want”, mainly because I don’t know what that means. Just my $0.02, as someone with no technical or linguistics background.