I want to say “yes, but this is different”, but not in the sense of “I acknowledge the existence of your evidence, but ignore it”.
My intuition tells me that we don’t “induce” taskiness in modern systems; it just happens because we don’t build them general enough. That probably won’t hold once we start building models of capable agents in natural environments.
Certainly possible. Though we seem to be continually marching down the list of tasks we once thought “can only be done with systems that are really general/agentic/intelligent” (think: spatial planning, playing games, proving theorems, understanding language, competitive programming...) and finding that, nope, actually we can engineer systems that have the distilled essence of that capability.
That makes a deflationary account of cognition increasingly likely in my eyes: we never see the promised reduction into “one big insight”, but rather chunks of the AI field continue to break off & become unsexy but useful techniques (as happened with planning algorithms, compilers, functional programming, knowledge graphs etc., no longer even considered “real AI”). Maybe economic forces push against this, but I’m kinda doubtful, seeing how hard building agenty AI is proving and how useful these decomposed tasky AIs are looking.
Decomposed tasky AIs are pretty useful. Given we don’t yet know how to build powerful agents, they are better than nothing. This is entirely consistent with a world where, once agenty AI is developed, it beats the pants off tasky AI.