For what it’s worth, I’m skeptical that the dichotomy you set up is meaningful, or coherent. For example, I tend to think future AI will be both “like today’s AI but better” and “like the arrival of a new intelligent species on our planet”. I don’t see any contradiction in those statements.
To the extent the two columns evoke different images of future AI, I think it mostly reflects a smooth, quantitative difference: how many iterations of improvement are we talking about? Once you make the context windows sufficiently long, add a few more modalities, give them robot bodies, and improve their reasoning skills, LLMs will just look a lot like “a new intelligent species on our planet”. Likewise, agency exists on a spectrum and will likely increase incrementally; the point at which you start calling an LLM an “agent” rather than a “tool” is subjective. This all seems natural to me, and I feel like I can see a clear path from current AI to the right-column AI.
I think you’re not the target audience for this post.
Pick a random person on the street who has used ChatGPT, and ask them to describe a world in which we have AI that is “like ChatGPT but better”. I think they’ll describe a world very much like today’s, but where ChatGPT hallucinates less and writes better essays. I really don’t think they’ll describe the right-column world. If you’re imagining the right-column world, then great! Again, you’re not the target audience.