But then the possibilities for 2027 branch on whether there are reliable agents, which doesn’t seem knowable either way right now.
Very reliable, long-horizon agency is already in the capability overhang of Gemini 2.5 Pro, perhaps even of the previous-tier models (Gemini 2.0 exp, Sonnet 3.5/3.7, GPT-4o, Grok 3, DeepSeek R1, Llama 4). It's just a matter of harness/agent-wrapping logic and inference-time compute budget.
Agency engineering is currently in the brute-force stage. Agent engineers over-rely on a single LLM rollout being robust, and they often use LLM APIs that lack the nitty-gritty affordances needed for reliable agency, such as sampling N completions with timely self-consistency pruning, and scaling N up again when the model's own uncertainty is high.
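A minimal sketch of that pattern, under my own assumptions: `sample_completion` is a hypothetical stand-in for any LLM API call that returns one sampled rollout, and disagreement among samples serves as the uncertainty signal that triggers scaling N up.

```python
# Sketch: N completions + self-consistency pruning, with N scaled up when
# the samples disagree (a crude proxy for high model uncertainty).
# `sample_completion` is hypothetical; real APIs expose sampling differently.
from collections import Counter
from typing import Callable

def self_consistent_answer(
    prompt: str,
    sample_completion: Callable[[str], str],  # hypothetical: one sampled rollout
    n_initial: int = 5,
    n_max: int = 20,
    agreement_threshold: float = 0.6,
) -> str:
    """Sample N completions, keep the majority answer, and re-sample with a
    larger N when the answers disagree too much."""
    n = n_initial
    while True:
        completions = [sample_completion(prompt) for _ in range(n)]
        best, best_count = Counter(completions).most_common(1)[0]
        agreement = best_count / n
        # Confident enough, or out of inference budget: return the mode.
        if agreement >= agreement_threshold or n >= n_max:
            return best
        # Low agreement -> spend more inference compute: double N and retry.
        n = min(2 * n, n_max)
```

In practice the majority vote would be over canonicalized answers (exact string matching is too brittle), but the control flow is the point: the harness, not the single rollout, carries the reliability.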
This somewhat reminds me of the early LLM scale-up era, when LLM engineers over-relied on "stack more layers" without digging into the architectural details. The best example is perhaps Megatron, a trillion-parameter model from 2021 whose performance is probably abysmal relative to 2025 models of ~10B parameters (perhaps even 1B).
So, the current agents (such as Cursor, Claude Code, Replit, Manus) are in the “Megatron era” of efficiency. In four years, even with the same raw LLM capability, agents will be very reliable.
To give a more specific example of when robustness is a matter of spending more on inference, consider Gemini 2.5 Pro: contrary to the hype, it often misses crucial considerations or acts strangely stupid even on modestly sized contexts (under 50k tokens). However, seeing these omissions, it's clear to me that if someone paired ~1k-token chunks of that context with 2.5 Pro's output and asked a smaller LLM (Flash or Flash-Lite) "did this part of the context properly inform that output?", Flash would answer No exactly where 2.5 Pro missed something important from that part of the context. Such a No should trigger a fallback: N completions, a 2.5 self-review over smaller pieces of the context, breaking the context down hierarchically, etc.
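Here is a sketch of that chunk-level verification loop, under my own simplifications: `ask_small_model` is a placeholder for a call to a cheap verifier (a Flash-tier model), and the ~1k-token chunking is approximated by a fixed word count rather than a real tokenizer.

```python
# Sketch: check each ~1k-token chunk of the context against the big model's
# answer using a cheap verifier; any "No" verdict triggers a fallback.
from typing import Callable, List

def chunk_context(context: str, words_per_chunk: int = 750) -> List[str]:
    """Crude stand-in for ~1k-token chunks: split on whitespace."""
    words = context.split()
    return [
        " ".join(words[i : i + words_per_chunk])
        for i in range(0, len(words), words_per_chunk)
    ]

def find_missed_chunks(
    context: str,
    answer: str,
    ask_small_model: Callable[[str], str],  # hypothetical cheap verifier call
) -> List[int]:
    """Return indices of context chunks the answer apparently failed to use."""
    missed = []
    for i, chunk in enumerate(chunk_context(context)):
        verdict = ask_small_model(
            "Context excerpt:\n" + chunk
            + "\n\nProposed answer:\n" + answer
            + "\n\nDid the answer properly take this excerpt into account? Reply Yes or No."
        )
        if verdict.strip().lower().startswith("no"):
            missed.append(i)
    return missed

# If find_missed_chunks(...) is non-empty, the harness falls back to
# N completions, a self-review pass over the flagged chunks, or a
# hierarchical breakdown of the context, as described above.
```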
I meant "reliable agents" in the AI 2027 sense, that is, something on the order of being sufficient for automated AI research, leading to much more revenue and investment in the lead-up, rather than stalling at ~$100bn per individual training system for multiple years. My point is that it's not currently knowable whether this happens imminently, in 2026-2027, or only a few years later; I don't expect that evidence distinguishing these possibilities exists even within the leading AI companies.