The ultimate test will be seeing whether the predictions it makes come true—whether agenty mesa-optimizers arise often, whether humans with tools get outcompeted by agent AGI.
In the meantime, it’s not too hard to look for confirming or disconfirming evidence. For example, the fact that militaries and corporations that make a plan and then task their subordinates with following it strictly generally do worse than those that make a plan and then give their subordinates the initiative and flexibility to learn and adapt on the fly seems like confirming evidence. (See: the agile development model, the importance of iteration and feedback loops in startup culture, etc.) Whereas the fact that AlphaZero is so good despite lacking a learning module (its trained policy is frozen at play time) is perhaps disconfirming evidence.
As for a test, we’d need something that proponents and opponents agree to disagree on, and that might be hard to find. Most tests I can think of right now don’t work because everyone would agree on the probable outcome. The best I can do is this: someday soon we might be able to pit an agenty architecture against a non-agenty architecture in some big, complex, novel game environment, and this conjecture predicts that for sufficiently complex and novel environments the agenty architecture would win.
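To make the proposed test a bit more concrete, here is a toy sketch (all names and the environment are hypothetical, not a real benchmark) of what “agenty vs. non-agenty architecture” could mean operationally: both systems get the same observations and action space, but only one simulates ahead with a world model before acting.

```python
# Toy sketch only: a hypothetical comparison harness, not a real benchmark.
ACTIONS = ["left", "right", "up", "down"]

def reactive_policy(observation, policy_table):
    """Non-agenty architecture: a fixed observation-to-action mapping learned
    offline; no world model and no lookahead at decision time."""
    return policy_table.get(observation, ACTIONS[0])

def planning_agent(observation, world_model, utility, depth=3):
    """Agenty architecture: roll a learned world model forward a few steps and
    pick the action whose predicted future scores best under its utility."""
    def best_value(state, remaining):
        if remaining == 0:
            return utility(state)
        return max(best_value(world_model(state, a), remaining - 1) for a in ACTIONS)
    return max(ACTIONS, key=lambda a: best_value(world_model(observation, a), depth - 1))
```

Stated in these terms, the conjecture is that as the environment gets more complex and more novel relative to what either system saw in training, the planning agent’s score pulls ahead of the reactive policy’s.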
I’d agree with the point that giving subordinates plans and the freedom to execute them as best they can tends to work out better, but that seems strongly dependent on context, in particular the field they’re working in (e.g. software engineering vs. civil engineering vs. military engineering), cultural norms (e.g. is this a place where agile engineering norms have taken hold?), and reward distributions (e.g. does experimenting by individuals hold the potential for big rewards, or are rewards likely to be normally distributed, such that we don’t expect to find outliers?).
My general model is that in certain fields humans look more tool-shaped and in others more agent-shaped. For example, an Uber driver executing instructions from the central command-and-control algorithm doesn’t need to do as much planning or world-modeling. One way this could apply to AI is that the sub-agents of an agent AI would be tools.
I agree. I don’t think agents will outcompete tools in every domain; indeed, in most domains specialized tools may eventually win (already we see humans being replaced by expensive specialized machinery, or by expensive human specialists, in lots of places). But I still think there will be strong competitive pressure to create agent AGI, because there are many important domains where agency is an advantage.
Expensive specialized tools are themselves learned by, and embedded inside, an agent to achieve goals. They’re simply mesa-optimization in another guise. E.g. AlphaGo learns a reactive policy that does nothing you’d recognize as ‘planning’ or ‘agentiness’: it just maps a grid of numbers (the board state) to another grid of numbers (value estimates for each move). A company, beholden to evolutionary imperatives, can implement internal ‘markets’ with ‘agents’ if it finds that useful for allocating resources across departments, or use top-down mandates if those work better; but no matter how it allocates resources, it’s all in the service of an agent, and any distinction between the ‘tool’ and ‘agent’ parts of the company is somewhat illusory.
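To put the ‘reactive policy’ point in code, here is a minimal sketch (a toy 9x9 board and a made-up linear ‘network’; this is not AlphaGo’s actual architecture) of a pure mapping from one grid of numbers to another, with the agentiness living entirely in whatever training process produced the weights:

```python
import numpy as np

def value_map(board: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Toy stand-in for a learned value head: a 9x9 board in, per-move value
    estimates out. One feed-forward pass; no search, no goals, no planning."""
    flat = board.reshape(-1)              # 81 numbers in
    scores = weights @ flat               # a linear layer as a placeholder
    return scores.reshape(board.shape)    # 81 numbers out

def pick_move(board: np.ndarray, weights: np.ndarray):
    """Using the tool: mask occupied points and take the argmax. Still reactive."""
    values = value_map(board, weights)
    values[board != 0] = -np.inf
    return np.unravel_index(np.argmax(values), board.shape)

# The 'agent' sits outside these functions, in the self-play and gradient
# descent loop that chose `weights` in pursuit of a goal (winning games).
```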