For the past year I’ve been thinking about the Agent vs. Tool debate (e.g. thanks to reading CAIS/Reframing Superintelligence) and also about embedded agency and mesa-optimizers and all of these topics seem very related now… I keep finding myself attracted to the following argument skeleton:
Rule 1: If you want anything unusual to happen, you gotta execute a good plan.
Rule 2: If you want a good plan, you gotta have a good planner and a good world-model.
Rule 3: If you want a good world-model, you gotta have a good learner and good data.
Rule 4: Having good data is itself an unusual happenstance, so by Rule 1 if you want good data you gotta execute a good plan.
Putting it all together: Agents are things which have good planner and learner capacities and are hooked up to actuators in some way. Perhaps they also are “seeded” with a decent world-model to start off with. Then, they get a nifty feedback loop going: They make decent plans, which allow them to get decent data, which allows them to get better world-models, which allows them to make better plans and get better data so they can get great world-models and make great plans and… etc. (The best agents will also be improving on their learning and planning algorithms! Humans do this, for example.)
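The feedback loop above can be sketched as a toy numerical model: plan quality depends on world-model quality, data quality depends on plan quality, and the learner uses that data to close part of the remaining gap in the world-model. The update rule and numbers here are illustrative assumptions, not anything from the post itself.

```python
def run_agent_loop(model_quality: float, steps: int) -> float:
    """Iterate the plan -> data -> learn loop and return final model quality.

    model_quality is a stand-in scalar in [0, 1]; the 0.5 learning rate is
    an arbitrary illustrative choice.
    """
    for _ in range(steps):
        plan_quality = model_quality   # Rule 2: a good world-model enables a good plan
        data_quality = plan_quality    # Rule 4: a good plan yields good data
        # Rule 3: good data lets the learner close part of the remaining gap.
        model_quality += 0.5 * data_quality * (1.0 - model_quality)
    return model_quality
```

Under this toy model, a "tool" with a frozen world-model stays at its seed quality forever, while the agent's quality compounds toward 1.0 with each cycle, which is the nifty feedback loop the paragraph describes.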
Empirical conjecture: Tools suck; agents rock, and that’s why. It’s also why agenty mesa-optimizers will arise, and it’s also why humans with tools will eventually be outcompeted by agent AGI.
How would you test the conjecture?
The ultimate test will be seeing whether the predictions it makes come true—whether agenty mesa-optimizers arise often, whether humans with tools get outcompeted by agent AGI.
In the meantime, it’s not too hard to look for confirming or disconfirming evidence. For example, the fact that militaries and corporations that make a plan and then task their subordinates with strictly following it generally do worse than those who make a plan and then give their subordinates initiative and flexibility to learn and adapt on the fly… seems like confirming evidence. (See: the agile development model, the importance of iteration and feedback loops in startup culture, etc.) Whereas perhaps the fact that AlphaZero plays so well despite doing no learning during play is disconfirming evidence.
As for a test, well we’d need to have something that proponents and opponents agree to disagree on, and that might be hard to find. Most tests I can think of now don’t work because everyone would agree on what the probable outcome is. I think the best I can do is: Someday soon we might be able to test an agenty architecture and a non-agenty architecture in some big complex novel game environment, and this conjecture would predict that for sufficiently complex and novel environments the agenty architecture would win.
I’d agree with the point that giving subordinates plans and the freedom to execute them as best they can tends to work out better, but that seems strongly dependent on context: the field they’re working in (e.g. software engineering vs. civil engineering vs. military engineering), cultural norms (e.g. is this a place where agile engineering norms have taken hold?), and reward distributions (e.g. does experimenting by individuals hold the potential for big rewards, or are rewards likely to be distributed normally, such that we don’t expect to find outliers?).
My general model is that in certain fields humans look more tool-shaped and in others more agent-shaped. For example, an Uber driver executing instructions from the central command-and-control algorithm doesn’t require much planning or world-modeling behavior. One way this could apply to AI is that the sub-agents of an agent AI would be tools.
I agree. I don’t think agents will outcompete tools in every domain; indeed in most domains perhaps specialized tools will eventually win (already, we see humans being replaced by expensive specialized machinery, or expensive human specialists, lots of places). But I still think that there will be strong competitive pressure to create agent AGI, because there are many important domains where agency is an advantage.
Expensive specialized tools are themselves learned by, and embedded inside, an agent to achieve goals. They’re simply mesa-optimization in another guise. E.g. AlphaGo learns a reactive policy which does nothing you’d recognize as ‘planning’ or ‘agentiness’ - it just maps a grid of numbers (the board state) to another grid of numbers (value estimates for each move). A company, beholden to evolutionary imperatives, can implement internal ‘markets’ with ‘agents’ if it finds that useful for allocating resources across departments, or use top-down mandates if those work better; but no matter how it allocates resources, it’s all in the service of an agent, and any distinction between the ‘tool’ and ‘agent’ parts of the company is somewhat illusory.