There’s discussion of why Tool AIs are expected to become agents; one of the biggest arguments is that agents are likely to be more effective than tools. If you have a tool, you can ask it what you should do in order to get what you want; if you have an agent, you can simply ask it to get you the things you want. Compare Google Maps with a self-driving car: Google Maps is great, but a car that acts as an agent gets you all kinds of further benefits on top of the navigation.
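To make the tool/agent distinction concrete, here’s a toy sketch in Python. All the names (`RoutePlanner`, `SelfDrivingCar`, and so on) are hypothetical, invented just to illustrate the difference in interface, not any real navigation or vehicle API:

```python
class RoutePlanner:
    """A tool: answers 'what should I do?' but takes no action itself."""

    def suggest_route(self, start: str, destination: str) -> list[str]:
        # Returns a plan; the human is still the one who acts on it.
        return [f"drive from {start} toward {destination}",
                "turn left at Main St"]


class SelfDrivingCar:
    """An agent: you state the goal, and it acts in the world to achieve it."""

    def __init__(self, planner: RoutePlanner):
        self.planner = planner

    def take_me_to(self, destination: str) -> None:
        # The same planning happens inside, but now the system itself
        # executes each step instead of handing the plan back to a human.
        for step in self.planner.suggest_route("current location", destination):
            self.execute(step)

    def execute(self, step: str) -> None:
        print(f"executing: {step}")


# Tool use: the human reads the plan and then acts on it themselves.
plan = RoutePlanner().suggest_route("home", "office")

# Agent use: the human states the goal and the system acts.
SelfDrivingCar(RoutePlanner()).take_me_to("office")
```

The point of the sketch is that the agent contains the tool: the planning step is the same, so anyone who already has the tool is one wrapper away from having the agent.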
It would be great if everyone stuck to building only tool AIs. But if everyone knows they could gain an advantage over their competitors by building an agent, it’s unlikely that they will all voluntarily restrain themselves out of caution.
Also, it’s not clear that there’s any sharp dividing line between AGI and non-AGI AI. If you’ve been building agentic AIs all along (as people are doing right now) and they slowly get smarter and smarter, how do you know at what point you should stop building agents and switch to building only tools? Especially when you know your competitors might not be as cautious as you are: if you stop, they might keep going, and their smarter agent AIs will outcompete yours, so the world is no safer and you’ve lost to them. (And at the same time, they are applying the same logic for why they should not stop, since they don’t know that you can be trusted to stop.)
Would you say a self-driving car is a tool AI or an agentic AI? I can see that a self-driving car is a bit more agentic, but as long as it only drives when you tell it to, I would consider it a tool. Then again, I can also see that the border is a bit blurry.
If self-driving cars are not considered agentic, do you have examples of people attempting to make agent AIs?
As you say, it’s more of a continuum than a binary. A self-driving car is more agenty than Google Maps, and a self-driving car that made independent choices about where to drive would be more agentic still.
People are generally trying to make all kinds of more agentic AIs, because more agentic AIs are so much more useful.
Stock-trading bots that automatically buy and sell stock are more agenty than software that just tells human traders what to buy, and they’re preferred because a bot with no human in the loop can outcompete a slower system that waits for a human to make each decision.
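As a sketch of why the human is the bottleneck, here’s a hypothetical contrast in Python; `get_price` and `place_order` are made-up stand-ins for a real broker API, not anything that exists:

```python
def get_price(symbol: str) -> float:
    return 100.0  # stub for a real market-data feed


def place_order(symbol: str, side: str, qty: int) -> None:
    print(f"{side} {qty} {symbol}")  # stub for a real order API


def advisory_system(symbol: str) -> str:
    # Tool-style: emits a recommendation, then waits (seconds to minutes)
    # for a human to read it, decide, and click.
    return f"recommendation: buy {symbol}"


def trading_bot(symbol: str) -> None:
    # Agent-style: the same signal triggers an order in milliseconds,
    # with no human between the decision and the action.
    if get_price(symbol) < 101.0:
        place_order(symbol, "buy", 10)


print(advisory_system("ACME"))  # a human still has to act on this
trading_bot("ACME")             # the action has already happened
```

The decision logic is identical in both functions; the only difference is whether a human sits between the signal and the trade, and that difference is the entire speed advantage.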
An AI that autonomously optimizes data center cooling is more agenty than one that just tells human operators where to make adjustments, and it’s preferred too. That article doesn’t actually make explicit why they switched to an autonomously operating system, but the implied reason seems to be that it can make lots of small tweaks humans wouldn’t bother with, and is therefore more effective.
The military has expressed an interest in making its drones more autonomous (agenty) rather than remotely operated. This is for several reasons, including that remote-operated drones can be jammed, and that having a human in the loop slows down response time when fighting an enemy drone.
All kinds of personal assistant software that anticipates your needs and actively tries to help you is more agenty than software that just passively waits for you to use it. For example, once when I was visiting a friend, my phone popped up a notification that the last bus home was departing soon. Some people want their phones to be more agentic like this, because it’s convenient to have something actively anticipating your needs and making sure they get taken care of for you.
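The proactive pattern here is roughly: monitor context and volunteer an action when a need is predicted, instead of waiting to be asked. A minimal hypothetical sketch, where `last_bus_home` and `notify` stand in for real timetable and notification APIs:

```python
from datetime import datetime, timedelta


def last_bus_home(now: datetime) -> datetime:
    return now.replace(hour=23, minute=40)  # stub for a timetable lookup


def notify(message: str) -> None:
    print(f"notification: {message}")  # stub for a push notification


def proactive_check(now: datetime, away_from_home: bool) -> None:
    # Passive software would only answer if explicitly asked "when is the
    # last bus?". The agenty version anticipates the need and raises it
    # unprompted whenever the departure is getting close.
    departure = last_bus_home(now)
    if away_from_home and departure - now < timedelta(minutes=30):
        notify(f"Last bus home departs at {departure:%H:%M}")


proactive_check(datetime.now().replace(hour=23, minute=20), away_from_home=True)
```

Nothing in the sketch is smart; the agentiness comes purely from the software deciding on its own when to bring something to your attention.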