Factory robots and high-frequency traders are definitely agent AI. They are designed to be, and they frankly make no sense in any other way.
The factory robot does not ask you whether it should move three millimeters to the left; it does not suggest that perhaps moving three millimeters to the left would be wise; it moves three millimeters to the left, because that is what its code tells it to do at this phase in the welding process.
The high-frequency trader even has a utility function: it’s called profit, and it searches over ways of trading options and derivatives to maximize it.
In both cases, these are agents, because they act directly on the world itself, without a human intermediary approving their decisions.
The only reason I’d even hesitate to call them agent AIs is that they are so stupid; the factory robot has hardly any degrees of freedom at all, and the high-frequency trader only has choices between different types of financial securities (it never asks whether it should become an entrepreneur, for instance). But that is a question about the “AI” part, not the “agent” part; they’re definitely agents rather than tools.
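Here is a toy sketch of the distinction I mean, purely illustrative: the action set, the profit numbers, and every function name in it are made up, not any real trading system.

```python
# Toy sketch: a "tool" proposes and waits for approval; an "agent" acts directly.
# Everything here is hypothetical, including the action set and utilities.

# The trader's fixed, narrow action set. Note what is NOT in it:
# "become an entrepreneur" is not an option it can even consider.
ACTIONS = ["buy_option", "sell_option", "buy_derivative"]

def utility(action):
    """The high-frequency trader's utility function: expected profit (made-up numbers)."""
    expected_profit = {"buy_option": 1.2, "sell_option": 0.8, "buy_derivative": 1.5}
    return expected_profit[action]

def execute(action):
    """Stand-in for acting directly on the world."""
    print(f"Acting on the world: {action}")

def tool_ai():
    """Tool: suggests the utility-maximizing action; a human approves before anything happens."""
    suggestion = max(ACTIONS, key=utility)
    if input(f"Execute {suggestion}? [y/n] ") == "y":
        execute(suggestion)

def agent_ai():
    """Agent: picks and executes the utility-maximizing action with no human in the loop."""
    execute(max(ACTIONS, key=utility))
```

The factory robot and the high-frequency trader are both `agent_ai` here: the human approval step simply does not exist in their control loop.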
I do like your quick thought, though:
Quick thought: If it’s hard to get AGIs to generate plans that people like, then it would seem that AGIs fall into this exception class, since in that case humans can do a better job of telling whether they like a given plan.
Yes, it makes a good deal of sense that we would want some human approval involved in the process of restructuring human society.
They’re clearly agents given Holden’s definitions. Why are they clearly agents given my proposed definition? (Normally I don’t see a point in arguing about definitions, but I think my proposed definition lines up with something of interest: things that are especially likely to become dangerous if they’re more powerful.)