Concepts are generally clusters and I would say that being well-predicted by the Intentional Strategy is one aspect of what is meant by agency.
Another aspect relates to the interior functioning of an object. A very simple model would be to say that we generally expect the object to have a) some goals, b) counterfactual modeling abilities, and c) the disposition to pursue those goals based on these modeling abilities. This definition is less appealing because it is much vaguer, and each of its elements would need further clarification; however, that doesn't make it any less a part of what people are generally imagining when they think of an agent. Humans come pre-equipped with at least a vague and casual sense of what these kinds of terms mean, so the above description is already sufficient for us to say, for example, that a metal ball that seems agentic according to the Intentional Stance because it is controlled by a magnet isn't agentic (on its own) according to the interior functioning stance.
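The three-part model above (goals, counterfactual modeling, and pursuing goals via that modeling) can be sketched as toy code. This is purely an illustrative sketch under invented assumptions: the numeric state space, the target value of 10, and the function names are all hypothetical, not part of any real definition of agency:

```python
# A minimal sketch of the three-part interior-functioning model:
# (a) a goal, (b) a counterfactual model, (c) action selection that
# pursues the goal via that model. All details here are invented.

def simulate(state: int, action: int) -> int:
    """(b) Counterfactual model: predict the state that would result
    from taking `action` in `state` (here, trivially additive)."""
    return state + action

def goal_score(state: int) -> int:
    """(a) The goal: prefer states close to a (hypothetical) target of 10."""
    return -abs(10 - state)

def choose_action(state: int, actions: list[int]) -> int:
    """(c) Pursue the goal: pick the action whose simulated (counterfactual)
    outcome scores best under the goal."""
    return max(actions, key=lambda a: goal_score(simulate(state, a)))

print(choose_action(7, [-1, 0, 1, 2, 5]))  # picks 2, since 7 + 2 = 9 is closest to 10
```

The point of the sketch is only that all three parts are separable: the magnet-controlled metal ball has none of them internally, even though its trajectory might be well-predicted by the Intentional Stance.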
I don’t have time to expand on every aspect here (especially since each of these definitions would itself require further expansion, and so on), so I’ll just focus on the notion of goals. Here are some relevant considerations for whether something counts as a goal:
Human-like goals are more likely to be considered goals than, for example, printing out every number that meets 20 conditions without falling into one of 300 exceptions. However, we would be more likely to accept even this as a goal if we were told there was a simple reason for performing such a weird analysis (e.g. legal compliance).
We are more likely to consider a system to have goals if it represents them simply, but again, given a sufficient reason we might still accept a complex representation as a goal (for example, if we were told the representation only looked complicated because the hard drive was encrypted).
The goals should be used to determine behavior, although this moves us into part c) of the interior functioning requirements.
Note that a large part of the challenge is that we can’t imagine every possible way of interpreting a system, so it would be easy to declare that a system has goals whenever it meets these three conditions, only for the conditions to turn out broad enough that almost everything counts as having a goal. So what usually happens is that we pick out properties that seem to include most of the things we consider to have goals and to exclude most of the things we don’t (though we normally just handwave here). Then, if someone points out that our definition picks out too much, we narrow it by adding tighter conditions. So this isn’t really an objective process.
Again, our definitions have used vague language, but that’s just how our minds work.