You mention the distinction between agent-like architecture and agent-like behavior (which I find similar to my distinction between selection and control), but how does the concept of A(Θ)-morphism account for this distinction? I have a sense that (formalized) versions of A(Θ)-morphism are going to be more useful (or easier?) for the behavioral side, though it isn’t really clear.
> I have a sense that (formalized) versions of A(Θ)-morphism are going to be more useful (or easier?) for the behavioral side, though it isn’t really clear.
I think A(Θ)-morphisation is primarily useful for describing what we often mean when we say “agency”. In particular, I view this as distinct from the question of which concepts we should be thinking about in this space. (I think the promising candidates include the notion of learning that Vanessa points to in her comment, optimization, search, and the concepts in the second part of my post.)
However, I think it might also serve as a useful part of the language for describing (non-)agent-like behavior. For example, we might want to SGD-morphise an E. coli bacterium independently of whether it actually implements some form of stochastic gradient descent w.r.t. the concentration of some chemicals in the environment.
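To make the E. coli example a bit more concrete, here is a minimal, entirely hypothetical sketch of what “SGD-morphising” could look like: we *model* the bacterium as performing noisy gradient ascent on a toy 1-D concentration field and use that model to predict its movement, with no commitment about whether the bacterium’s internal mechanism resembles SGD at all. The field, step size, and noise term are all illustrative assumptions, not anything from the original discussion.

```python
import random

def concentration(x):
    """Toy 1-D chemical concentration field, peaked at x = 5 (an assumption)."""
    return -(x - 5.0) ** 2

def predicted_step(x, lr=0.1, noise=0.0):
    """Predict the bacterium's next position under the gradient-ascent model.

    This is the 'morphism': a predictive model we impose on the system,
    not a claim about its actual internal computation.
    """
    eps = 1e-5
    # Finite-difference estimate of the concentration gradient at x.
    grad = (concentration(x + eps) - concentration(x - eps)) / (2 * eps)
    return x + lr * grad + noise * random.gauss(0, 1)

# With noise=0, the model predicts monotone drift toward the peak at x = 5.
x = 0.0
for _ in range(50):
    x = predicted_step(x)
```

Under this sketch, the model counts as a good SGD-morphism of the bacterium exactly insofar as its predicted trajectory matches the observed one, which is the behavioral (map-side) criterion rather than an architectural (territory-side) one.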
> You mention the distinction between agent-like architecture and agent-like behavior (which I find similar to my distinction between selection and control), but how does the concept of A(Θ)-morphism account for this distinction?
I think of agent-like architectures as something objective, or related to the territory. In contrast, agent-like behavior is something subjective, something in the map. Importantly, agent-like behavior, or the lack of it, of some X is something that exists in the map of some entity Y (where often Y≠X).
The selection/control distinction seems related, but not quite the same to me. Am I missing something there?
A(Θ)-morphism seems to me to involve both agent-like architecture and agent-like behavior, because it talks about prediction generally. Mostly I was asking whether you were trying to point it one way or the other (we could talk about prediction-of-internals exclusively, to point at structure, or prediction-of-externals exclusively, to point at behavior—I was unsure whether you were trying to do one of those things).
Since you say that you are trying to formalize how we informally talk, rather than how we should, I guess you weren’t trying to make A(Θ)-morphism get at this distinction at all, and were separately mentioning the distinction as one which should be made.
I agree with your summary :). The claim was that humans often predict behavior by assuming that something has a particular architecture.
(And some confusions about agency seem to appear precisely because of not making the architecture/behavior distinction.)