If something is good at replicating, then there will be more of it; this creates a selection effect for things that are good at replicating. The effects of this can be observed in both biology and memetics.
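To make the selection effect concrete, here is a minimal sketch (my own illustration, assuming nothing beyond a difference in replication rates): even a modest replication advantage compounds generation after generation until the better replicator dominates the population.

```python
# Two replicator types start at equal frequency; "fast" has a 10%
# replication advantage. Differential replication alone drives it
# toward fixation.

def step(pop, rates):
    """One generation: each type multiplies by its replication rate,
    then the population is renormalized back to frequencies."""
    new = {t: n * rates[t] for t, n in pop.items()}
    total = sum(new.values())
    return {t: n / total for t, n in new.items()}

pop = {"fast": 0.5, "slow": 0.5}     # equal starting frequencies
rates = {"fast": 1.1, "slow": 1.0}   # 10% replication advantage

for generation in range(100):
    pop = step(pop, rates)

print(pop)  # "fast" ends up at roughly 0.9999 of the population
```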
Maybe a self-replicator can be seen as an agentic system with the goal of self-replication? In this particular question, all the uncertainty comes from “agent” being a fuzzy concept, not from any uncertainty about the world. So answering this question is a choice of perspective, not information about the world.
Either way, the type of agency I’m mainly interested in is the type that has goals other than just self-replication. Although maybe there are things to be learned from the special case of having self-replication as a goal?
If the AI learns my values, then this is a replication of my values. But there are also examples of magic agentic force where my values are not copied at any point along the way.
Looking at how society is transferred between generations might hold some clues to value learning? But I’m less optimistic about looking at what is similar across self-replication in general, because I think I already know this, and also, it seems to be one abstraction level too high, i.e. the similarities are properties above the mechanistic details, and those details are what I want.
To be agentic requires an agent that has parts included in it and an outside, the environment, that is acted on. Also, the agent needs to persist over time, which means continually maintaining or re-creating its own structure; thus it has to be a replicator. I’m not sure you can have agents that are not replicators, or replicators that are not agentic, i.e. that don’t control (parts of) their environment. When we say replicator, we focus on the replicating machinery, and when we talk about an agent, we focus on the control of the environment. But these are two sides of the same coin; maybe even a formal dual can be discovered.
All existing agents are evolved replicators, which means that all their agency is focused on replication: their own replication, the replication of other coevolved agents, and the replication of non-agentic parts of the environment. There will be more or less accidental complexity (accidental relative to the selection pressure of evolution; maybe a better term would be unforced structure).
“the type of agency I’m mainly interested in is the type that has goals other than just self-replication”
The “just” does a lot of work in that sentence. I think it can only refer to unforced structures. As humans, we care about those, but “caring” is itself, for the most part, in support of replication.