Embedded agents have a spatial extent. If we use the analogy between physical spacetime and a domain of an environment's computations, this offers interesting interpretations for some terms.
In a domain, counterfactuals might be seen as points/events/observations that are incomparable in the specialization order, that is, points that are not in each other's logical future. Via the spacetime analogy, this is the same as the points being space-like separated. This motivates calling collections of mutually counterfactual (incomparable) events logical space, in the same sense that events comparable in the specialization order follow logical time. (Some other non-Fréchet spaces would likely give more interesting space-like subspaces than a domain typical of program semantics.)
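To make the order-theoretic picture concrete, here is a minimal sketch with invented events and names (the relation `earlier` and the helpers `leq` and `spacelike` are illustrative assumptions, not part of any formalism in the text): a finite partial order standing in for logical time, with space-like separation read off as incomparability.

```python
# Sketch: "logical time" as a partial order on events.
# The events and the `earlier` covering relation are invented for illustration.

earlier = {
    ("start", "a"),   # "a" lies in the logical future of "start"
    ("start", "b"),   # so does "b", but "a" and "b" are incomparable
    ("a", "joint"),
    ("b", "joint"),   # both "a" and "b" precede the event "joint"
}

def leq(x, y):
    """True iff x is in the logical past of y (reflexive-transitive closure)."""
    if x == y:
        return True
    return any(u == x and leq(v, y) for (u, v) in earlier)

def spacelike(x, y):
    """Space-like separated: neither event is in the other's logical future."""
    return not leq(x, y) and not leq(y, x)

print(spacelike("a", "b"))      # True: mutually counterfactual events
print(spacelike("start", "a"))  # False: these follow logical time
```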
An embedded agent extant in the logical space of an environment (at a particular time) is then a collection of counterfactuals. In this view, an agent is not a specific computation, but rather a collection of possible alternative behaviors/observations/events of an environment (resulting from multiple different computations), events that are counterfactual to each other. The logical space an agent occupies comprises the behaviors/observations/events (partial-states-at-a-time) of possible environments where the agent has influence.
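Continuing the sketch above (same invented poset and helpers), an agent in this sense is an antichain: a set of events that are pairwise space-like separated.

```python
from itertools import combinations

def is_antichain(events):
    """Every pair of events is mutually counterfactual (space-like separated)."""
    return all(spacelike(x, y) for x, y in combinations(events, 2))

agent = {"a", "b"}          # hypothetical alternative behaviors of one agent
print(is_antichain(agent))  # True: the agent occupies a patch of logical space
```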
In this view, counterfactuals are not merely phantasmal decision theory ideas developed to make sure that reality doesn't look like them, hypothetical threats that should never obtain in actuality. Instead, they are reified as equals to reality, as parts of the agent, and an agent's description is incomplete without them. This is not as obvious as with parts of a physical machine, because usually each small part of a machine doesn't contain a precise description of the whole machine. With agents, an actual agent suggests quite strongly what its counterfactual behaviors would be in the adjacent possible environments, at least given a decision theory that interprets such things. So this resembles a biological organism where each cell has a blueprint for the whole body: each expression of counterfactual behavior of an embedded agent carries the whole design of the agent, sufficient to reconstruct its behavior in the other counterfactuals. But this point of view suggests that this is not a necessary property of embedded agents, that counterfactuals might have independent content, being other parts of a larger design.
For counterfactuals in decision theory, this cashes out as an agent's imperfect ability to know what it does in counterfactuals, or as coordination with other agents that have different designs in different counterfactuals, acausal trade across logical space. So there is essentially nothing new: the notion of "logical space" and of agents having extent in logical space adds up to normality, extending the title of a singular "agent" to a collective of agents with different designs that are mutually counterfactual and engaged in acausal trade with each other, parts of the collective. It is natural to treat different parties engaged in acausal trade as parts of a whole, since they interact and influence each other's behavior. With sufficient integration, it becomes more apt to call the whole collective "an agent" instead of privileging views that only focus on one part (counterfactual) at a time.
Logical space is an unusual notion of counterfactuals, because different points of a logical space can have a common logical future, that is, different counterfactuals can contribute to the same future logical event, both lying in that event's past. This is not surprising given acausal trade, and predictors that ask what a given agent/computation does in multiple counterfactual situations. But it usefully runs counter to the impression that counterfactuals necessarily diverge from each other irrevocably, embedding a mutual contradiction that prevents them from ever being reunited in a single possibility.
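In the toy poset above, this is just the observation that incomparable points can share an upper bound: the hypothetical event `"joint"` (say, a predictor's output that depends on what the agent does in both counterfactuals) lies in the logical future of both `"a"` and `"b"`.

```python
def common_future(x, y, universe):
    """Events lying in the logical future of both x and y."""
    return {z for z in universe if leq(x, z) and leq(y, z)}

universe = {"start", "a", "b", "joint"}
print(common_future("a", "b", universe))  # {'joint'}: reunited after diverging
```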