Agent Boundaries Aren’t Markov Blankets. [Unless they’re non-causal; see comments.]
Edit: I now see that this argument was making an unnecessary assumption that the Markov blankets in question would have to relate nicely to a causal model; see John’s comment.
Friston has famously invoked the idea of Markov Blankets for representing agent boundaries, in arguments related to the Free Energy Principle / Active Inference. The Emperor’s New Markov Blankets by Jelle Bruineberg competently critiques the way Friston tries to use Markov blankets. But some other unrelated theories also try to apply Markov blankets to represent agent boundaries. There is a simple reason why such approaches are doomed.
This argument is due to Sam Eisenstat.
Consider the data-type of a Markov blanket. You start with a probabilistic graphical model (usually, a causal DAG), which represents the world.
A “Markov blanket” is a set of nodes in this graph, which probabilistically insulates one part of the graph (which we might call the part “inside” the blanket) from another part (“outside” the blanket).[1]
(“Probabilistically insulates” means that the inside and outside are conditionally independent, given the Markov blanket.)
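To make the data-type concrete, here is a minimal sketch (the graph and the node names are invented for illustration; they are not from any particular agent model) of the standard rule for reading a Markov blanket off a DAG: the blanket of a node set consists of the set’s parents, its children, and its children’s other parents, all taken from outside the set.

```python
from collections import defaultdict

def markov_blanket(edges, inside):
    """Return the Markov blanket of the node set `inside` in the DAG given by `edges`."""
    parents, children = defaultdict(set), defaultdict(set)
    for u, v in edges:
        children[u].add(v)
        parents[v].add(u)

    blanket = set()
    for node in inside:
        blanket |= parents[node]                 # parents of inside nodes
        for child in children[node]:
            blanket |= {child} | parents[child]  # children, plus co-parents
    return blanket - inside                      # the blanket lies outside the set

# Toy stationary-agent chain: stimulus -> sensor -> agent_state -> actuator -> environment
edges = [("stimulus", "sensor"), ("sensor", "agent_state"),
         ("agent_state", "actuator"), ("actuator", "environment")]
print(markov_blanket(edges, {"agent_state"}))    # -> {'sensor', 'actuator'}
```

In this toy chain, conditioning on sensor and actuator screens agent_state off from stimulus and environment, which is exactly the probabilistic insulation described above.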
So the obvious problem with this picture of an agent boundary is that it only works if the agent takes a deterministic path through space-time. We can easily draw a Markov blanket around an “agent” who just stays still, or who moves with a predictable direction and speed.
But if an agent’s direction and speed are ever sensitive to external stimuli (which is a property common to almost everything we might want to call an ‘agent’!) we cannot draw a Markov blanket such that (a) all of the agent is inside, and (b) everything inside is the agent.
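To see the problem concretely, here is a toy sketch (the two-cell world is hypothetical, invented here for illustration): once the agent’s motion depends on a stimulus, the set of physical nodes that counts as “the agent” at the next time step varies from world to world, so no fixed node set can serve as the inside of a blanket.

```python
# Hypothetical two-cell world (invented for illustration, not from the post).
# The agent starts in cell 0 at t0 and moves to cell 1 at t1 iff a stimulus fires.
for stimulus in (0, 1):
    agent_cell_t1 = stimulus           # motion is sensitive to the external stimulus
    print(f"stimulus={stimulus}: at t1 the agent is the node cell{agent_cell_t1}_t1")

# No fixed "inside" set of spatial nodes works:
#   {cell0_t1, cell1_t1} violates (b): in each world, one cell is background, not agent;
#   {cell0_t1} alone violates (a): when stimulus=1 the agent has moved outside the set.
```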
It would be a mathematical error to say “you don’t know where to draw the Markov blanket, because you don’t know which way the Agent chooses to go”—a Markov blanket represents a probabilistic fact about the model without any knowledge you possess about values of specific variables, so it doesn’t matter if you actually do know which way the agent chooses to go.[2]
The only way to get around this (while still using Markov blankets) would be to construct your probabilistic graphical model so that one specific node represents each observer-moment of the agent, no matter where the agent physically goes.[3] In other words, start with a high-level model of reality which already contains things like agents, rather than a low-level purely physical model of reality. But then you don’t need Markov blankets to help you point out the agents. You’ve already got something which amounts to a node labeled “you”.
I don’t think it is impossible to specify a mathematical model of agent boundaries which does what you want here, but Markov blankets ain’t it.
[1] Although it’s arbitrary which part we call inside vs. outside.
[2] Drawing Markov blankets wouldn’t even make sense in a model that’s been updated with complete info about the world’s state; if you know the values of the variables, then everything is trivially probabilistically independent of everything else anyway, since known information won’t change your mind about known information. So any subset would be a Markov blanket.
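Spelled out, as a quick sketch of the trivial calculation: once the full state $(i^*, b^*, o^*)$ is known, the distribution is a point mass, so for any partition of the variables into inside $I$, blanket $B$, and outside $O$,

$$P(I=i,\, O=o \mid B=b^*) \;=\; \mathbf{1}[i=i^*]\,\mathbf{1}[o=o^*] \;=\; P(I=i \mid B=b^*)\,P(O=o \mid B=b^*),$$

so $I \perp O \mid B$ holds for every choice of $B$, and “Markov blanket” ceases to pick anything out.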
[3] Or you could have a more detailed model, such as one node per neuron; that would also work fine. But the problem remains the same: you can only draw such a model if you already understand your agent as a coherent object, in which case you don’t need Markov blankets to help you draw a boundary around it.