You’re absolutely right to point out that the original formulation of the Good Regulator Theorem (Conant & Ashby, 1970) states:
“Every good regulator of a system must be a model of that system,” formalized as a deterministic mapping h : S → R from the states of the system to the states of the regulator.
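As a purely illustrative sketch (the state spaces and the mapping here are invented for this example, not taken from Conant & Ashby), h can be pictured as a deterministic lookup from system states to regulator states:

```python
# Toy illustration of the Good Regulator mapping h : S -> R.
# All states and the mapping itself are invented for this sketch.

# States of the regulated system (e.g., a room's temperature band)
S = ["too_cold", "comfortable", "too_hot"]

# States of the regulator (e.g., a thermostat's output)
R = ["heat_on", "idle", "cool_on"]

# The theorem's h: a deterministic map from system states to regulator
# states. The regulator "models" the system in the sense that its state
# is a function of the system's state.
h = {
    "too_cold": "heat_on",
    "comfortable": "idle",
    "too_hot": "cool_on",
}

def regulate(system_state: str) -> str:
    """Return the regulator state dictated by h."""
    return h[system_state]

for s in S:
    print(f"system={s:11} -> regulator={regulate(s)}")
```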
Strictly speaking, this does not require embeddedness in the physical sense—it is a general result about control systems and model adequacy. The theorem makes no claim that the regulator must be physically located within the system it regulates.
However, in the context of cognitive systems (like the brain) and self-referential agents, I am extending the logic and implications of the theorem beyond its original formulation, in a way that remains consistent with its spirit.
When the regulator is part of the system it regulates (i.e., is embedded or self-referential), as is the case with the human brain modeling itself, the mapping h : S → R becomes reflexive: the regulator’s own states are among the states of S, so the regulator must model not only the external system but itself as a subsystem.
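A minimal sketch of what reflexivity does to the mapping (the data structure and names are hypothetical, invented for this illustration): once the regulator’s own state is folded into S, h must take that state as part of its input:

```python
# Hypothetical sketch of an embedded regulator: its own state is now
# part of the system state it must map. All names are invented here.
from dataclasses import dataclass

@dataclass(frozen=True)
class SystemState:
    external: str   # state of the environment
    regulator: str  # the regulator's own state, now inside S

def h(state: SystemState) -> str:
    """Reflexive mapping: the output depends on the regulator's own
    current state as well as the environment, so an adequate model of
    S must include a model of the regulator itself."""
    if state.external == "too_hot" and state.regulator != "cool_on":
        return "cool_on"
    if state.external == "too_cold" and state.regulator != "heat_on":
        return "heat_on"
    return "idle"

print(h(SystemState(external="too_hot", regulator="idle")))  # cool_on
```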
This recursive modeling introduces self-reference and semantic closure, which—when the system is sufficiently expressive (as in symbolic thought)—leads directly to Gödelian incompleteness. That is, no such regulator can fully model or verify all truths about itself while remaining consistent.
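To make the Gödelian step explicit, here is the standard diagonal sketch, under the assumption (mine, for this illustration) that the regulator’s self-model can be treated as a consistent, recursively axiomatized theory T expressive enough for arithmetic:

```latex
% Assumption for this sketch: the self-model is a consistent,
% recursively axiomatized theory $T$ that can express arithmetic.
By the diagonal lemma, for $\varphi(x) := \neg\,\mathrm{Prov}_T(x)$
there is a sentence $G$ with
\[
  T \vdash \; G \leftrightarrow \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner).
\]
If $T$ is consistent, then $T \nvdash G$: a truth about the self-model
that the self-model cannot prove.
```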
So while the original theorem only requires that a good regulator be a model, I am exploring what happens when the regulator models itself, and how that logically leads to structural incompleteness, subjective illusions, and the emergence of unprovable constructs like qualia.
Yes, you’re absolutely right to point out that this raises an important issue — one that must be addressed, and yet cannot be resolved in the conventional sense. But this is not a weakness in the argument; in fact, it is precisely the point.
To model itself completely, the map would have to include a representation of itself, which would include a representation of that representation, and so on — collapsing into paradox or incompleteness.
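This regress can be made vivid with a toy program (purely illustrative): a naive attempt to construct a complete self-model that contains a copy of itself never bottoms out:

```python
# Toy regress: a "complete" self-model must contain a model of itself,
# which must contain a model of that model, and so on without end.
import sys

def build_self_model(depth: int = 0) -> dict:
    """Naive complete self-model: each level must embed a model of the
    modeling process itself, so construction never terminates."""
    return {
        "depth": depth,
        "model_of_self": build_self_model(depth + 1),  # infinite regress
    }

sys.setrecursionlimit(100)  # keep the demonstration short
try:
    build_self_model()
except RecursionError:
    print("RecursionError: the self-model never bottoms out")
```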
This isn’t just a practical limitation. It’s a structural impossibility.
So when we extend the Good Regulator Theorem to embedded regulators, like the brain modeling itself, we don’t just encounter a technical difficulty; we hit the formal boundary of self-representation. No system can fully model its own structure and remain both consistent and complete.
But you must ask yourself: would it be a worse regulator? Definitely not. The theorem demands a model, not a complete one; an incomplete self-model can still regulate effectively.