A Generalization of the Good Regulator Theorem

This post was written during the agent foundations fellowship with Alex Altair funded by the LTFF. Thanks to Alex for reading and commenting on the draft.

Abstract: We prove a version of the Good Regulator Theorem for a regulator with imperfect knowledge of its environment aiming to minimize the entropy of an output.

The Original Good Regulator Theorem

The original Good Regulator Theorem (from the 1970 paper of Conant and Ashby) concerns a setup involving three random variables: a ‘System’ $S$, an ‘Output’ $Z$ and a ‘Regulator’ $R$. They are related by the following Bayes net:

$$S \to R, \qquad S \to Z, \qquad R \to Z$$

The regulator receives an input from the system and then takes an action which is represented by the random variable $R$. This action interacts with the system variable $S$ to produce the output $Z$. The function which maps the pair $(S, R)$ to $Z$ is assumed to be deterministic. The aim of the regulator is to minimize the entropy $H(Z)$ of the output. In order to do this, it must use a ‘policy’ which is characterized by a conditional probability distribution $P(R|S)$. When discussing the regulator policy, we often anthropomorphize the regulator and talk about it ‘choosing’ actions, but this is just used as an intuitive way of discussing the conditional probability distribution $P(R|S)$. For example, we might say ‘the regulator will always choose action $r$ when presented with system state $s$’ which means that $P(R=r|S=s)=1$.
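
To make this concrete, here is a minimal Python sketch of the setup. The three-state system, the output function $f$, and both policies are invented for illustration; the theorem itself is agnostic about these details.

```python
import numpy as np

# A minimal toy instance of the original setup (all details invented):
# three system states, three actions, and a deterministic output Z = f(S, R).
S_VALUES = [0, 1, 2]
R_VALUES = [0, 1, 2]
P_S = np.array([0.5, 0.3, 0.2])          # distribution over system states

def f(s, r):
    return (s + r) % 3                   # deterministic output function

def output_entropy(policy):
    """Entropy of Z given a policy, where policy[s][r] = P(R = r | S = s)."""
    p_z = np.zeros(3)
    for s in S_VALUES:
        for r in R_VALUES:
            p_z[f(s, r)] += P_S[s] * policy[s][r]
    p_z = p_z[p_z > 0]
    return -np.sum(p_z * np.log2(p_z))

# The deterministic policy r = (-s) mod 3 forces Z = 0, giving zero entropy;
# randomizing uniformly over actions leaves the output maximally uncertain.
det_policy = {s: {r: float(r == (-s) % 3) for r in R_VALUES} for s in S_VALUES}
uniform_policy = {s: {r: 1 / 3 for r in R_VALUES} for s in S_VALUES}
print(output_entropy(det_policy))        # 0.0
print(output_entropy(uniform_policy))    # ~1.585 bits (log2 3)
```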

The theorem shows that a regulator which achieves the lowest output entropy and is not ‘unnecessarily complex’ must be a deterministic function of the System, ie. the conditional probabilities $P(R=r|S=s)$ must all be 0 or 1. If you want more details, I have written an explainer of the theorem.

The setup used in the original Good Regulator Theorem is restrictive in two important ways.

Firstly, it assumes that the regulator has perfect knowledge of the system state. In other words, it assumes that the regulator can assign an action (or probabilistic policy) to every possible system state $s$. This means that the theorem does not cover cases where the regulator does not have full information about the system state. For example, if the system state was a bit string of length ten, but the regulator could only ‘see’ the first five bits of the string, this setup would not be covered by the original Good Regulator Theorem. For the setup to be covered by the original theorem, the regulator would have to be able to ‘see’ the full ten bits of the system and choose an action (or probabilistic mixture of actions) for each unique system string.
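
As a quick sketch of this bit-string example (the specific strings below are invented):

```python
# Hypothetical observation map for the bit-string example: the system state
# is a 10-bit string, but the regulator's input is only the first 5 bits.
def observe(system_state: str) -> str:
    return system_state[:5]

# Two distinct system states the regulator cannot tell apart:
print(observe("1011000000"))   # '10110'
print(observe("1011011111"))   # '10110'
```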

Secondly, the setup assumes that the output $Z$ is a deterministic function of $S$ and $R$, leaving no room for randomness anywhere in the whole Bayes net, except for the initial probability distribution over values of $S$.

Both of these restrictions can be lifted by considering an alternative setup:

$$X \to S, \qquad X \to R, \qquad S \to Z, \qquad R \to Z$$

In this setup, the variable $X$ represents the part of the environment that the regulator can observe, which may not be the full environment. This variable feeds into the system state $S$, but with some randomness, allowing for other, unobserved variables to affect the system state. Finally, the output $Z$ is determined by $S$ and $R$. Without loss of generality, we can assume that $Z$ depends deterministically on $S$ and $R$, since we could always extend the definition of $S$ to include some random noise. If we wish to explicitly represent some ‘noise’ variable $N$ feeding into $S$, we could draw the diagram like this:

$$X \to S, \qquad N \to S, \qquad X \to R, \qquad S \to Z, \qquad R \to Z$$

but we will keep the noise variable implicit by allowing for some randomness in the $X \to S$ dependency.
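
Here is a minimal sketch of sampling from this Bayes net, with invented binary variables; the point is only the information flow: the policy sees $X$ but not $S$, while $Z$ is a deterministic function of $S$ and $R$.

```python
import random

# One forward pass through the Bayes net X -> S, X -> R, (S, R) -> Z.
# The binary variables and the output function are invented for illustration.
def sample_output(policy):
    x = random.choice([0, 1])        # observed part of the environment
    noise = random.choice([0, 1])    # unobserved influence, kept implicit above
    s = (x + noise) % 2              # S depends on X, but with randomness
    r = policy(x)                    # the regulator sees only X, never S
    z = (s + r) % 2                  # Z = f(S, R) is deterministic
    return z

samples = [sample_output(lambda x: x) for _ in range(10)]
print(samples)
```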

Wentworth’s ‘Little Less Silly’ Good Regulator Theorem

Using the ‘imperfect knowledge’ framework described by the diagram above (where the regulator has knowledge of $X$ but not $S$), John Wentworth proved his ‘Little Less Silly’ version of the Good Regulator Theorem. Instead of considering entropy minimization as the target of optimization, he used expected utility maximization (where the utility function is defined over the outcomes $Z$). He showed that, in this setup, a regulator which maximized expected utility and contained no unnecessary randomness was a deterministic function of the conditional posterior distribution $P(S|X)$[1]. In other words, the regulator’s action would be a deterministic function of $X$, with the added condition that it would pick the same action for two $X$-values $x_1$ and $x_2$ if they lead to the same posterior distribution over system values (ie. $P(S|X=x_1)=P(S|X=x_2)$). The regulator would only pick different actions for two $X$-values if they lead to different posterior distributions. This allows you to say that a regulator which maximizes expected utility and is not unnecessarily complex is in some sense equivalent to a regulator which calculates (or ‘models’) the distribution $P(S|X)$.
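
In code, the shape of this result might be sketched as follows. The posteriors and the decision rule $g$ are invented; what matters is that the action factors through the posterior, so observations with equal posteriors receive equal actions.

```python
# Invented posteriors over a two-state system, indexed by observation.
posteriors = {
    "x1": (0.9, 0.1),   # P(S | X = x1)
    "x2": (0.9, 0.1),   # same posterior as x1 ...
    "x3": (0.2, 0.8),   # ... but a different one here
}

def g(posterior):
    """A deterministic decision rule defined on posteriors, not on X itself."""
    return 0 if posterior[0] >= 0.5 else 1

def regulator(x):
    return g(posteriors[x])   # the action factors through P(S | X = x)

assert regulator("x1") == regulator("x2")   # same posterior => same action
```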

Note that this theorem is behavioural; it concerns the relationship between the inputs and outputs of the regulator. It is not structural; ie. it does not tell us anything about what the internal structure of the regulator must be, except that the internal structure must be compatible with the input-output behaviour. I mention this because the Good Regulator Theorem is sometimes discussed in the context of the Agent-like Structure Problem which asks ‘does agent-like behaviour imply agent-like structure?’. While the theorem might help us to characterise agent-like behaviour, on its own it does not tell us anything about structure. This is true of the original Good Regulator Theorem, Wentworth’s ‘little less silly’ version, and the version presented in this post.

In his post, after proving this theorem, John Wentworth then makes the following note:

Important note: I am not sure whether this result holds for minimum entropy. It is a qualitatively different problem, and in some ways more interesting—it’s more like an embedded agency problem, since decisions for one input-value can influence the optimal choice given other $X$-values.

He then leaves this hanging before going on to prove the more complex result which people have called the ‘Gooder Regulator’ theorem.

Here’s the issue he is pointing at, as I understand it. With utility maximization, one can choose a policy which maximizes expected utility in each ‘branch’ of possible outcomes and this policy will maximize the overall expected utility. However, for entropy minimization, this is not the case. You can ‘mix’ several low entropy distributions and get an overall distribution which is high entropy. This means that an entropy minimization policy must take into account its actions in other ‘branches’ in a way that utility maximization does not need to. Having said this, I don’t think that this is an insurmountable obstacle. After all, the original Good Regulator Theorem concerns entropy minimization and this setup is only a small adjustment to the one used in the original paper.
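
Here is a quick numerical check of this ‘mixing’ phenomenon: two distributions which each have zero entropy, but whose 50/50 mixture has a full bit of entropy.

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

d1 = [1.0, 0.0]                      # a point distribution: zero entropy
d2 = [0.0, 1.0]                      # another point distribution: zero entropy
mix = 0.5 * np.array(d1) + 0.5 * np.array(d2)

print(entropy(d1), entropy(d2))      # 0.0 0.0
print(entropy(mix))                  # 1.0 -- a full bit of entropy
```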

As far as I can tell, no-one has written down a Good Regulator Theorem for minimizing entropy with imperfect knowledge of the system state[2]. I think it’s do-able without running into any embedded-agency-like problems. Here’s my attempt.

Theorem Statement

Consider a regulator operating in the ‘imperfect knowledge’ setup described above.

Call a regulator ‘good’ if it satisfies the two following criteria:

  • It achieves the minimum possible output entropy.

  • It contains no ‘unnecessary complexity’.

The term ‘unnecessary complexity’ is used by the authors of the original Good Regulator paper to mean unnecessary randomness. If a regulator randomizes between two actions which lead to the same result, then this is considered unnecessary randomness.

We will prove that a good regulator will be a deterministic function of its input $X$. Furthermore, if two $X$-values $x_1$ and $x_2$ lead to the same posterior distribution over $S$, ie. $P(S|X=x_1)=P(S|X=x_2)$, then a good regulator will choose the same action for those two inputs. In other words: a good regulator chooses a different $R$-value for two $X$-values only if the two $X$-values lead to different posterior distributions over $S$. We can say that such a regulator is a deterministic function of $P(S|X)$. In this sense, we can say that a good regulator is equivalent (in terms of input-output behaviour) to one which is modelling the distribution over system states $P(S|X)$.

Proof Sketch

This proof follows approximately the same structure as the original Good Regulator Theorem and is sketched out below.

Suppose we have a regulator which achieves the minimum possible entropy for some setup. We then prove the following: if, for a given $X$-value (call it $x$), the regulator takes two different actions with non-zero probability (say $r_1$ and $r_2$), then both of those $R$-values must lead to the same distribution over $Z$, conditional on $x$. A corollary of this (essentially the contrapositive) is that if the two $R$-values do not lead to the same distribution over $Z$, conditional on $x$, then we can construct a regulator which achieves a lower entropy. Then, we invoke the requirement that a good regulator must contain no unnecessary complexity to show that a good regulator must be a deterministic function of its input $X$. Finally, we show why a good regulator must be a deterministic function not only of $X$, but of the posterior distribution $P(S|X)$. If a regulator is a deterministic function of $X$ but not of the posterior distribution, then it is either ‘unnecessarily complex’ or fails to achieve minimal entropy.

This is analogous to the structure of the original Good Regulator Theorem. In the original paper, the main lemma states that if, for a given $S$-value, an entropy-minimizing regulator takes two different actions with non-zero probability, then both actions (along with that $S$-value) lead to the same $Z$-value. The difference here is that $Z$ is a probabilistic, not a deterministic, function of $X$ (the regulator input) and $R$ (the regulator output).

The original Good Regulator authors make the assumption that the entropy-minimizing distribution of $Z$ is unique to simplify the proof. As far as I can tell, this assumption is unnecessary and the proof presented below does not require it. The proof below shows something like: ‘If $R$ is not a deterministic function of $X$, then it does not achieve minimal entropy. Furthermore, if $R$ is not also a deterministic function of $P(S|X)$, then it is unnecessarily complex.’ This statement is true even if there are multiple possible entropy-minimizing policies or $Z$-distributions.

Proof

Suppose we have a regulator as described above which takes two different actions $r_1$ and $r_2$, each with non-zero probability, when presented with the $X$-value $x$ (for simplicity, assume these are the only two actions taken with non-zero probability for this input; the argument generalizes straightforwardly). We will not concern ourselves with exactly what the regulator does when presented with a different $X$-value but we will assume it has a well-defined policy. We can write the overall probability distribution over outputs as:

$$P(Z=z)=\sum_{x'}P(X=x')\,P(Z=z|X=x')$$

Based on the Bayes Net of the setup, we can break down the $P(z|x')$ terms to explicitly represent all variable dependencies:

$$P(Z=z)=\sum_{x'}P(x')\sum_{r}P(r|x')\,P(z|r,x')$$

where $P(z|r,x')$ is obtained by summing over $S$-values:

$$P(z|r_1,x)=\sum_{s}P(z|s,r_1)\,P(s|x)$$

with a similar expression for $P(z|r_2,x)$.

To reduce the size of our equations let us write $\Gamma_1(z)\equiv P(z|r_1,x)$ and $\Gamma_2(z)\equiv P(z|r_2,x)$. We can then write the overall distribution as:

$$P(Z=z)=P(x)\big[P(r_1|x)\,\Gamma_1(z)+P(r_2|x)\,\Gamma_2(z)\big]+\sum_{x'\neq x}P(x')\sum_{r}P(r|x')\,P(z|r,x')$$

If we write $P(r_1|x)$ as $p$ and $P(r_2|x)$ as $(1-p)$, we can now re-group this expression as follows:

$$P(Z=z)=p\left[P(x)\,\Gamma_1(z)+\sum_{x'\neq x}P(x')\,P(z|x')\right]+(1-p)\left[P(x)\,\Gamma_2(z)+\sum_{x'\neq x}P(x')\,P(z|x')\right]$$

This equation is cumbersome but has a fairly straightforward interpretation. The distribution over $Z$ can be viewed as a mixture of two probability distributions. With probability $p$, $Z$ is drawn from a distribution which itself is a mixture of $\Gamma_1(z)$ and the contributions from all the other $X$-values. With probability $(1-p)$, $Z$ is drawn from a distribution which is a mixture of $\Gamma_2(z)$ and the same contributions from the other $X$-values. To simplify equations, let us write:

$$D_1(z)=P(x)\,\Gamma_1(z)+\sum_{x'\neq x}P(x')\,P(z|x')$$

$$D_2(z)=P(x)\,\Gamma_2(z)+\sum_{x'\neq x}P(x')\,P(z|x')$$

This allows us to write the overall distribution as a mixture in a much clearer way:

$$P(Z=z)=p\,D_1(z)+(1-p)\,D_2(z)$$

Now, we invoke the concavity of entropy, which tells us that the overall entropy of $Z$ must be greater than or equal to the weighted sum of the entropies of $D_1$ and $D_2$:

$$H\big(p\,D_1+(1-p)\,D_2\big)\geq p\,H(D_1)+(1-p)\,H(D_2),$$

with equality only if $p$ is 0 or 1, or $D_1=D_2$. Recall that $p$ is the probability that the regulator chooses action $r_1$ when presented with observation $x$, and that our aim is to find the regulator policy which minimizes the output entropy of $Z$. With this in mind, we can consider a few different possibilities (a numerical illustration of these cases follows the list):

  • The entropy of $D_1$ is less than the entropy of $D_2$. In this case, the output entropy is only minimized when $p=1$. In other words, this means that the entropy is minimized when the regulator policy picks action $r_1$ with probability 1 when presented with input $x$.

  • The entropy of $D_2$ is less than the entropy of $D_1$. In this case, the output entropy is minimized when the regulator picks action $r_2$ with probability 1 ($p=0$) when presented with $x$.

  • The entropy of $D_1$ is equal to the entropy of $D_2$, but the distributions are different. In this case, the output entropy is minimized by a deterministic policy which picks $r_1$ with probability 1 when presented with $x$, or by a policy which picks $r_2$ with probability 1 when presented with $x$. Either policy will achieve the minimum possible entropy, but any probabilistic mixture of the two will have a higher entropy.

  • The two distributions $D_1$ and $D_2$ are the same. In this case, any choice of $p$ will lead to the same output entropy.
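
Here is the promised numerical illustration of the case analysis, with invented stand-ins for $D_1$ and $D_2$ satisfying $H(D_1)<H(D_2)$ (the first case above):

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

D1 = np.array([0.8, 0.1, 0.1])       # invented stand-in with lower entropy
D2 = np.array([0.4, 0.3, 0.3])       # invented stand-in with higher entropy

for p in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(p, entropy(p * D1 + (1 - p) * D2))
# The printed entropies decrease as p -> 1: since H(D1) < H(D2), the output
# entropy is minimized only by the deterministic choice p = 1 (the first case).
```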

By considering the four exhaustive cases above, we have therefore proved that a regulator can only achieve minimum entropy by randomizing between actions if both of those actions lead to the same distribution over $Z$. Furthermore, we can enforce the condition that the regulator policy should contain no ‘unnecessary’ complexity or randomness. Then, even in the case where $D_1$ and $D_2$ are the same, the regulator would choose a deterministic policy.

Thus, we have proved that any regulator which achieves minimum entropy and contains no unnecessary randomness will be a deterministic function of its input $X$. This is analogous to the original Good Regulator Theorem.

In what sense is this a ‘model’?

The authors of the original Good Regulator Theorem claim that it shows that a regulator must be ‘modelling’ its environment. In what sense is this true for the theorem we have just proved?

Firstly, there is the (slightly trivial) sense in which the regulator must be taking in information from its environment and using it to choose an action. We could think of this as the regulator modelling the part of the environment represented by the variable $X$. But this is not very satisfying, since the deterministic function which the ‘good’ regulator applies may be very trivial (for example, in some setups a ‘good’ regulator might just be a function which maps all $X$-values to the same regulator output).

Then, in closer analogy to John Wentworth’s work, we could say that a good regulator is equivalent to a regulator which is a deterministic function of the posterior probability distribution $P(S|X)$. In other words, a good regulator is equivalent to one which, upon observing $x$, calculates the distribution $P(S|X=x)$, throws away the information about $x$ itself, and makes its decision purely based on that distribution. We will unpack this idea a little bit more in the next section.

A Deterministic Function of the Posterior Distribution

We have already shown that a good regulator is a deterministic function of its input $X$. Furthermore, the posterior distribution $P(S|X)$ is also a deterministic function[3] of $X$ (though $S$ itself is not necessarily a deterministic function of $X$). Now, suppose that the regulator was a deterministic function of $X$, but not equivalent to a deterministic function of $P(S|X)$. The only way this could be the case would be if there were some $X$-values which lead to the same posterior distribution (eg. $P(S|X=x_1)=P(S|X=x_2)$) but for which the regulator chooses different $R$-values. If choosing different $R$-values for two $X$-values with the same posterior $S$-distribution leads to different output $Z$-distributions, then this regulator cannot be optimal, due to a concavity-of-entropy argument similar to the one in the section above. If choosing different $R$-values for two $X$-values with the same posterior $S$-distribution does not lead to different output $Z$-distributions, then we can argue that choosing different $R$-values constitutes ‘unnecessary complexity’ and require that a ‘good’ regulator chooses one $R$-value consistently upon observing either $x_1$ or $x_2$. This regulator would therefore be equivalent to one which received $x$ as an input, calculated/​modelled the distribution $P(S|X=x)$, and then made its decision purely based on this distribution, instead of the $x$-value it received.
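
Putting the pieces together (see footnote 3 for the two type signatures), a good regulator is behaviourally equivalent to a composition $R=g(f(x))$. Here is a small sketch with an invented joint distribution:

```python
from fractions import Fraction

# Invented joint distribution P(S, X) over two system states, three observations.
P_joint = {
    ("s1", "x1"): Fraction(9, 30), ("s2", "x1"): Fraction(1, 30),
    ("s1", "x2"): Fraction(9, 30), ("s2", "x2"): Fraction(1, 30),
    ("s1", "x3"): Fraction(2, 30), ("s2", "x3"): Fraction(8, 30),
}

def f(x):
    """x -> the posterior P(S | X = x), as a hashable, comparable tuple."""
    p_x = sum(p for (s, xv), p in P_joint.items() if xv == x)
    return tuple(sorted((s, p / p_x) for (s, xv), p in P_joint.items() if xv == x))

def g(posterior):
    """posterior -> action; any deterministic rule on posteriors will do."""
    return min(posterior)[0]   # an arbitrary invented rule

def good_regulator(x):
    return g(f(x))             # deterministic in x, and constant across
                               # x-values that share a posterior

assert f("x1") == f("x2") and good_regulator("x1") == good_regulator("x2")
```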

  1. ^

    In this post we will use the phrase ‘posterior distribution’ to refer to the conditional probability distribution $P(S|X)$. This follows John Wentworth’s use of this phrase in his Good Regulator Theorem post. The word ‘posterior’ is usually used in the context of Bayesian updating and has the connotation that the regulator is ‘updating its belief’ about the system distribution, given the observation $X$. While it is certainly possible that the regulator has a ‘belief’ which it is ‘updating’, this is not necessary for the theorem to hold. You can safely think of the ‘posterior’ distribution as just a conditional distribution without missing anything important.

  2. ^

    John Wentworth correctly points out that the proof of the original Good Regulator Theorem (with entropy minimization) holds for an imperfect knowledge setup like this, provided that $S$ is a deterministic function of $X$ (ie. $X$ contains all information needed to perfectly reconstruct $S$). This is true, but does not cover the more interesting case, covered here, where $S$ cannot be fully reconstructed from $X$ alone. From ‘Fixing the Good Regulator Theorem’:

    The whole proof actually works just fine with these two assumptions, and I think this is what Conant & Ashby originally intended. The end result is that the regulator output $R$ must be a deterministic function of $S$, even if the regulator only takes $X$ as input, not $S$ itself (assuming $S$ is a deterministic function of $X$, i.e. the regulator has enough information to perfectly reconstruct $S$).

  3. ^

    There are a couple of functions with different type signatures discussed in this paragraph so I’ll briefly clarify them here. When I say that the distribution $P(S|X)$ is a function of $X$, what I mean is that there is a function $f$ whose domain contains all possible $X$-values and whose output $f(x)$ is the distribution $P(S|X=x)$. Later, when we talk about $R$ being a deterministic function of the distribution $P(S|X)$, this means that $R=g(P(S|X=x))$, where the domain of $g$ includes the distributions $P(S|X=x)$ for all $x$ and the range of $g$ is the set of possible $R$-values (or ‘actions’).