I wouldn’t get too hung up on the word ‘regulator’. It’s used in a very loose way here, as is common in old cybernetics-flavoured papers.
Human slop (I’m referring to those old cybernetics papers rather than the present discussion) has no more to recommend it than AI slop. “Humans Who Are Not Concentrating Are Not General Intelligences”, and that applies not just to how they read but also how they write.
If you are thinking of something like ‘R must learn a strategy by trying out actions and observing their effect on Z’ then this is beyond the scope of this post! The Good Regulator Theorem(s) concern optimal behaviour, not how that behaviour is learned.
What I am thinking of (as always when this subject comes up) is control systems. A room thermostat actually regulates, not merely “regulates”, the temperature of a room, at whatever value the user has set, without modelling or learning anything. It, and all of control theory (including control systems that do model or adapt), fall outside the scope of the supposed Good Regulator Theorem. Hence my asking for a practical example of something that it does apply to.
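For concreteness, here is a minimal sketch of the kind of regulator meant here: a bang-bang thermostat that holds a setpoint by pure feedback, with no model of the room and no learning. (This is my own illustration, not from the thread; the toy room dynamics below are invented purely for the demo.)

```python
# A bang-bang thermostat: regulates temperature by comparing a sensor
# reading to a setpoint. No model of the room, no learning.

import random

def thermostat_step(temp, setpoint, hysteresis=0.5):
    """Return heater on/off, or None to leave it unchanged (dead band)."""
    if temp < setpoint - hysteresis:
        return True    # too cold: turn heater on
    if temp > setpoint + hysteresis:
        return False   # too warm: turn heater off
    return None        # inside the dead band: keep current state

# Toy room dynamics (assumed for the demo): heater input, heat loss, noise.
temp, heater = 15.0, False
for _ in range(200):
    decision = thermostat_step(temp, setpoint=25.0)
    if decision is not None:
        heater = decision
    temp += (1.0 if heater else 0.0) - 0.05 * (temp - 10.0) + random.gauss(0, 0.1)

print(f"final temperature: {temp:.1f}")  # hovers near 25 despite disturbances
```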
Regarding your request for a practical example:

Short Answer: It’s a toy model. I don’t think I can come up with a practical example which would address all of your issues.
Long Answer, which I think gets at what we disagree about:
I think we are approaching this from different angles. I am interested in the GRT from an agent foundations point of view, not because I want to make better thermostats. I’m sure that GRT is pretty useless for most practical applications of control theory! I read John Wentworth’s post where he suggested that the entropy-reduction problem may lead to embedded-agency problems. Turns out it doesn’t, but it would have been cool if it did! I wanted to tie up that loose end from John’s post.
Why do I care about entropy reduction at all?
- I’m interested in ‘optimization’, as it pertains to the agent-like structure problem, and optimization is closely related to entropy reduction, so this seemed like an interesting avenue to explore.
- Reducing entropy can be thought of as one ‘component’ of utility maximization, so it’s interesting from that point of view.
- Reducing entropy is often a necessary (but not sufficient) condition for achieving goals. A thermostat can achieve an average temperature of 25C by ensuring that the room temperature comes from a uniform distribution over all temperatures between −25C and 75C. But a better thermostat will ensure that the temperature is drawn from a narrower (lower-entropy) distribution around 25C (see the sketch below).
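To put rough numbers on that last point (my own illustrative sketch; the specific widths are assumptions, not from the post): both distributions below have mean 25C, but the narrow one has far lower differential entropy.

```python
# Differential entropy of the two thermostat outcomes in the example:
# a uniform distribution over [-25, 75] vs. a tight Gaussian around 25.

import math

# Uniform on [a, b]: H = ln(b - a)
h_uniform = math.log(75 - (-25))

# Gaussian with standard deviation sigma: H = 0.5 * ln(2 * pi * e * sigma^2)
sigma = 1.0
h_gauss = 0.5 * math.log(2 * math.pi * math.e * sigma**2)

print(f"uniform  H = {h_uniform:.2f} nats")  # ~4.61
print(f"gaussian H = {h_gauss:.2f} nats")    # ~1.42: lower entropy, better regulation
```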
I think we probably agree that the Good Regulator Theorem could have a better name (the ‘Good Entropy-Reducer Theorem’?). But unfortunately, the result is most commonly known using the name ‘Good Regulator Theorem’. It seems to me that 55 years after the original paper was published, it is too late to try to re-brand.
I decided to use that name (along with the word ‘regulator’) so that readers would know which theorem this post is about. To avoid confusion, I made sure to be clear (right in the first few paragraphs) about the specific way that I was using the word ‘regulator’. This seems like a fine compromise to me.