The room thermostat is plenty robust, but owes nothing to Robust Control. Or to put that differently, Robust Control means “control that works”.
But it still needs to have (correspond to) a model of regulating temperature.
The designer needs that, but the controller does not.
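To make that concrete, here is a minimal sketch (hypothetical code, all names and numbers made up) of what a room thermostat's controller amounts to. Nothing in it predicts or represents the room's thermal behaviour; it only compares the measurement against the setpoint:

```python
# A bang-bang room thermostat: this is the whole controller.
# There is no component here that models how the room gains
# or loses heat.

def thermostat(temperature, setpoint, heater_on, hysteresis=0.5):
    """Return the new heater state given the measured temperature."""
    if temperature < setpoint - hysteresis:
        return True    # too cold: switch the heater on
    if temperature > setpoint + hysteresis:
        return False   # too warm: switch it off
    return heater_on   # inside the dead band: leave it alone
```

The designer's knowledge of the room shows up only in the choice of setpoint and hysteresis, not as any structure inside the controller.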
You can’t use the same controller to control a balancing pole (or a plane’s flaps).
The designer considers the dynamics of the pole and designs a controller for it. The controller need not have any model. Here’s a simple example. The inverted pendulum controller there has a certain architecture with 4 parameters chosen to suitably place the poles of the transfer function. It stretches the concept of “model” to call those parameters a model of the inverted pendulum. For the walking robot example, I didn’t even do that calculation, just picked parameters from physical intuition. It did not take much trial and error to get it to work.
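For readers unfamiliar with pole placement, here is a sketch of the kind of calculation meant, on a simplified hypothetical linearization (theta'' = (g/l)·theta + u, two gains rather than four; the numbers are illustrative):

```python
# Pole placement for a simplified linearized inverted pendulum:
#   theta'' = (g/l)*theta + u
# State feedback u = -k1*theta - k2*theta' gives the closed loop
#   theta'' + k2*theta' + (k1 - g/l)*theta = 0,
# with characteristic polynomial s^2 + k2*s + (k1 - g/l).

def place_poles(g_over_l, p1, p2):
    """Gains (k1, k2) putting the closed-loop poles at p1 and p2."""
    # Desired polynomial: (s - p1)(s - p2) = s^2 - (p1 + p2)s + p1*p2
    k2 = -(p1 + p2)
    k1 = p1 * p2 + g_over_l
    return k1, k2

k1, k2 = place_poles(g_over_l=9.81, p1=-2.0, p2=-3.0)
# Both poles in the left half-plane: the upright position is stabilized.
```

The point: once the designer is done, all the controller holds is the two numbers k1 and k2. Calling those a "model" of the pendulum is the stretch at issue.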
Very interesting paper! Thank you for that!

I found one of the flaws he describes particularly striking, the empirically observed “bursting” phenomenon, whereby under adaptive control the plant may occasionally go into unstable oscillations for a short while. This happens because you cannot observe the full space of a plant’s behaviour while it is under control, only some subspace of it. That is what a control loop does to a plant: it removes some of its degrees of freedom. (For example, the room thermostat removes all degrees of freedom from the room’s temperature.) The adaptive part is trying to learn a model of the plant’s behaviour while the plant is being controlled. But if the number of degrees of freedom while under control is less than the number of parameters the adaptive part is estimating, then some degrees of freedom of the parameter space are unobservable. Those degrees of freedom cannot be learned, and are free to drift arbitrarily. Eventually they drift so far that control fails, the plant exhibits more degrees of freedom, and the adaptive part manages to learn its parameters better. Control is restored, until the next time.
That phenomenon happens for such fundamental reasons that I would expect there to even be theorems about it, but I’m not familiar enough with the field to know. The behaviours of the plant under control and not under control are completely different. It is difficult to learn a model of the latter while only having access to the former. One might even say “you can’t tell what the plant is doing by watching what it’s doing”.
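The unobservability at the core of this can be sketched in a few lines (a toy first-order plant with made-up numbers; this shows the rank deficiency, not the bursting dynamics themselves): under fixed feedback, the closed-loop data determine only one combination of the two parameters an adaptive estimator would be chasing.

```python
import numpy as np

# Plant: y[t+1] = a*y[t] + b*u[t].  Under fixed feedback u[t] = -k*y[t],
# the closed loop only ever reveals the combination a - b*k: the
# regressor columns for a and b are exactly proportional.

a, b, k = 0.9, 0.5, 0.4

y = np.empty(50)
y[0] = 1.0
for t in range(49):
    y[t + 1] = a * y[t] + b * (-k * y[t])

Y = y[:-1]                        # regressor column for parameter a
U = -k * y[:-1]                   # regressor column for parameter b
Phi = np.column_stack([Y, U])

rank = np.linalg.matrix_rank(Phi)  # 1, not 2: one parameter
                                   # direction is unobservable

# Any (a2, b2) with a2 - b2*k == a - b*k explains the data equally well:
a2, b2 = a + k, b + 1.0
assert np.allclose(a2 * Y + b2 * U, y[1:])
```

The unidentifiable direction is exactly the degree of freedom that can drift arbitrarily until control fails.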
I guess we talk past each other when we use the word “model” here somehow, but at least we seem to agree on what happens and why in these examples.

What do you think will happen as the number of degrees of freedom goes up significantly?
I don’t think any of the points at issue here will be affected. The plant and the controller may both be more complicated, but the issue of whether a given controller has a model or not is unchanged.
What, then, is a model? This is where people broaden the scope of the word far beyond its normal use, apparently in order to maintain the claim that a good controller must have a “model”. But changing the meaning of a word changes the meaning of every sentence that uses it. It does not change the things that the sentences are talking about, nor the claims made when using the original sense of the word.
That original sense, the ordinary sense of the word whenever anyone uses it, in the absence of any urge to find a model whether it exists or not, is illustrated by the following examples.
In a typical textbook on Model-Based Control, the block diagrams of the controllers it discusses include a block explicitly labelled as a model of the plant. This block calculates a function from inputs to outputs, in a way that is close to the input-output behaviour of the plant being modelled. In non-model-based control there is no such component.
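A sketch of what such a diagram amounts to in code (a hypothetical first-order plant; the names and numbers are made up): the component labelled "model of the plant" is explicit, and the controller uses it, here by inverting it to choose its action.

```python
# Model-based control, minimally: the "model" block imitates the
# plant's input-output behaviour, y[t+1] = a*y[t] + b*u[t], using the
# designer's estimates a_hat, b_hat.

def plant_model(y, u, a_hat=0.9, b_hat=0.5):
    """The block explicitly labelled 'model of the plant'."""
    return a_hat * y + b_hat * u

def model_based_controller(y, setpoint, a_hat=0.9, b_hat=0.5):
    """Choose u so the model predicts the output lands on the setpoint."""
    return (setpoint - a_hat * y) / b_hat
```

Contrast this with a thermostat: there, nothing plays the role of `plant_model`.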
That is a special case of a mathematical model: a set of variables and equations relating them that describe the behaviour of a physical system.
A physical model is the same thing done with physics instead of mathematics, such as a scaled-down model of an aircraft wing in a wind tunnel.
In biology, a model organism is an organism that is representative of some larger class, to which class experimental results may be expected to extrapolate. A practical model must also be easy to work with, hence the ubiquity of Drosophila, Arabidopsis, and mice.
Even a model on the catwalk is the same sort of thing. The model displays clothes as they are intended to appear when worn by potential customers. Protests some years back about a tendency for female models to look more like waifish teenage boys make the point: those models were not very good models, in the sense I’m talking about.
These are all examples of a single concept. Here are some anti-examples. A keycard is not a model of the set of doors that it opens. A password is not a model of the account it gives access to. A table is not a model of whatever might be placed on it. An eye is not a model of the photons it responds to.
Perhaps some concise formula can be given that draws the line in the right place, but I do not have one to hand. I suppose that categories and adjoint functors might be a part of it.
I found your old post Without models. You work from a very clear understanding of what a model is, a thermostat doesn’t have one, and with the definition of a model that you seem to use, I agree.
I think people may mean two things when they talk about a “model”:

1. An abstract representation of something that some entity can reason about. You mention both physical and software structures that are embedded in the larger system, are operated on (interpreted, evaluated, measured), and influence the larger system (to control it).

2. A part of the system that represents future states. Mathematically speaking, some factored-out part of the system’s state space correlates more with future states of the system than with its current states (over some time intervals of interest).
These overlap. Think of an explicit model component that is fed input from the environment (the controlled process) and outputs predicted future states. Its outputs will correlate highly with the future state of the environment.
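The overlap can be sketched numerically (a toy noisy first-order process, all values made up): a component fed the current state and input produces an output that correlates with the next state better than the current state itself does.

```python
import numpy as np

# Sense 2 made concrete: the plant is y[t+1] = a*y[t] + b*u[t] + noise,
# and the "model component" predicts the next state from what it is fed.

rng = np.random.default_rng(1)
a, b, n = 0.8, 0.5, 1000

y = np.zeros(n)
u = rng.normal(size=n)             # exciting input
noise = 0.3 * rng.normal(size=n)
for t in range(n - 1):
    y[t + 1] = a * y[t] + b * u[t] + noise[t]

# The component's output: a prediction of the next state.
y_hat = a * y[:-1] + b * u[:-1]

corr_model = np.corrcoef(y_hat, y[1:])[0, 1]     # high
corr_current = np.corrcoef(y[:-1], y[1:])[0, 1]  # lower
```

The component qualifies as a sense-2 model exactly because `corr_model` beats `corr_current`: its state carries more information about the future than the present does.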
More examples:
A model of the Earth, i.e., a globe, is a model in sense 1 but not in sense 2: it is an abstraction of the real Earth, and you can reason about it, but no part of it corresponds to the future state of the Earth.
A mathematical model in the head of an engineer is a model in sense 1 but not in sense 2, unless the engineer also applies it to imagined inputs from the real world. In that case, to the degree that the outputs correspond to actual future states, it is also a model in sense 2.
A feedforward circuit in a controller that calculates the effect of a disturbance on the output of a process is a model in sense 2 but not in sense 1 because it is not a separate entity that you can reason about, but its output still correlates with the future state of the process.
A beauty contest model might be a model in sense 2, if their look correlates with how other people will look in the future.
A thermostat involves no model in sense 1, but you will find that its state space has a component that corresponds to some future states (at least if it is not a simple P-controller).