Suppose I am in the presence of a bunch of data going this way and that into and out of a bunch of black boxes. What kind of math or statistics might tell me or suggest to me that boxes 2, 7, and 32 are probably simple control systems with properties x, y, and z? Seems I should be looking for a function of the inputs that is “surprisingly” approximately constant, and if there’s a simple map from that function’s output to states of some subset of the outputs then we’ve got a very strong clue, or if we find that some output strongly negatively correlates with a seemingly unrelated time series somewhere else that might be a clue… Anyone have a link to a good paper on this?
Seems I should be looking for a function of the inputs that is “surprisingly” approximately constant
I think in most situations where you don’t have internal observations of the various actors, it’s more likely that outputs will be constant than a function of the inputs. That is, a control system adjusts the relationship between an input and an output, often by counteracting it completely—thus we would see the absence of a relationship that we would normally expect to see. (But if we don’t know what we would normally expect, then we have trouble.)
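As one concrete way to operationalize the question's "surprisingly constant function of the inputs" idea, here is a minimal sketch (my own construction, not from any paper; the function name and the shuffle-based null are arbitrary choices of "what we would normally expect"):

```python
import numpy as np

def surprising_constancy(signals, n_shuffles=200, seed=0):
    """Look for a linear combination of `signals` (shape [T, k]) that is
    "surprisingly" close to constant.

    The most-constant combination is the smallest-eigenvalue direction of
    the correlation matrix. To calibrate "surprising", each channel is
    shuffled in time independently (destroying cross-channel structure)
    and the smallest eigenvalue recomputed; if the real value sits far
    below that null distribution, something is pinning a combination of
    the channels down.
    """
    rng = np.random.default_rng(seed)

    def smallest_eig(x):
        return np.linalg.eigvalsh(np.corrcoef(x, rowvar=False))[0]

    observed = smallest_eig(signals)
    null = np.array([
        smallest_eig(np.column_stack([rng.permutation(c) for c in signals.T]))
        for _ in range(n_shuffles)
    ])
    # fraction of shuffled worlds that look at least this constant
    return observed, float(np.mean(null <= observed))
```

The calibration problem mentioned above shows up here too: the score is only as informative as the null you compare against.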
Anyone have a link to a good paper on this?
I’m leaning pretty heavily on a single professor/concept for this answer, but there’s a phrase, “Milton Friedman’s Thermostat,” perhaps best explained here (which also has a few links for going further down the trail):
If a house has a good thermostat, we should observe a strong negative correlation between the amount of oil burned in the furnace (M), and the outside temperature (V). But we should observe no correlation between the amount of oil burned in the furnace (M) and the inside temperature (P). And we should observe no correlation between the outside temperature (V) and the inside temperature (P).
An econometrician, observing the data, concludes that the amount of oil burned had no effect on the inside temperature. Neither did the outside temperature. The only effect of burning oil seemed to be that it reduced the outside temperature. An increase in M will cause a decline in V, and have no effect on P.
A second econometrician, observing the same data, concludes that causality runs in the opposite direction. The only effect of an increase in outside temperature is to reduce the amount of oil burned. An increase in V will cause a decline in M, and have no effect on P.
But both agree that M and V are irrelevant for P. They switch off the furnace, and stop wasting their money on oil.
They also give another example with a driver adjusting how much to press the gas pedal based on hills here, along with a few ideas on how to discover the underlying relationships.
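To see the thermostat pattern in numbers, here is a toy simulation (my own construction, not from the linked post; all the constants are made up): a leaky house with a proportional-integral thermostat, where oil burned is M, outside temperature is V, and inside temperature is P.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 20_000
target = 20.0                     # desired inside temperature
outside = np.empty(T)             # V: slowly wandering weather
w = 0.0
for t in range(T):
    w = 0.995 * w + 0.3 * rng.standard_normal()
    outside[t] = 5.0 + w

inside = np.empty(T)              # P
oil = np.empty(T)                 # M
temp, integ = target, 0.0
for t in range(T):
    error = target - temp
    integ += error                # integral action removes steady-state bias
    oil[t] = max(0.0, 0.3 * error + 0.05 * integ)
    # house physics: furnace heat in, leakage to outside, a little noise
    temp += oil[t] - 0.1 * (temp - outside[t]) + 0.05 * rng.standard_normal()
    inside[t] = temp

print(np.corrcoef(oil, outside)[0, 1])    # M vs V: strongly negative
print(np.corrcoef(oil, inside)[0, 1])     # M vs P: roughly zero
print(np.corrcoef(outside, inside)[0, 1]) # V vs P: roughly zero
```

Regressing P on M and V in that output reproduces the econometricians' mistake: the one thing keeping P up looks statistically irrelevant to it.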
I feel like it’s worth mentioning the general project of discovering causality (my review of Pearl, Eliezer’s treatment), but that seems like it’s going in the reverse direction. If a controller is deleting correlations from your sense data, that makes discovering causality harder, and it seems difficult to say “aha, causality is harder to discover than normal, therefore there are controllers!”, but that might actually be effective.
If a controller is deleting correlations from your sense data, that makes discovering causality harder, and it seems difficult to say “aha, causality is harder to discover than normal, therefore there are controllers!”, but that might actually be effective.
Yes, in the PCT field this is called the Test for the Controlled Variable. Push on a variable, and if it does not change, and it doesn’t appear to be nailed down, there’s probably a control system there.
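A toy version of the Test, run on the same kind of simulated house as above (my own illustration, not from the PCT literature): apply a known push, an extra heat leak as if a window were opened, and see whether the inside temperature gives way with the thermostat running versus disabled.

```python
import numpy as np

def run_house(thermostat_on, leak, T=4000, seed=1):
    """Same toy house as above; `leak` is an extra heat loss (an opened
    window) switched on halfway through the run."""
    rng = np.random.default_rng(seed)
    temp, integ = 20.0, 0.0
    trace = np.empty(T)
    for t in range(T):
        push = leak if t >= T // 2 else 0.0
        error = 20.0 - temp
        integ += error
        # with the thermostat off, the furnace just burns at a fixed rate
        oil = max(0.0, 0.3 * error + 0.05 * integ) if thermostat_on else 1.5
        temp += oil - 0.1 * (temp - 5.0) - push + 0.05 * rng.standard_normal()
        trace[t] = temp
    return trace

for on in (True, False):
    trace = run_house(thermostat_on=on, leak=0.5)
    shift = trace[-1000:].mean() - trace[:1000].mean()
    print(f"thermostat {'on' if on else 'off'}: temperature shift = {shift:+.2f}")
# on:  the push is absorbed, the shift stays near zero
# off: the same push moves the "controlled" variable by several degrees
```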
I have an unpublished paper relating the phenomenon to causal analysis à la Pearl, but it’s been turned down by two journals so far, and I’m not sure I can be bothered to rewrite it again.
arXiv?
I looked at arXiv, but there’s still a gateway process. It’s less onerous than passing referee scrutiny, but still involves getting someone else with sufficient reputation on arXiv to ok it. As far as I know, no-one in my university department or in the research institute I work at has ever published anything there. I have accounts on researchgate and academia.edu, so I could stick it there.
I have never had any issues putting things up on the arXiv (just have to get through their LaTeX process, which has some wrinkles). I think I have seen a draft of your paper, and I don’t see how arXiv would have an issue with it. Did arXiv reject your draft somehow?
I haven’t sent it there. I created an account on arXiv a while back, and as far as I recall there was some process requiring a submission from someone new to be endorsed by someone else. This, I think, although on rereading I see that it only says that they “may” post facto require endorsement of submissions by authors new to arXiv; it’s not a required part of the submission process. What happened the very first time you put something there?
(I know I’m not IlyaShpitser, but better my reply than no reply.) I have several papers on the arXiv, and the very first time I submitted one I remember it being automatically posted without needing endorsement (and searching my inbox confirms that; there’s no extra email there asking me to find an endorser). If you submit a not-obviously-cranky-or-offtopic preprint from a university email address I expect it to sail right through.

Well, I’ve just managed to put a paper up on arXiv (a different one that’s been in the file drawer for years), so that works.
Because they’re so small, I feel like their policies can be really inconsistent from circumstance to circumstance. I’ve got a couple papers on arXiv, but my third one has been mysteriously on hold for some months now for reasons that are entirely unclear to me.
(I know I’m not IlyaShpitser, but better my reply than no reply.) I have several papers on the arXiv, and the very first time I submitted one I remember it being automatically posted without needing endorsement
How long ago was this? I believe the endorsement-for-new-submitters requirement was added ~6 years ago.

My first submission was in 2012. I’m fairly sure I read about the potential endorsement-for-new-submitters condition at the time, too.

SSRN?
I have an unpublished paper relating the phenomenon to causal analysis à la Pearl, but it’s been turned down by two journals so far, and I’m not sure I can be bothered to rewrite it again.
I’d be interested in seeing it, if you don’t mind! (My email is my username at gmail, or you can contact me any of the normal ways.)
That is, a control system adjusts the relationship between an input and an output, often by counteracting it completely—thus we would see the absence of a relationship that we would normally expect to see.
The words “input” and “output” are not right here. A controller has two signals coming into it and one coming out of it. What you above called the “output” is actually one of the input signals, the perception. This is fundamental to understanding control systems.
The two signals going into the controller are the reference and the perception. The reference is the value to which the control system is trying to bring the perception. The signal coming out of the controller is the output, action, or behaviour of the controller. The action is emitted in order to bring the perception towards the reference. The controller is controlling the relationship between its two input signals, trying to make that relationship the identity. These words (reference, perception, and output) are somewhere between definitions and descriptions. They are the usual words used to name these signals in PCT, but this usage is an instance of their everyday meanings.
In concrete terms, a thermostat’s perception is (some measure of) the actual temperature. Its reference signal is the setting of the desired temperature on a dial. Its output or behaviour is the signal it sends to turn the heat source on and off. In a well-functioning control system, one observes that as the reference changes, the perception tracks it very closely, while the output signal has zero correlation with both of them. The purpose of the behaviour is to control the perception—hence the title of William Powers’ book, “Behavior: The Control of Perception”. All of the behaviour of living organisms is undertaken for a purpose: to bring some perception close to some reference.
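A minimal loop makes the signal names concrete (a sketch; the class and all the constants are mine, not from Powers): two signals enter the controller, the reference and the perception, and one leaves it, the output.

```python
import numpy as np

class Controller:
    def __init__(self, gain=50.0, slowing=0.1):
        self.gain, self.slowing, self.output = gain, slowing, 0.0

    def step(self, reference, perception):
        # the output moves so as to shrink the reference-perception gap;
        # the controller knows nothing about how the environment works
        error = reference - perception
        self.output += self.slowing * (self.gain * error - self.output)
        return self.output

rng = np.random.default_rng(2)
T = 20_000
reference = np.cumsum(0.005 * rng.standard_normal(T))  # slowly wandering goal
disturbance = 10.0 * np.sin(np.arange(T) / 300.0)      # big external push
ctrl = Controller()
perception = 0.0
P, O = np.empty(T), np.empty(T)
for t in range(T):
    out = ctrl.step(reference[t], perception)
    # environment: the perceived variable follows action + disturbance, with lag
    perception += 0.2 * (out + disturbance[t] - perception)
    P[t], O[t] = perception, out

print(np.corrcoef(reference, P)[0, 1])    # near 1: perception tracks reference
print(np.corrcoef(O, reference)[0, 1])    # small
print(np.corrcoef(O, disturbance)[0, 1])  # near -1: behaviour mirrors the push
```

In this toy the "zero correlation" claims come out as "small" rather than exactly zero; how small depends on how good the controller is and how large the disturbance is relative to the reference's movement.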
The words “input” and “output” are not right here.
Yeah, that paragraph was sloppy and the previous sentence didn’t add much, so I deleted it and reworded the sentence you quoted. I’m used to flipping my perspective around a system, and thus ‘output’ and ‘input’ are more like ‘left’ and ‘right’ to me than invariant relationships like ‘clockwise’ and ‘counterclockwise’—with the result that I’ll sometimes be looking at something from the opposite direction of someone else. “Left! No, house left!”
(In this particular case, the system output and the controller input are the same thing, and the system input is the disturbance that the controller counteracts, and I assumed you didn’t have access to the controller’s other input, the reference.)