Now that I’ve read it, I have to say I agree with you: it is not good evidence. At best, it’s an application of PCT to generate an interesting hypothesis or two.
Good. The experiment is, however, very good evidence for the hypothesis that R.S. Marken is a crank, and explains the quote from his farewell speech that didn’t make sense to me before:
Psychologists see no real problem with the current dogma. They are used to getting messy results that can be dealt with only by statistics. In fact, I have now detected a positive suspicion of quality results amongst psychologists. In my experiments I get relationships between variables that are predictable to within 1 percent accuracy. The response to this level of perfection has been that the results must be trivial! It was even suggested to me that I use procedures that would reduce the quality of the results, the implication being that noisier data would mean more.
The basic problem is that, generically, if your model uses more free parameters than data points, then it is mathematically trivial that you can get an exact fit to your data set, regardless of what the data are: thus you’ve provided exactly zero Bayesian evidence that your model fits this particular phenomenon.
(This is precisely the case in the paper you pointed me to. Marken asserts that his model successfully predicts the overall and relative error rates with high precision; but if these rates had been replaced with arbitrary numbers before being fed to him, he would have come up with different experimental values of the parameters, and claimed that his model exactly predicted the new error rates! This is known around here as an example of a fake explanation.)
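To see how little work the data are doing, here is a minimal sketch (a generic four-parameter stand-in, nothing to do with Marken’s actual model): whatever four numbers you hand it, it fits them exactly.

```python
# Minimal illustration: with at least as many free parameters as data points,
# an exact fit is guaranteed no matter what the data are, so the fit itself
# is zero evidence for the model.
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(4)                       # four "observations"

for trial in range(3):
    y = rng.uniform(0, 1, size=4)      # arbitrary made-up "error rates"
    coeffs = np.polyfit(x, y, deg=3)   # a cubic: four free parameters
    fitted = np.polyval(coeffs, x)
    print(np.allclose(fitted, y))      # True every time: a "perfect" fit
```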
The fact that Marken was repeatedly told this, interpreted it to mean that others were jealous of his precision, and continued to produce experimental “results” of the same sort along with bold claims of their predictive power, makes him a crank.
Anyhow...
The point I keep stressing is that, if cognitive-domain PCT is precise enough to do treatment with, then it can’t be bereft of experimental consequences; and no matter how appealing certain aspects of it might be intuitively, a lack of experimental support after 35 years looks pretty damning. If every cognitive circuit is so complicated that you can’t make an observable prediction (about an individual in varying circumstances, or different people in the same circumstances, etc) without assuming more parameters than data points… then PCT doesn’t actually teach you anything about cognition, any more than the physicists who ascribed fire and respiration to phlogiston actually learned anything from their theory.
You’ve pointed me to one experiment, which turned out to be the work of a crank; I’ve accordingly lowered the probability that PCT is valid in the cognitive domain, not because the existence of a crank proves anything against their hypothesis, but because that was the most salient experimental result that you could point to!
I’m still quite able to revise my probability estimate upwards if presented with a legitimate experimental result, but at the moment PCT is down in the “don’t waste your time and risk your rationality” bin of fringe theories.
> Good. The experiment is, however, very good evidence for the hypothesis that R.S. Marken is a crank, and explains the quote from his farewell speech that didn’t make sense to me before:
I can be a pretty cranky fellow but I think there might be better evidence of that than the model fitting effort you refer to. The “experiment” that you find to be poor evidence for PCT comes from a paper published in the journal Ergonomics that describes a control theory model that can be used as a framework for understanding the causes of error in skilled performance, such as writing prescriptions. The fit of the model to the error data in Table 1 is meant to show that such a control model can produce results that mimic some existing data on error rates (and without using more free parameters than data points; there are 4 free parameters and 4 data points; the fit of the model is, indeed, very good but not perfect).
But the point of the model fitting exercise was simply to show that the control model provides a plausible explanation of why errors in skilled performance might occur at particular (very low) rates. The model fitting exercise was not done to impress people with how well the control model fits the data relative to other models since, to my knowledge, there are no comparable models of error against which to compare the fit. As I said in the introduction to the paper, existing models of error (which are really just verbal descriptions of why error occurs) “tell us the factors that might lead to error, but they do not tell us why these factors produce an error only rarely.”
So if it’s the degree of fit to the data that you are looking for as evidence of the merits of PCT, then this paper is not necessarily a good reference for that. Actually, a good example of the kind of fit to data you can get with PCT can be gleaned from doing one of the on-line control demos at my Mind Readings site, particularly the Tracking Task. When you become skilled at doing this task you will find that the correlation between the PCT model (called “Model” in the graphic display at the end of each trial) and your behavior will be close to one. And this is achieved using a model with no free parameters at all; the parameter values are ones that have worked for many different individuals and are now simply constants in the model.
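For readers who want a feel for what such a model looks like, here is a rough sketch (an illustration only, not the actual code behind the demo): a control loop with fixed constants keeps a cursor near a reference value despite a drifting disturbance, compared with a second run in which a little added noise stands in for human motor variability.

```python
# A sketch of a fixed-parameter control loop of the kind used to model
# compensatory tracking: the loop adjusts its output so the perceived cursor
# position stays near the reference, countering an unpredictable disturbance.
# (Illustration only; the constants below are arbitrary, not fitted.)
import numpy as np

rng = np.random.default_rng(1)
dt, gain, reference = 0.01, 8.0, 0.0
disturbance = np.cumsum(rng.normal(0, 0.05, 6000))   # slowly drifting random disturbance

def run(noise_sd):
    """One tracking run; noise_sd crudely stands in for motor noise."""
    output, trace = 0.0, np.empty(len(disturbance))
    for i, d in enumerate(disturbance):
        cursor = output + d                     # the controlled (perceived) variable
        error = reference - cursor              # reference minus perception
        output += gain * error * dt + rng.normal(0, noise_sd)
        trace[i] = output
    return trace

model = run(0.0)          # deterministic model with fixed parameters
subject = run(0.02)       # noisy "participant" with the same organization
print(np.corrcoef(model, subject)[0, 1])   # typically very close to 1
```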
Oh, and if you are looking for examples of things PCT can do that other models can’t do, try the Mind Reading demo, where the computer uses a methodology based on PCT, called the Test for the Controlled Variable, to tell which of three avatars—all three of which are being moved by your mouse movements—is the one being moved intentionally.
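Here is a rough sketch of the logic behind the Test itself (again an illustration, not the demo’s code): all three avatars follow the mouse, each with its own disturbance added, and the simulated user keeps only one of them on target. The Test picks out the controlled avatar as the one whose position is least affected by its own disturbance.

```python
# Sketch of the Test for the Controlled Variable: a disturbance applied to a
# variable someone is controlling has little effect on it, because the
# controller's actions oppose it. (Illustration only.)
import numpy as np

rng = np.random.default_rng(2)
steps, gain, dt = 6000, 8.0, 0.01
disturbances = [np.cumsum(rng.normal(0, 0.05, steps)) for _ in range(3)]

controlled = 0                       # the avatar the simulated user is moving on purpose
mouse = 0.0
positions = np.zeros((3, steps))
for t in range(steps):
    for a in range(3):
        positions[a, t] = mouse + disturbances[a][t]   # every avatar follows the mouse
    error = 0.0 - positions[controlled, t]             # keep the chosen avatar at 0
    mouse += gain * error * dt                         # corrective mouse movement

for a in range(3):
    r = np.corrcoef(disturbances[a], positions[a])[0, 1]
    print(f"avatar {a}: corr with own disturbance = {r:.2f}, position sd = {positions[a].std():.2f}")
# The controlled avatar barely reflects its disturbance and stays far more stable.
```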
> The fact that Marken was repeatedly told this, interpreted it to mean that others were jealous of his precision, and continued to produce experimental “results” of the same sort along with bold claims of their predictive power, makes him a crank.
I don’t recall ever being told (by reviewers or other critics) that the goodness of fit of my (and my mentor Bill Powers’) PCT models to data was a result of having more free parameters than data points. And had I ever been told that, I would certainly not have thought it was because others were jealous of the precision of our results. And the main reason I have continued to produce experimental results—available in my books Mind Readings, More Mind Readings and Doing Research on Purpose—is not to make bold claims about the predictive power of the PCT model but to emphasize the point that PCT is a model of control, the process of consistently producing pre-selected results in a disturbance-prone world. The precision of PCT comes only from the fact that it recognizes that behavior is not a caused result of input or a cognitively planned output but a process of control of input. So if I’m a crank, it’s not because I imagine that my model of behavior fits the data better than other models; it’s because I think my concept of what behavior is is better than other concepts of what behavior is.
I believe Richard Kennaway, who is on this blog, can attest to the fact that, while I may not be the sharpest crayon in the box, I’m not really a crank; at least, no more of a crank than the person who is responsible for all this PCT stuff, the late (great) William T. Powers.
I hope all the formatting comes out ok on this; I can’t seem to find a way to preview it.

Best regards
Rick Marken
Actually, I left LessWrong about a year ago, as I judged it to have declined to a ghost town since the people most worth reading had mostly left. I’ve been reading it now and then since, and might be moved to being more active here if it seems worth it. I don’t think I have enough original content to post to be a part of its revival myself.
As Rick says, he can be pretty cranky, but is not a crank.

You know you’re replying to an 8-year-old thread, right?

I had no idea. I was just pointed to it recently from another list.
> The basic problem is that, generically, if your model uses more free parameters than data points, then it is mathematically trivial that you can get an exact fit to your data set, regardless of what the data are: thus you’ve provided exactly zero Bayesian evidence that your model fits this particular phenomenon.
I’m not sure I follow you. I didn’t get the impression that Marken’s model had more tunable parameters than there were data points under study, or that it actually was tunable in such a way as to create any desired result.
> If every cognitive circuit is so complicated that you can’t make an observable prediction (about an individual in varying circumstances, or different people in the same circumstances, etc) without assuming more parameters than data points...
I don’t follow how this is the case. If I establish that a person is controlling for, say, “having a social life”, and I know that one of the sub-controlled perceptions is “being on Twitter”, then I can predict that if I interfere with their twitter usage they’ll try to compensate in some way. I can also observe whether a person’s behavior matches their expressed priorities—i.e., akrasia—and attempt to directly identify the variables they’re controlling.
If at this point, you say that this is “obvious” and not supportive of PCT, then I must admit I’m still baffled as to what sort of result we should expect to be supportive of PCT.
For example, let’s consider various results that (ISTM) were anticipated to some extent by PCT. Dunning-Kruger says that people who aren’t good at something don’t know whether they’re doing it well. PCT said—many years earlier, AFAICT—that the ability to perceive a quality must inevitably precede the ability to consistently control that quality.
Which directly implies that “people who are good at something must have good perception of that thing”, and “people who are poor at perceiving something will have poor performance at it.”
That’s not quite D-K, of course, but it’s pretty good for a couple decades ahead of them. It also pretty directly implies that people who are the best at something are more likely to be aware of their errors than anyone else—a pretty observable phenomenon among high performers in almost any field.
> I’m still quite able to revise my probability estimate upwards if presented with a legitimate experimental result, but at the moment PCT is down in the “don’t waste your time and risk your rationality” bin of fringe theories.
This baffles me, since AFAICT you previously agreed that it appears valid for “motor” functions, as opposed to “cognitive” ones.
I consider this boundary to be essentially meaningless myself, btw, since I find it almost impossible to think without some kind of “motor” movement taking place, even if it’s just my eyes flitting around, but more often, my hands and voice as well, even if it’s under my breath.
It’s also not evolutionarily sane to assume some sort of hard distinction between “cognitive” and “motor” activity, since the former had to evolve from some form of the latter.
In any event, the nice thing about PCT is that it is the most falsifiable psychological model imaginable, since we will sooner or later get hard results from neurobiology to confirm its truth or falsehood at successively higher levels of abstraction. As has previously been pointed out here, neuroscience has already uncovered four or five of PCT’s expected 9-12 hardware-distinctive controller levels. (I don’t know how many of these were known about at the time of PCT’s formulation, alas.)
> I consider this boundary to be essentially meaningless myself, btw, since I find it almost impossible to think without some kind of “motor” movement taking place, even if it’s just my eyes flitting around, but more often, my hands and voice as well, even if it’s under my breath.

Or as Rodolfo Llinás puts it:

″… thinking may be nothing else but internalized movement.”

“So thinking is a premotor act.”
> I’m not sure I follow you. I didn’t get the impression that Marken’s model had more tunable parameters than there were data points under study, or that it actually was tunable in such a way as to create any desired result.
In the section “Quantitative Validation”, under Table 1, it says (italics mine):
The model was fit to the data in Table 1 by adjusting only the speed parameter, s, for each prescription component control system… The results in Table 1 show that the distribution of error types produced by the model corresponds almost exactly to the empirical distribution of these rates. The values of s that produced these results were 0.000684, 0.000669, 0.000731 and 0.000738 for the Drug, Dosage, Route and Other component writing control systems, respectively.
As you vary each component’s speed parameter within the model, the fraction of errors for that component varies all the way from 0 to 1, rather independently of the others. Thus for any empirical or made-up distribution of the four error types, Marken would have calculated values for his four parameters that caused the model to match the four data points; so despite his claims, the empirical data offer literally zero evidence in favor of his model. Ditto with his claim that his model predicts the overall error rate.
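A sketch of that inversion argument, using a hypothetical monotone mapping from a component’s speed parameter to its error fraction as a stand-in (the exact functional form in the paper doesn’t matter for the point; all that’s needed is that each rate sweeps from 0 to 1 as its own parameter varies):

```python
# The mapping below is a made-up stand-in, NOT the model from the paper.
# The point: if each error rate varies smoothly from 0 to 1 with its own
# parameter, then any made-up set of rates can be "fit" exactly by inverting
# the mapping, so an exact fit tells you nothing.
import numpy as np

def error_rate(s):
    """Hypothetical monotone stand-in mapping a speed parameter to an error rate."""
    return 1.0 / (1.0 + np.exp(-10000.0 * (s - 0.0007)))

def invert(target, lo=0.0, hi=0.0014):
    """Bisection: find s whose error rate matches the target."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if error_rate(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

made_up_rates = [0.37, 0.05, 0.91, 0.50]            # any numbers whatsoever
fitted_s = [invert(r) for r in made_up_rates]
print([round(error_rate(s), 4) for s in fitted_s])  # reproduces made_up_rates
```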
I’ll get to the rest of this later.