It seems to me that there are two different heuristics here and it is worth separating them.
But first I should explain why I think my initial reading of this post suggests heuristics that are problematic. The mere existence of CBT does not seem like strong evidence for psychology. It is no more evidence for modern mainstream psychology than Freudian psychoanalysis is evidence for Freudian psychology. As I understand it, CBT is gaining market share against other forms of talk therapy largely because of academic authority, roughly the same way the other therapies got established. I am a fan of CBT because its proponents claim to do experiments distinguishing its efficacy from that of other talk therapies, and failing to distinguish other talk therapies from talking to untrained people (which is still useful). But why do I need CBT for that? I can check that mainstream psychologists are more enthusiastic about experiments than Freudian ones without resorting to the particular case of CBT. Similarly, competing nutritional theories are successful in the marketplace, sold both by large organizations with advertising budgets (Weight Watchers vs Atkins) and by personal trainers working by word of mouth. But I agree that the example of CBT sheds light on psychology.
One heuristic is that experiments with everyday, comprehensible goals are more useful for evaluating a field than experiments on technical claims. Most obviously, it is easier to evaluate the value of the knowledge demonstrated by such experiments than the value of technical knowledge. Knowing that statins lower cholesterol is only useful if I trust the medical consensus on cholesterol, but knowing that they lower all-cause mortality is inherently valuable (though if the population of the experiment was chosen using cholesterol, this is also evidence that the doctors are correct about cholesterol). Similarly, the efficacy of CBT shows that psychologists know useful things, and not just trivia about what people do in weird situations. Moreover, I suspect that such experiments are more reliable than technical experiments. In particular, I suspect that they are less vulnerable to publication bias and data-mining. Certainly, I would have to learn about the technical measures involved to determine how vulnerable technical experiments are to experimenter bias.
The other heuristic is that selling a theory to someone else is a good sign. Unfortunately, this seems to me of limited value, because people buy a lot of nonsense: not just competing psychological and nutritional theories, but also horoscopes. How does the military differ from academic psychologists? I'm sure it hires a lot of them. It does much larger and longer experiments than academics. It does more comprehensive experiments, with better measures of success, analogous to the advantage of all-cause mortality over number of heart attacks (let alone cholesterol). It could eliminate publication bias because it knows all the studies it is running, but only if the people in charge understand this issue; and there is still some kind of bias in the kinds of studies they let me read. These are all useful advantages, but in the end it does not look very different to me from the academic psychology we're trying to evaluate. Similarly, industry consumes a lot of biological and chemical research, which is evidence that the research is, as a whole, real; but it fails to publish its attempts to replicate, so the information is indirect. On the other hand, these industries, like the military, use the knowledge internally, which is better evidence than commercial CBT and nutrition, which try to sell the knowledge directly and mainly demonstrate the value of academic credentials in selling knowledge.
Right, my examples were selected for a) presence of spinoffs, and b) evidence that the spinoffs were substantive. E.g. I excluded psychic hotlines and Freudian analysis.