In my experience, academically trained clinicians tend to think of everything they are trying to do as an intervention. Like, thinking of teaching as an intervention, where you want to get the participants to certain behavioral standards, and you are ultimately responsible for the efficacy of this intervention (especially if you can also define the expected outcomes). Imagine if you teach a class. After the final exam, you notice that there are a bunch of questions that no one got right. This is nothing to be proud of because either your measurement instrument (i.e., exam) is very off, or your intervention protocol (i.e., lesson plan) is very off. Similarly, clinical scientists would not be very proud of saying, “Yeah, I taught a course to 50,000 people and only 10 or 15 of them got an A, and this shows how much of a genius I am.” They would go back and revise their lesson plan. What if the students are just not very motivated, you say? Well, then we need to figure out a way to improve motivation, or adjust the measurement for the fact that most students are not very motivated, or adjust the intervention protocol so motivation matters less, or maybe all of the above.
Based on the excerpt, what David Burns is suggesting is not very new stuff. I’d be very surprised if the episode was recorded recently because the claim that “nobody’s measuring anything” is simply not true—it’s called routine outcome monitoring. For me, that was one of the first things to learn in a graduate-level clinical psychology class. Of course, there is a lot of research about it, so there is nothing mysterious about pre- and post-session measurements. Sounds almost like pointing at a large language model and saying, “Look at this massive linguistic network! It is really good!”
This claim really bothers me: “And the 40,000 hours of patients I had, I don’t think more than eight or ten ever contacted me for tune-ups.” My friend recently went to visit a physical therapist for a muscle pain issue. Over the course of the treatment, her pain got worse, but the therapist kept telling her that it was totally normal. But… the pain was really bad, and she felt like the therapist didn’t really understand how bad it was. She finished all 10 sessions as planned and never reached out to the therapist again. Plus, the sessions were expensive.
Believe it or not, humans can overfit too. Focusing on “challenging patients” (how do you operationally define challenging anyway?), especially with the examples provided, sounds like a pretty bad idea. Responding to direct confrontation or insult with open acceptance is not even something I had to take a class to find out. These archetypal challenges are so saturated in the professional literature and dialogue that you kind of just pick them up at some point, far earlier than receiving any practical training. I’ve seen quite a few first- or second-year psychology undergrads quickly overfit to an elaborate but empty “you are right and your feelings are valid, so tell me more” response to any minor confrontation, or to someone expressing their feelings about anything. This kind of practice is not very informative if you want to have an empathetic yet helpful conversation about someone’s drinking problem when they have just recovered from heart surgery. And patients can get stuck or even deteriorate without ever verbally challenging the clinician.
As for the app: “Change over time is not ‘treatment response’,” but feel free to prove me wrong with an RCT.