Thank you for linking to the original paper. I read it, and I no longer think this was a completely valid experiment. Or, at least, it is not as strong as it seems to be.
The questionnaire that they gave subjects read:
Did he dwell upon the obvious?
Did he seem interested in his subject?
Did he use enough examples to clarify his material?
Did he present his material in a well-organized form?
Did he stimulate your thinking?
Did he put his material across in an interesting way?
Have you read any of this speaker’s publications?
Few of these seem completely incompatible with illogical nonsense. I’m sure he sounded interested in his subject, and for all I know he used lots of examples and put things across in an interesting way (something like “educating physicians is much like hunting tigers, because of the part with the stethoscopes” is interesting and provides examples, but is still total nonsense).
Another section asked for written comments, and they received comments like “Enjoyed listening”, “Has warm manner”, “Good flow, seems enthusiastic”, all of which I’m sure were true, as well as a few like “Too intellectual a presentation”, “left out relevant examples”, and my favorite, “He misses the last few phrases which I believe would have tied together his ideas for me.” These last ones seem to me like face-saving ways of saying “I didn’t actually understand the slightest bit of what he was talking about”.
If you have a really nice, warm presenter, probably following a lot of stuffy old guys who have bored everyone in the conference to death, and you hand out evaluations that don’t really ask the questions you’re interested in but leave enough waffle room for respondents to praise a presentation they don’t understand, I’m not at all surprised that they would praise it.
Why, oh why, couldn’t the experimenters have included a simple “Did you or did you not understand what this man was talking about?” It almost seems suspicious, like they were worried they wouldn’t get as interesting a result.
...or maybe it’s just normal incompetence. I have this same problem with course evaluations in my own university: they consist entirely of closed questions on peripheral issues that force me to end up giving very positive evaluations to awful classes. For example, one might ask me to answer, on a scale from 1 to 5, lots of questions like “Was the professor always available to help students?” and “Was the work load reasonable?” and other things I am forced to admit were satisfactory, but nothing like “Did the professor drone on in a monotone about his own research interests for two hours a day and never actually get around to covering the course material?”
Well, I think the similarity to actual IRL course evaluations is probably intentional: they presumably modeled the questions on either a particular course-evaluation questionnaire or a mixture of many. And this shows that course evaluations are pretty bad at picking out professors who cannot explain to people what they are talking about. Given how useful a little impenetrability can be in many fields of research, one wonders how intentional this might be...
Agreed. “Did he use enough examples to clarify his material?” and “Did he present his material in a well-organized form?” are the only relevant questions.
Why, oh why, couldn’t the experimenters have included a simple “Did you or did you not understand what this man was talking about?”
Yes, that would have been better.
I have this same problem with course evaluations in my own university.
My favorites are the course evaluations that the instructor picks up at the same time as the final exam. Before the final exam in a microeconomics course I was taking, I drew a graph on the board treating grades and evaluations as goods, and showed that there were gains from trade. (I still have no way of knowing whether that had an effect or not.)
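To spell out the logic of that board graph (with made-up numbers, since I obviously don’t know what was actually drawn): each side can “give” some of the good it controls, and because each values the other side’s good more than it costs to hand over its own, both come out ahead. A minimal sketch, assuming those illustrative weights:

```python
# Illustrative gains-from-trade sketch with made-up utility weights, not the
# actual graph from the comment above. The professor controls the grade,
# the students control the evaluation; each side values the good the other
# controls far more than it costs to give up some of its own.

def student_gain(grade_bump, eval_bump):
    # big benefit from a better grade, small cost of padding the evaluation
    return 2.0 * grade_bump - 0.5 * eval_bump

def professor_gain(grade_bump, eval_bump):
    # big benefit from a kinder evaluation, small cost of inflating grades
    return 2.0 * eval_bump - 0.5 * grade_bump

# If both sides give the same amount, everyone ends up strictly better off
# than at the no-trade point (0, 0): that is the benefit to trade.
bump = 0.4
print("student gain:  ", student_gain(bump, bump))    # 0.6 > 0
print("professor gain:", professor_gain(bump, bump))  # 0.6 > 0
```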
The problem is, that’s a one-shot prisoner’s dilemma, and a microecon professor ranks just below a literal sociopath in terms of how likely he is to defect on the one-shot prisoner’s dilemma.
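For anyone who wants the defection logic spelled out, here is a minimal sketch with made-up payoff numbers (nothing here comes from the actual course): whatever the other side does, reneging pays more, so in a single play both sides renege.

```python
# One-shot prisoner's dilemma with illustrative (made-up) payoffs.
# "C" = cooperate (honor the grades-for-evaluations deal),
# "D" = defect (renege after collecting what you wanted).
PAYOFFS = {
    # (professor_move, student_move): (professor_payoff, student_payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(player, opponent_move):
    """Return the move (C or D) that maximizes this player's payoff."""
    def payoff(my_move):
        profile = (my_move, opponent_move) if player == 0 else (opponent_move, my_move)
        return PAYOFFS[profile][player]
    return max(("C", "D"), key=payoff)

# Defecting is the best response to either move by the other side, so with
# no repeat play (one final exam, one evaluation form) both defect and land
# on (1, 1) instead of the cooperative (3, 3).
for opponent_move in ("C", "D"):
    print("professor vs", opponent_move, "->", best_response(0, opponent_move))
    print("student   vs", opponent_move, "->", best_response(1, opponent_move))
```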
There are reputation effects!