But note that being a good researcher does not automatically translate to also being a good teacher. I’d put less emphasis on how many citations they have and more on how good they are at actually teaching.
To find out how good someone is at teaching, you can use a resource like http://www.ratemyprofessors.com/ (if you live in the right country, which I don’t) or simply ask around.
I’ve gotten into the habit of pointing out, whenever other students at my university make reference to ratemyprofessors.com, that the selection bias on that site is huge. It’s not uncommon to see professors with dozens of extremely positive reviews, dozens more highly negative reviews, and very few—if any—neutral reviews. Naturally, the negative reviews appear most frequently because “grr, I feel like this professor graded too harshly” provides the strongest motivation for posting a disgruntled comment.
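To make the selection-bias point concrete, here is a purely illustrative sketch: the satisfaction distribution and posting probabilities below are made-up assumptions, not real ratemyprofessors data, but they show how self-selection can hollow out the middle of the posted reviews.

```python
# Illustrative self-selection sketch (all numbers are assumptions, not data).
import random
from collections import Counter

random.seed(0)
NUM_STUDENTS = 10_000

all_opinions = []      # what every student actually thinks (1-5)
posted_reviews = []    # the subset that shows up on the site

for _ in range(NUM_STUDENTS):
    # Assume true satisfaction is roughly unimodal around "mostly fine".
    satisfaction = min(5, max(1, round(random.gauss(3.5, 0.9))))
    all_opinions.append(satisfaction)

    # Assumed self-selection rule: strong feelings (very good or very bad)
    # make a student much more likely to bother posting at all.
    post_probability = 0.05 + 0.30 * abs(satisfaction - 3)
    if random.random() < post_probability:
        posted_reviews.append(satisfaction)

print("true opinions: ", sorted(Counter(all_opinions).items()))
print("posted reviews:", sorted(Counter(posted_reviews).items()))
# Neutral (3-star) opinions are common among students but nearly vanish from
# the posted reviews, while the extremes are over-represented.
```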
I don’t know of any other place that does this, but the University of Washington maintains a course evaluation system (with data made available to all students) to gather quarterly feedback on the performance of professors and TAs, in such a way that at most ~5% of students fail to fill out the questionnaires.
CSUs and UCs do this (or at least the campuses I’ve been at do); while these evals might be less biased, they are more than proportionately less accessible.
Also, ratemyprofessors.com has separate ratings for “easiness”, “enthusiasm”, etc., so rather than just looking at the “highest rated” professors, reading the actual reviews would be a bit more informative.
How so?
Unlike ratemyprofessors, which is available to everyone online, I don’t think the evaluations written by students (at least in California) are publicly available at all. I could be wrong, but I don’t know anyone who has ever seen one (other than the person being evaluated).
This paper is widely reported as saying that student evaluations anti-correlate with performance in later classes. I haven’t read the paper, but I think that might oversimplify the claim.
You might expect this result if popular teachers are easy and don’t push their students, but that’s definitely not what’s happening at this military academy with its uniform curriculum. Then again, if what’s popularly perceived as the dominant force in (American) evaluations has been eliminated, it’s not clear whether this tells us much (about other American schools).
A casual glance at the abstract leads me to read the paper’s conclusion more as “Teachers who have easy classes and teach to the test provide worse foundations and get better evaluations.” This seems like a pretty likely hypothesis that would explain some of the correlation. Some evidence could be gathered for it from ratemyprofs.
I’ll read it further when I have time to check for things like linear regression.
ETA: that study looks really good. I am curious how the data would be affected if students consciously rated easiness separately.
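For the kind of linear-regression check mentioned a couple of comments up, here is a rough sketch on entirely synthetic data; the variable names, effect sizes, and the “teach to the test” mechanism are assumptions for illustration, not numbers or methods from the cited paper.

```python
# Rough OLS sketch of the eval-score vs. follow-on-performance question,
# using synthetic data (all effect sizes below are assumptions).
import numpy as np

rng = np.random.default_rng(0)
n = 1_000  # students

# Assumed mechanism: instructors who "teach to the test" get higher
# intro-course evaluations but leave weaker foundations for the next course.
teach_to_test = rng.normal(size=n)
eval_score = 3.5 + 0.5 * teach_to_test + rng.normal(scale=0.5, size=n)
followon_grade = 3.0 - 0.3 * teach_to_test + rng.normal(scale=0.5, size=n)

# Ordinary least squares: followon_grade ~ intercept + eval_score
X = np.column_stack([np.ones(n), eval_score])
coeffs, *_ = np.linalg.lstsq(X, followon_grade, rcond=None)
print("intercept, slope on eval score:", coeffs)

# Under these assumptions the slope comes out negative: better-evaluated
# intro instructors are associated with worse follow-on performance, which
# is the anti-correlation the paper is reported to find.
```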