If the school has some “big names” who teach about their area of specialty, consider taking their classes. One of the most important researchers in my field was a professor at my university while I was there, but at the time I had no idea what he did, so I missed out.
How would I go about finding that out? Sort by Karma? Perhaps compare a table of teachers with a citation database. Has someone created such a system that you know of? (I could also ask people but that may give answers too late!)
Searching scholar.google for a person’s name, and counting the number of citations, should work pretty well. Eminent researchers are also likely to be full professors, and to be a bit older. I hate to endorse the use of cheap heuristics like citation counts, but since you’re looking for fast answers, that may be the best route.
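For what it’s worth, here’s a minimal sketch of the “table of teachers vs. citation database” idea, assuming the unofficial `scholarly` package (Google Scholar has no official API, so this scrapes the site and may break or get rate-limited; the instructor names are hypothetical placeholders):

```python
# Rough sketch: rank instructors by Google Scholar citation count.
# Assumes the unofficial `scholarly` package (pip install scholarly);
# Scholar has no official API, so this may break or get rate-limited.
from scholarly import scholarly

instructors = ["Jane Doe", "John Smith"]  # hypothetical names from a course catalog

ranked = []
for name in instructors:
    hit = next(scholarly.search_author(name), None)  # first match; may be a namesake
    ranked.append((hit.get("citedby", 0) if hit else 0, name))

for citations, name in sorted(ranked, reverse=True):
    print(f"{citations:>8}  {name}")
```

Namesakes and missing Scholar profiles will add noise, but for a quick ranking of one department’s faculty it should be good enough.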
But note that being a good researcher does not automatically make someone a good teacher. I’d put less emphasis on how many citations they have and more on how good they are at actually teaching.
To find out how good someone is at teaching, you can use a resource like http://www.ratemyprofessors.com/ (if you live in the right country, which I don’t) or simply ask around.
I’ve gotten into the habit of pointing out, whenever other students at my university make reference to ratemyprofessors.com, that the selection bias on that site is huge. It’s not uncommon to see professors with dozens of extremely positive reviews, dozens more highly negative reviews, and very few, if any, neutral reviews. Negative reviews are especially overrepresented, because “grr, I feel like this professor graded too harshly” provides the strongest motivation for posting a disgruntled comment.
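A toy illustration of why this matters (the numbers are made up): a polarized rating distribution and a bland one can have exactly the same mean, so the site’s headline average tells you almost nothing on its own.

```python
# Toy illustration (fabricated numbers): a polarized rating distribution and a
# bland one can have the same mean, so the headline average hides the bias.
from statistics import mean
from collections import Counter

polarized = [5] * 25 + [1] * 25 + [3] * 2  # raves, rants, almost no neutral votes
bland     = [3] * 52                       # everyone roughly indifferent

print(mean(polarized), mean(bland))        # both means equal 3
print(Counter(polarized))                  # Counter({5: 25, 1: 25, 3: 2})
print(Counter(bland))                      # Counter({3: 52})
```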
I don’t know of any other place that does this, but the University of Washington maintains a course evaluation system (with the data made available to all students) that gathers quarterly feedback on the performance of professors and TAs, with at most ~5% of students failing to fill out the questionnaires.
CSUs and UCs do this too (or at least they did everywhere I’ve been); those evals might be less biased, but they’re also far less accessible, which more than cancels out the advantage.
Also, ratemyprofessors.com has separate ratings for “easiness,” “enthusiasm,” etc., so reading the actual reviews would be a bit more informative than just looking at the “highest rated” professors.
Unlike ratemyprofessors, which is available to everyone online, I don’t think the evaluations written by students (at least in California) are publicly available at all. I could be wrong, but I don’t know anyone who has ever seen one (other than the person being evaluated).
This paper is widely reported as saying that student evaluations anti-correlate with performance in later classes. I haven’t read the paper, but I suspect that oversimplifies the claim.
You might expect this result if popular teachers are easy and don’t push their students, but that’s definitely not what’s happening at this military academy with its uniform curriculum. Then again, if what’s popularly perceived as the dominant force in (American) evaluations has been eliminated, it’s not clear how much this tells us about other American schools.
A casual glance at the abstract leads me to read the paper’s conclusion more as “Teachers who have easy classes and teach to the test provide worse foundations and get better evaluations.” This seems like a pretty likely hypothesis that would explain some of the correlation. Some evidence could be gathered for it from ratemyprofs.
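If someone did want to pull that evidence from ratemyprofs, the test is just a correlation between per-professor “easiness” and “overall quality” scores. A minimal sketch, assuming you’ve already scraped the two columns (the numbers here are placeholders, not real data):

```python
# Sketch: check whether "easiness" predicts "overall quality" ratings.
# The paired scores below are placeholders, not real scraped data.
from statistics import correlation  # Pearson's r; needs Python 3.10+

easiness = [2.1, 3.4, 4.5, 1.8, 3.9, 4.2]  # hypothetical per-professor easiness
quality  = [3.0, 3.8, 4.6, 2.5, 4.1, 4.4]  # hypothetical overall-quality scores

r = correlation(easiness, quality)
print(f"r = {r:.2f}")  # a strongly positive r would support the easy-classes story
```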
I’ll read it further when I have time, to check things like how they did the linear regression.
ETA: that study looks really good. I’m curious how the data would be affected if students consciously rated easiness separately.