Huh, this is a very different perspective from the one I’ve had at the U of C (I’m going into my third year as an undergraduate).
I’ve found that people choose a lot based on instructors. Even in my majors, chem and physics, where there are almost no electives, people often defer taking a course for a year when a better instructor will be teaching it.
As for evaluations, I can’t tell you what most other students do. But when my friends and I use the evaluations to decide whether or not to take a class, most of us don’t pay much attention to the specific numbers an instructor got; we focus on whether people in the comments section say the instructor knows what they’re talking about, is interesting, and has fair tests.
As a sidenote, “U of X” isn’t a very specific identifier, and it appears you were trying to be specific. For example, the University of Calgary is the top result for “U of C” on Google, but you could just as easily be referring to the University of Chicago or the University of Connecticut, among other “U of C” institutions.
@falenas108, thanks for sharing your thoughts. I genuinely appreciate it.
I’ve found that people choose a lot based on instructors.
Yes, I agree. I think some of my post carried the connotation that they don’t, but that was unintended. I wrote:
Students routinely neglect instructor selection or use suboptimal criteria, relative to its importance. [...] Often, students decide their courses for the next semester or quarter after a meeting of a few hours with an advisor, based on time scheduling constraints.
So, to clarify, although I think it “often” happens that people select instructors randomly, it doesn’t always happen. My impression is that there’s heavy availability bias: if students can easily access evaluations, or know friends who have taken classes with a particular instructor, they’ll use that data. If not, they’ll select randomly. In general, my impression is that students are risk-averse and satisfice on this count: as long as an instructor seems “good enough,” they won’t generally try to look for a better one (this is particularly true for people in multi-course sequences with an instructor; we could call this a “status quo bias” or an “endowment effect,” depending on your perspective). More proactive approaches, such as sitting in on classes that the instructors are teaching in the previous term, seem to be relatively rare.
My other point is that even when students are making proactive choices, the criteria they use may be suboptimal. I am not aware of high-quality advice that would help students select instructors effectively. This is, I believe, in contrast with the vast (though possibly not very high-quality) literature available on how to select a college or a major. If intra-institutional variation in instructor quality is comparable to inter-institutional variation, then there are probably unrealized gains from selecting instructors according to better criteria.
As for evaluations, I can’t tell you what most other students do. But when my friends and I use the evaluations to decide whether or not to take a class, most of us don’t pay much attention to the specific numbers an instructor got; we focus on whether people in the comments section say the instructor knows what they’re talking about, is interesting, and has fair tests.
I’ve added an update to the post in response to this.
One thing I would note is that good teachers are rare and even then only modestly “good,” but bad teachers are common and very, very bad. This ties into Trevor’s post: once you’ve avoided the bad teachers, it’s probably not worth the much larger effort required to find an extra-bonus-good teacher who may or may not exist.
@linkhyrule5: Thanks for your thoughts, I really appreciate it.
I’d be grateful if you could provide some context regarding the reference class of educational environments for which you’ve made this statement (that the main problem is avoiding really bad teachers, rather than looking for really good ones).
I should probably have qualified my earlier post, as the only data I have to draw on is my own anecdotal evidence.
Nevertheless, with the qualifications that I went to a private high school and am currently at an Ivy college:
Sample size: since freshman year of high school, 26 “teachers” and 9 “professors.” Definitions: a teacher/professor is good-I if I was particularly interested in his/her course (and not just the subject), and good-R if I retained noticeably more from that course. A teacher/professor is bad-I if I was particularly uninterested in his/her course (and not just the subject), and bad-R if I retained particularly little from that course.
Of my teachers, 2 were good-I and 0 were good-R, with two “maybes” (not particularly good, but above average). 3 were both bad-I and bad-R; all of these are extreme cases in which I learned almost nothing and loathed the class.
Of my professors, 0 are good-I and I have no data yet on good-R (I’m a sophomore), but already 2 are bad-I, and I highly suspect bad-R as well.
In context, it’s almost certainly University of Chicago, given that this was the U of C mentioned in the post.
Yes, that’s a good point. I had forgotten about that mention by the time I got to the comments.
I assumed it was obvious here, because the OP said he was talking about the University of Chicago.
Thanks, I appreciate the clarification.