Students & Professors Agree on Student Evaluations: A Complete Waste of Time

Students and professors have something in common when it comes to student evaluations of teaching: both view them as a complete waste of time.
Students treat evaluations as a mere formality, one unlikely to impact the behavior of any given educator. And with classroom days waning and a final exam on the immediate horizon, students may not be particularly inclined to provide meaningful critical analysis of professorial pedagogy on a multiple-choice form that, itself, looks suspiciously like a final exam.
For their part, professors regard evaluations as deeply biased and subjective in nature, a forum for students to air their grievances over the difficulty of a course, the general appeal of the subject matter, or the professor’s poor choice of wardrobe, all with the courage and candor that come of anonymity.
In the middle is a mountain of paper destined for the recycling bin, dotted with arbitrarily selected ratings, and scrawled with handwritten constructive criticism that proves just how valuable spell-check is.
Student evaluations should be useful. They should facilitate a feedback loop between student and educator that ultimately improves the experience for both. They should help university administrators identify the professors and courses of study that best reflect the quality, values, and standards of the institution, and intervene when those standards are not met.
They should do all these things, but they don’t.
A language professor and educational columnist for Slate, Rebecca Schuman, voices the frustration of her fellow professors, observing that while student assessments are generally absurd and biased, the greater problem is that they are also basically useless.
She notes that during a decade of receiving largely positive student assessments, she gained nothing of concrete value and no meaningful insight into her own pedagogical approach.
Instead, Schuman found, both in her own evaluations and in those given to other professors, that student assessments revolved entirely around how entertaining professors were and how directly they taught to the test.
Ergo, if a professor simply spent a semester reading off an answer key with the flair of Robin Williams in Dead Poets Society, his evaluations would be decidedly positive, though this would say little about the effectiveness of his approach to instruction.
The reverse is also true, says Schuman. Negative evaluations often follow professors who challenge their students, who assign burdensome workloads, or who simply preside over subjects which themselves are more challenging and burdensome. A professor of astrophysics isn’t likely to dumb it down just because of a few scathing reviews.
As Schuman points out, the only educators who are really impacted by their evaluations are adjuncts, who may rely on this feedback to earn contract renewals. This suggests that there is a rigid hierarchy where those at the bottom are the most vulnerable to negative evaluations. The same criticism that loses an adjunct his or her paying job may bounce off the tenured professor without making a dent.
Before the Firing Squad
This is probably what Iowa State Senator Mark Chelgren (R) was thinking when he introduced a bill in January that would require every university to annually dismiss those professors who perform below a minimum acceptable standard on student evaluations. The termination would apply regardless of tenure status.
Iowa professors on the lower end of the popularity spectrum need not worry just yet; the bill is not exactly speeding toward enactment, and it is a decidedly extreme, not altogether realistic approach to the problem. Still, it highlights an important point.
Students are paying a lot to go to college, which, in the grand scheme of things, makes them consumers. As consumers, says the Senator, students have a right to be assured that they are getting the best product for their money. A professor's ability to educate, enlighten, inform, and, yes, entertain to an extent all plays a part in fulfilling that right.
In addition to helping professors improve their practice and providing universities with a quantifiable way of rating said practice, Senator Chelgren says evaluations should be designed to help students shape the product for which they are footing the bill.
The Senator’s point is well taken, but his target may be a little off center. Like consumers in any other market, students aren’t always the best judge of what’s good for them. Easy professors rate well and tough ones rate poorly, regardless of what students are learning, which doesn’t make for a useful assessment of what is most educationally nutritious. It’s a lot easier to eat a bag of Doritos than a plate of spinach, but not necessarily better.
Power in Anonymity
Of course, self-interest in evaluations is to be expected. But there is also a more troubling pattern of evaluative miscarriage evident in student assessments. After a semester of pent-up resentment, students have historically used evaluations to unleash any number of prejudices, biases, and personal attacks. Criticism runs the gamut from hostility toward daily fashion decisions and offending facial hair to objections over a professor’s race, handicaps, or gender.
As to the latter point, an article in the Washington Post finds that women do historically score lower on their student evaluations, particularly in larger lecture hall settings. The article suggests that this stems from a student assumption of “role incongruity,” in which women must overcome greater perceptual hurdles in order to be seen as having professorial authority.
These biases represent a significant problem, suggests the Washington Post. Whether or not we collectively place meaningful stock in evaluations, the negative bias toward women may discourage many individual educators from seeking a larger professorial platform. This only widens the educational gender gap.
Finding the Value in Evaluations
Returning for a moment to Senator Chelgren’s proposal, we can immediately see the danger posed by student evaluations, unreliable and subjective as they are. Placing too much stock in their outcomes, as the Senator’s proposal would, and as already happens to those on the lower rungs of the academic ladder, is neither fair nor effective.
Before considering student evaluations an effective way to judge professors, we must judge the instrument itself and the way it is administered. First and foremost, suggests Schuman, student anonymity serves little purpose. As she points out, professors receive their evaluations only after grades have been submitted, which all but removes the opportunity for petty or unethical retaliation against a critical student.
This, therefore, raises the question: “What are we protecting students from?”
Today, one could argue that student evaluations merely extend the protective cover of anonymity that millennials enjoy every day on the internet. Only, instead of making snarky comments on a Justin Bieber fansite or explaining why the fans of an opposing sports team are stupid, students can use this cover to impact the job stability, livelihood, and psyche of their teachers.
Make students accountable for their feedback, and we might be amazed at how quickly the biases and personal attacks slink back into the woodwork. Some students may withhold their harshest criticism, particularly those who anticipate spending a future semester with the same professor. But those students who have the most to offer, whether pointed criticism or directed praise, will be happy to attach their names to an honest consumer evaluation.
An article from NPR also suggests that student evaluations should be taken with a grain of salt, and that said grain should be provided by way of peer or mentor evaluation. By combining a fair, objective, and apolitical method of peer evaluation with the findings of student evaluations, we may get a more accurate understanding of the relationship between an instructor’s performance and the student experience.
This approach should also open the door to a more fluid and collegial relationship between younger professors and their more experienced counterparts. A multi-layered approach to evaluation can serve as a path to dialogue not just between student and educator, but also between mentor and mentee. Used this way, evaluation becomes more than something punitive; it can and should be a teaching instrument.
If the intention is to hold professors truly accountable for what they bring to the academic table, it seems only fair that the method by which we do so is itself academically sound and accountable.
Kamenetz, A. (Sept. 26, 2014) “Student Course Evaluations Get An ‘F.’” NPR. Online at http://www.npr.org/blogs/ed/2014/09/26/345515451/student-course-evaluations-get-an-f
Schuman, R. (April 24, 2014) “Needs Improvement.” Slate. Online at http://www.slate.com/articles/life/education/2014/04/student_evaluations_of_college_professors_are_biased_and_worthless.html
Voeten, E. (Oct. 2, 2013) “Student Evaluations of Teaching Are Biased. Does It Matter?” Washington Post. Online at http://www.washingtonpost.com/blogs/monkey-cage/wp/2013/10/02/student-evaluations-of-teaching-are-probably-biased-does-it-matter/
Will, M. (April 23, 2015) “Iowa Legislator Wants to Give Students the Chance to Fire Underwhelming Faculty.” The Chronicle of Higher Education. Online at http://chronicle.com/article/Iowa-Legislator-Wants-to-Give/229589/