Poma: University course evaluations are broken

Zheng Zheng, associate professor of physics and astronomy, covers key concepts for his universe class | Chronicle archives.

By Sasha Poma, Assistant Opinion Editor

With spring semester coming to a close, every student at the University of Utah will be asked to fill out course evaluations for their classes. But what do these evaluations actually do? The U claims that the surveys factor into decisions like faculty promotions and may help students choose their instructors. Are they effective, and are we providing good feedback as students? The university itself cautions that emotionally charged reactions without firm reasoning make for ineffective feedback, which alone signals that unhelpful reviews are common. So the answer, at least for now, is probably not. That's scary, especially when our tuition, time and education are on the line. We need to create a better system for evaluating professors and find out whether student reviews have any real validity.

Our school's current protocol for instructor evaluations looks great on paper. The university uses an online evaluation system that asks course-specific and instructor-specific questions about the quality of a class. Departments also have the option to write their own questions, which helps tailor the feedback to each field. From there, the feedback enters the professor's permanent file, which can be examined when promotions, tenure and course assignments arise. Numeric data, such as satisfaction rates, are entered into a database for every course, professor and term, and that feedback can influence other students' course and professor decisions.

Student evaluations affect our entire education because they can impact which classes are offered, which professors teach them and more. There's a reason our professors are instructed to encourage us to fill out these reviews. And yes, our opinions should be taken into consideration, so long as we provide constructive feedback. We all have experiences with good and bad professors, and sometimes it's difficult to keep our attitudes toward certain classes tame. But we need to use this privilege responsibly so that our voices are taken seriously. If we can promote good teaching habits and politely deter questionable ones, we can influence our educational environment in meaningful ways.

The problem with this system is how students approach these evaluations. Student feedback can be inaccurate or skewed by various factors. For instance, studies show widespread gender bias in student evaluations: male professors are typically rated on their intellect, while students scrutinize female professors' appearances and strictness, meaning women faculty members may not receive the sincere feedback every instructor needs.

Students at the U can’t see each other’s written comments on these evaluations, but the biased remarks on Rate My Professor and similar websites can mislead students as they make decisions about their schedules. Good professors might receive poor reviews because of their stringent deadlines. Bad professors may be talked up because they hardly assign homework. These factors stand to hurt students in their educational decision making, and – when students offer this kind of feedback on formal evaluations – it can hurt professors as well.

Sampling bias can also skew professors' reviews. Students who loved a professor may feel compelled to recommend the class to their peers, while students with whom an instructor's personality didn't quite click might rush to warn others away from that class. The remaining majority won't contribute because they just took the class for credit, got a decent grade and moved on with their lives; their emotions didn't sway them to leave feedback, for better or for worse. With all of these issues in play, there's no real way to know whether students are a truly reliable source for instructor evaluations.

Both students and faculty are put in a tough spot, especially as professors grow more skeptical about the validity of students' opinions. So how can we improve the current system? The National Education Association interviewed university faculty members, who offered a number of suggestions.

Collective feedback from other faculty members, in addition to students, provides different perspectives and can be more effective for improving a course. For purposes such as tenure and pay, the university already evaluates professors by having a colleague sit in on one of their classes. Perhaps that same process could be implemented for every instructor every semester, not just when a promotion is due.

Questions for feedback could also be worded more specifically. For instance, instead of asking “Did the professor cancel class often?” an evaluation might ask “Did the professor cancel class enough to hinder the fulfillment of course requirements?” Specific questions reduce the likelihood of exaggerated or one-sided answers and help feedback reflect what’s really happening in the classroom – which would create a better learning environment for professors and students.

We need to handle course evaluations differently to provide more constructive feedback to our professors. The university should update its student evaluation system and encourage more students to be mindful of what they have to say and how influential their feedback can be. And we, as students, should keep that potential impact in mind as we fill out our course evaluations this spring.


[email protected]

@spoma301