Study: Students show bias against female professors in teaching evaluations

Authors say findings raise questions about value of evaluations

Students tend to rate male professors more highly than female professors in teaching evaluations, according to a study from researchers at the University of California-Berkeley and Sciences Po in Paris, Anya Kamenetz reports for NPR.

The study involved running a series of statistical tests on academic and course evaluation data from two separate groups of French and U.S. university students. Researchers randomly assigned French students to either male or female section leaders in their required courses; then they compared the students' evaluations with their scores on a standardized final exam.

Male students rated their male professors more highly than female professors across all evaluation criteria. However, researchers found no correlation between students rating their professors more highly and actually performing better on the final exam.

Meanwhile, U.S. students took an online class with either a female or male professor. Half the time, male professors used female names, and vice versa. In this case, female students rated professors they believed to be male more highly than those they believed to be female across all evaluation criteria.

Findings conflict with earlier studies

According to Michael Grant, vice provost and associate vice chancellor for undergraduate education at the University of Colorado-Boulder, a study from 2000 and another from 2002 found that student evaluations are not biased against female instructors.

"There are multiple, well-designed, thoughtfully conducted studies that clearly contradict this very weakly designed study," he says.

But economist and lead author Anne Boring of Sciences Po and co-author Philip Stark, associate dean of the division of mathematical and physical sciences at the University of California-Berkeley, say their findings lead them to question the value of Student Evaluations of Teaching (SET) entirely.

"Trying to adjust for the bias to make SET 'fair' is hopeless (even if they measured effectiveness, and there's lots of evidence that they don't)," Stark writes.

Boring acknowledges that "SETs can contain some information that can be valuable" but contends the bias overshadows any objective measure of educators' effectiveness (Kamenetz, NPR, 1/25).

