Breaking Bad Habits

Categories: Bargaining, Bargaining Unit, Bargaining Updates 2022, Equity, student evaluations of teaching

They are clearly biased against racialized and other historically marginalized groups; they are statistically unreliable; and they do not and cannot serve their intended purpose. Yes, we are talking (again) about student surveys, the cheap candy of teaching metrics.

As we said in 2019, it is hard to imagine an instrument more ill-suited to its supposed purpose than student opinion surveys. Leaving aside the statistical limitations of what are often small, non-randomized samples, these surveys cannot provide accurate information about the skill of the instructor or the effectiveness of their teaching. They tell us instead, as one would expect from a survey, about the students themselves.

In May 2020 both the UBCV and UBCO Senates acknowledged this truth. They ordered new clothes for the “Student Evaluation of Teaching” surveys; now the Senates refer to these polls as “Student Experience of Instruction” surveys. As the new https://seoi.ubc.ca/ website declares, “End-of-course student surveys provide the university community with information about student experience of their instruction at UBC.” We couldn’t have put it better ourselves: these surveys tell us about student experience. What they manifestly do not and cannot assess is “teaching effectiveness,” the criterion on which our teaching is supposed to be assessed for all summative employment purposes (merit/PSA; reappointment; tenure; promotion). The UBC Joint Senate Task Force is clear on this in its final report, approved by both Senates: “questions for students should focus on their experiences…rather than positioning the request as a formal and global evaluation of the teacher” (p. 4); “students are not in a position to be able to make sweeping, all-inclusive judgments about the effectiveness of instruction” (p. 5). Yes. This is why we are telling UBC again in this bargaining round to stop using such surveys as summative measures of teaching effectiveness (see Proposal 17 in our Day One list).

We are also proposing to remove these surveys as summative tools because countless studies have shown them to be shaped by biases on prohibited grounds. Even the UBC Senate report acknowledges that “there are serious concerns around the potential impact of various biases, particularly gender and ethnicity, as well as instrument design, reporting metrics, interpretation of data, consideration of context, and lack of integration with other forms of data on the effectiveness of teaching” (p. 3), and that “studies have presented evidence of bias on the basis of instructor ethnicity” (p. 6).

UBC faculty members have lived this reality. The recently released Report of the President’s Task Force on Anti-Racism and Inclusive Excellence repeatedly attests to the biases in these data. The Faculty Committee on the Task Force notes that “in teaching, IBPOC faculty members often face inequity due to implicit bias. They are often…unfairly evaluated by students” (p. 246). The Committee ultimately recommends that student surveys be eliminated “as discriminatory” (p. 247). We concur.

Our UBC colleagues are not the only ones observing this problem in their own and their colleagues’ classrooms: expert studies, meta-studies, and reports on these biases abound. Two such reports formed the basis for the key Canadian arbitration award on this issue, in the Ryerson University case.

The arbitrator in the Ryerson case ruled that, given this overwhelming evidence of both the bias in and the non-probative value of student opinion surveys, those surveys were not to be used to measure teaching effectiveness for promotion or tenure.

The academic community seems to have gotten the memo. Beyond Ryerson, Algonquin, Rutgers, the University of Alberta, the University of Oregon, SFU, and the University of Southern California are already moving in this direction. As Michael Quick, Provost of the University of Southern California, put it: “I’m done. I can’t continue to allow a substantial portion of the faculty to be subject to this kind of bias” (Inside Higher Ed, May 22, 2018). The American Sociological Association and 17 other major associations agree, citing in a 2019 report the avalanche of research demonstrating that student surveys are statistically untrustworthy, inappropriately applied, and biased on multiple prohibited grounds.

It is true that UBC faculty members do sometimes glean from students’ reported experiences information that helps us improve our course designs and methods; we are thus willing to see student surveys used for these formative purposes. We think that this is what students want most, and it is what we want as well: to make our teaching better. But to continue to pretend that these surveys tell us objectively and fairly whether a colleague is an effective teacher is to continue in a lie. If we persist in a position in spite of the overwhelming evidence, we can hardly call ourselves scholars (politicians, perhaps?). We may have become accustomed to using these data to answer questions they cannot properly or even safely address, but that is no reason to pretend that cheap candy is nutritious food.