In Weapons of Math Destruction (2016), blogger, professor, algorithmic goddess, and self-proclaimed “math babe” Cathy O’Neil takes readers on an eye-opening journey through life in the Digital Age. She documents our lives in a sea of Big Data, where so many corners of society are controlled by algorithms: everything from our credit scores, to our justice system, to the cost of our insurance, and even to our participation in our own democracies is increasingly micromanaged by mathematical formulae that are all but impenetrable to us mere mortals just down here swimming.
O’Neil is an unflinching advocate for the power of mathematics, but her unerring moral bent leads her to expose the dangerous rationales locked away in some of the automated black boxes that help to govern our lives: the algorithms she terms “Weapons of Math Destruction” (WMDs). To fit O’Neil’s taxonomy — to qualify as an algorithmic evil — a mathematical model must satisfy three criteria: it must be opaque, it must have achieved scale, and it must cause damage to the people or processes it impacts. While the book is fascinating throughout, and a must-read for anyone whose incredulity at modern data madness is growing (see: Facebook’s role in the demise of what we somewhat optimistically term Western Democracy), O’Neil raises some important questions for Higher Education — a sector awash with data, and with a strong ethical and societal imperative — through her exploration of the U.S. News College and University Rankings.
The U.S. News College and University Rankings system exemplifies maleficent modelling: a series of hunches formalised into a taxonomy that led all institutions to start shooting for improvement on the same squishy variables. It led schools to send false data to reporters, and to spend ungodly sums on improving the specific metrics that the U.S. News journalists and statisticians deigned to consider.
In O’Neil’s words,
If you look at this development from the perspective of a University President, it’s actually quite sad. Most of these people no doubt cherished their own college experience — that’s part of what motivated them to climb the academic ladder. Yet here they were at the summit of their careers dedicating enormous energy toward boosting performance in fifteen areas defined by a group of journalists at a second-tier newsmagazine. They were almost like students again, angling for good grades from a taskmaster. In fact, they were trapped by a rigid model, a WMD.
In its 30+ year history, the U.S. News model has gathered a whole host of detractors. From an exposé in the San Francisco Chronicle in 2001 to O’Neil’s book, academics and the public alike have become increasingly skeptical. But the academy should be skeptical, too, about all the kinds of rankings we’re leaning on — particularly those that come from outside the industry, or that seek to turn rudimentary proxies into a real-world analysis of the effectiveness of higher education. The web and our publications are littered with well-regarded rankings considered far less egregious than the U.S. News model.
When you cast the net wider, and with a skeptical eye, the fickleness of rankings comes into sharp focus. As an Oxford alum, I feel an odd thrill of vindication seeing Oxford sitting pretty once again this year atop the Times Higher Education World Rankings (especially, perhaps, as we’re standing on the shoulders of Cambridge). Yet while Oxford is undoubtedly home to some of the smartest, most globally influential people I know, its climb to the top of the THE world rankings in 2016 was due, in part, to the unfortunate murder of a lion on a Zimbabwean nature preserve. When donations flooded in to support the researchers who had cared for Cecil — some £750k (or $1.1m) — Oxford’s already substantial research income (a critical data point in many rankings) saw an unexpected boost, and the institution over-leapt some of its American rivals. If something so trivial, so tangential, and so temporary can help land an institution on top of the world, is the ranking really telling us anything at all? Herein lies a more vital, a more fundamental question: what are such rankings for?
As the rankings spread into the institutions themselves — into their own self-regulation processes, their hiring and enrolment efforts, their spending habits — it behoves academics, institutional leaders, policy makers, and other educational stakeholders to remain cognisant of what we’re trying to achieve. Educational excellence, as the U.K. has been not-so-quietly demonstrating, is of course not a readily reducible or quantifiable goal.
In my experience tutoring and teaching, giving students grades — numbers at the top of their essays — does little to foster their ambition or intellectual creativity. Instead it leads to questions about improving scores and competing with classmates, not about furthering individual understanding. (Never mind that grading is also completely arbitrary: give a student at Oxford a 75%, and they might call their parents to celebrate — unless, of course, it’s a visiting American student, who might be on the phone home discussing how to proceed with their lawyers to have that changed to a more respectable and expected 95%.) While I’m all for assessment and evaluative thinking, quantifying the qualitative seems more often than not to fuel reductiveness. Moreover, as O’Neil demonstrates persuasively, algorithms merely codify the status quo.
With universities trapped in a cycle of bending and bowing to journalistic whim, and at the mercy of arbitrary events far outside their control, what can be done to break the homogeneity imposed by formalised quantifications of their core missions in teaching and research? If we are ever to truly revolutionise teaching and learning at the postsecondary level, it is a question the academy needs to confront.
That’s something I’m going to think about tomorrow. [Update: it’s here.]