Open Data for Higher Education: The Road to Democratizing University Metrics


When institutions are gaming national and global rankings with rampant internal citation, indulging in “manipulation around the edges” of experience, or spending huge amounts on crazy golf courses to increase applications by “pander[ing] to the fantasies of 18 year olds”, does it really benefit the students whose applications the institution is seeking? Or the faculty it hopes to attract? How free (and how creative) is research when it’s directed towards institutional citation? How much does the steady increase in tuition fees to support the rankings arms race benefit the students who bear the brunt of the costs? The answer, which should be quite obvious, is very little.

Like every good problem, this one starts with a “why”. What are rankings trying to offer, and how might we go about doing that a little better in this, the “Age of Information”?

Why Rank?

There’s a simple answer: consumerism. Higher education, far from offering students ‘a clear, conscious view of their own opinions and judgements, a truth in developing them, an eloquence in expressing them, and a force in urging them’ (the wish of John Henry Newman), is now a commodity in the labour market: the lowest common denominator for entering many professions. As the demand for graduates increased, so did the premium on the degree itself, and with it the price of participation.

When an 18-year-old is essentially taking out a mini-mortgage to pay for their education, the need to make the right decision weighs all the heavier. Choosing well and aiming high is the mark of a great investment.

It’s not that data – or its friendlier relative that I like to term “information” – isn’t valuable in this context: it’s more valuable than ever. But if higher education is going to play the consumer game (and whether it should is another question entirely), the data provided to students needs to be transparent, and reliable in answering a range of questions driven by individual needs.

A forced-order ranking is by no means awful, but it does homogenize the higher education landscape, forcing colleges and universities to compete on the same metrics (and to compete using the same tactics). In reality, institutions should be focusing on their key differentiators – the things that will actually appeal to students. And that’s rarely a like-for-like comparison.

“Compare and Contrast…”: Or, A Humanities Approach to College Rankings

In literary studies there is, more or less, a Canon. And there are lists and lists and lists of the “100 Best Novels” – no two completely alike, of course, and most provoking indignant disbelief from critics. But we don’t simply tell our students to read those works — or those authors — to understand their machinations and grammatical habits, their quirks and idiosyncrasies. And we certainly don’t try to formalize and model something so qualitative with quantitative measures. Well, very few of us do.

But in the study of literature, one of the first things students are taught is to “compare and contrast”. We bring our own questions — our own preferences and pet-peeves — to our explorations, and determine not only what’s similar and comparable between greats, but also what’s different and differentiating. It’s this key second aspect that’s missing from the presentations of rankings – and one that would better serve institutions and their stakeholders when it comes to analyzing data.
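To make that concrete, here is a minimal sketch (in Python, with invented institutions, metric names, and figures) of what a compare-and-contrast reading of institutional data might look like: instead of collapsing everything into one ordinal rank, it separates the metrics on which two institutions are broadly comparable from the ones that genuinely differentiate them.

```python
# A minimal compare-and-contrast sketch: hypothetical metrics, not real data.
# The institutions, metric names, and numbers below are illustrative only.

METRICS = {
    "student_staff_ratio":     {"College A": 11.0, "College B": 18.5},
    "median_class_size":       {"College A": 14,   "College B": 35},
    "research_income_gbp_m":   {"College A": 120,  "College B": 115},
    "graduate_employment_pct": {"College A": 88,   "College B": 86},
}

SIMILAR_THRESHOLD = 0.10  # treat metrics within 10% of each other as "comparable"

def compare(metrics, a="College A", b="College B"):
    """Split metrics into the comparable and the differentiating."""
    similar, different = [], []
    for name, values in metrics.items():
        va, vb = values[a], values[b]
        denom = max(abs(va), abs(vb)) or 1.0
        gap = abs(va - vb) / denom
        (similar if gap <= SIMILAR_THRESHOLD else different).append((name, va, vb))
    return similar, different

if __name__ == "__main__":
    similar, different = compare(METRICS)
    print("Broadly comparable:")
    for name, va, vb in similar:
        print(f"  {name}: {va} vs {vb}")
    print("Genuinely differentiating:")
    for name, va, vb in different:
        print(f"  {name}: {va} vs {vb}")
```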

But that kind of analysis – qualitative consideration over quantitative forced-order ranking – demands a much different approach to the underlying data that’s driving decision making.

The Road Ahead: Open Education Data and Increased Data Fluency

But the data isn’t always there for the diving. Sure, it’s compiled in bits and pieces by various studies (with a helpful “Best Of” available here). Institutions taking part in certain ranking systems and professional services can take advantage of some individual benchmarking. Would-be analysts have access to growing portions of data: in the U.S., the Obama Administration’s Open Data initiative made headway in affording students more personalized analysis options; and South Africa is following suit, with the Centre for Higher Education Trust’s Open Data publishing 26 key indicator metrics for the country’s public institutions, along with in-depth guides to their use. These are great beginnings for a data commons, but they are by no means as textured or as far-reaching as we might hope.

And the journey doesn’t stop there. Not only are we missing a data commons, but access is far from the final hurdle; meaningful analysis demands adept statisticians and mathematical modellers moving through the available data to provide insight into its trends, both at large and with regard to the situation and questions at hand. Whether for students looking for the right-fit institution, administrators and faculty members looking to compare spending, or deans looking to promote a more diverse approach to hiring, way-stations are needed on the journey to digital empowerment.
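As a sketch of the kind of way-station tooling that might help, the snippet below assumes a hypothetical open CSV of yearly indicators (the file name and column names are placeholders, not any real dataset’s schema) and computes a crude year-on-year trend for whichever indicator a given stakeholder cares about.

```python
# Hypothetical open-data file and columns; a sketch of stakeholder-driven trend
# analysis, not a client for any real dataset.
import csv
from collections import defaultdict

def indicator_trends(path, indicator, institution_col="institution",
                     year_col="year", value_col="value", indicator_col="indicator"):
    """Return {institution: [(year, value), ...]} for one indicator, sorted by year."""
    series = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row[indicator_col] != indicator:
                continue
            series[row[institution_col]].append((int(row[year_col]), float(row[value_col])))
    return {inst: sorted(points) for inst, points in series.items()}

def simple_slope(points):
    """Average year-on-year change: a crude trend, not a statistical model."""
    if len(points) < 2 or points[-1][0] == points[0][0]:
        return 0.0
    (y0, v0), (yn, vn) = points[0], points[-1]
    return (vn - v0) / (yn - y0)

if __name__ == "__main__":
    # "indicators.csv" is a stand-in for whichever open dataset is actually available.
    trends = indicator_trends("indicators.csv", indicator="graduation_rate")
    for inst, points in sorted(trends.items(), key=lambda kv: simple_slope(kv[1]), reverse=True):
        print(f"{inst}: {simple_slope(points):+.2f} per year over {len(points)} data points")
```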

Complicated privacy laws, siloed data, and institutional lethargy all contribute to stalling the start. But, luckily for institutions, Universities UK has gone some of the way towards illuminating the promise and perils of Open Data for higher education with An Introductory Guide (2015). And for the digital natives filling the pipeline to University, there is a wealth of data manipulation courses seeping into pre-college curricula and enrichment activities.

With Open Higher Education Data, all of education’s many stakeholders will increasingly be able to answer a range of different questions and access reams of relevant data on a daily basis. But for this to come to pass, Universities need to stop thinking about data-driven rankings, and instead pool their data not in competition, but for mutual edification — thinking along the way about how best to bring their faculty, staff, and students along for the ride.

*Image: Mclek/Shutterstock.


How Rankings are Ruining Higher Education

In Weapons of Math Destruction (2016), blogger, professor, algorithmic goddess, and self-proclaimed “math babe” Cathy O’Neil draws readers through an eye-opening journey of life in the Digital Age. She documents our life in a sea of Big Data, where so many corners of society are controlled by algorithms: everything from our credit score, to our justice system, to the cost of our insurance, and even to our participation in our own democracies is increasingly micromanaged by mathematical formulae that are all but impenetrable to us mere mortals just down here swimming.

O’Neil is an unflinching advocate for the power of mathematics, but her unerring moral bent leads her to expose the dangerous rationale trapped away in some of those automated black boxes that help to govern our lives: the algorithms she terms “Weapons of Math Destruction” (WMDs). To fit O’Neil’s taxonomy — to be an algorithmic evil — mathematical models must satisfy three criteria: they must be opaque, they must have achieved scale, and they must be causing damage to the people or processes they impact. While the book is fascinating throughout, and a must-read for any whose incredulity at modern data madness is growing (see: Facebook’s role in the demise of what we somewhat optimistically term Western Democracy), O’Neil raises some important questions for Higher Education — a sector awash with data, with a strong ethical and societal imperative — through her exploration of the U.S. News College and University Rankings.

The U.S. News College and University Rankings system exemplifies maleficent modelling: a series of hunches formalised into a taxonomy that led institutions to start shooting for improvement on the same squishy variables. It led schools to send false data to the reporters, and to spend ungodly sums on improving the specific metrics that the U.S. News journalists and statisticians deigned to consider.

In O’Neil’s words,

If you look at this development from the perspective of a University President, it’s actually quite sad. Most of these people no doubt cherished their own college experience — that’s part of what motivated them to climb the academic ladder. Yet here they were at the summit of their careers dedicating enormous energy toward boosting performance in fifteen areas defined by a group of journalists at a second-tier newsmagazine. They were almost like students again, angling for good grades from a taskmaster. In fact, they were trapped by a rigid model, a WMD.

In its 30+ year history, the U.S. News model has gathered a whole host of detractors. From an exposé in the San Francisco Chronicle in 2001 to O’Neil’s book, academics and the public alike have become increasingly skeptical. But the academy should be skeptical, too, about all the rankings we’re leaning on – particularly those that come from outside the industry, or that seek to turn rudimentary proxies into a real-world analysis of the effectiveness of higher education. The web and our publications are littered with well-regarded rankings considered far less egregious than the U.S. News model.

When you cast the net wider, and with a skeptical eye, the fickleness of rankings comes into sharp focus. As an Oxford alum, I feel an odd thrill of vindication seeing Oxford sitting pretty once again this year atop the Times Higher Education World Rankings (especially, perhaps, as we’re standing on the shoulders of Cambridge). Yet while Oxford is undoubtedly home to some of the smartest, most globally influential people I know, its climb to the top of the THE world rankings in 2016 was due, in part, to the unfortunate killing of a lion on a Zimbabwean nature preserve. When donations flooded in to support the researchers who cared for Cecil – some £750k (or $1.1m) – Oxford’s already substantial research income (a critical data point in many rankings) saw an unexpected boost, and the institution leapfrogged some of its American rivals. If something so trivial, so tangential, and so temporary can help land an institution on top of the world, is the ranking really telling us anything at all? Herein lies a more vital, a more fundamental question: what are such rankings for?
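The arithmetic behind that leapfrogging is easy to sketch. In the toy model below, where the weights, metric names, and figures are invented and bear no relation to THE’s actual methodology, a ranking score is simply a weighted sum of normalised metrics; a modest one-off bump to a single input, such as research income, is enough to swap the order of two otherwise similar institutions.

```python
# Toy composite-ranking model. Weights, metrics, and figures are invented for
# illustration and do not reflect the Times Higher Education methodology.

WEIGHTS = {"teaching": 0.30, "research_income": 0.30, "citations": 0.30, "international": 0.10}

def score(metrics, weights=WEIGHTS):
    """Weighted sum of metrics already normalised to a 0-100 scale."""
    return sum(weights[k] * metrics[k] for k in weights)

university_a = {"teaching": 90.0, "research_income": 88.0, "citations": 92.0, "international": 85.0}
university_b = {"teaching": 89.0, "research_income": 90.5, "citations": 92.0, "international": 86.0}

print(f"A: {score(university_a):.2f}  B: {score(university_b):.2f}")

# A one-off windfall nudges A's research-income metric up a few points...
university_a["research_income"] += 3.0
print(f"After the windfall -> A: {score(university_a):.2f}  B: {score(university_b):.2f}")
# ...and the two institutions swap places, although nothing about their teaching
# or research quality has actually changed.
```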


As the rankings spread into the institutions themselves—into their own self-regulation processes, their hiring and enrolment efforts, their spending habits—it behoves academics, institutional leaders, policy makers, and other educational stakeholders to remain cognisant of what we’re trying to achieve. Educational excellence, of course, as the U.K. has been not-so-quietly demonstrating, is not a readily reducible or quantifiable goal.

In my experience tutoring and teaching, giving students grades—numbers on the tops of their essays—does little to foster their ambition and intellectual creativity. Instead it leads to questions about improving scores, and competing with classmates, not about furthering individual understanding. (Never mind that it’s also completely arbitrary: give a student at Oxford a 75%, and they might call their parents to celebrate—unless, of course, it’s a visiting American student, who might be on the phone home to talk about how to proceed with their lawyers to have that changed to a more respectable and expected 95%). While I’m all for assessment and evaluative thinking, quantifying the qualitative seems more often than not to fuel reductiveness. Moreover, as O’Neil demonstrates persuasively, algorithms merely codify the status quo.

With Universities trapped in a cycle of bending and bowing to journalistic whim, and at the mercy of arbitrary events far outside their control, what can be done to break the homogeneity imposed by formalised quantifications of their core missions in teaching and research? It’s a question the academy needs to confront if it truly wants to revolutionize teaching and learning at the postsecondary level.

That’s something I’m going to think about tomorrow. [Update: it’s here.]