Making Sense of College Rankings: Significance and Weaknesses

By David A. Tomar

For colleges and universities, reputation is everything. In the fiercely competitive and increasingly global higher education sector, perception is nearly as important as performance. With more than 183 million college students hitting the books across the globe, an enormous audience awaits each new college ranking publication.

These publications—which academics take special pleasure in deriding and universities take particular pains to satisfy—are a major driver of endowment decisions, enrollment figures, and employment opportunities. Rankings play a direct role in how aspiring college students select schools, how researchers build reputations, and how colleges position themselves in the marketplace.

This gives the leading rankers in the game considerable power in shaping student decisions, university priorities, and even national policy orientation. To wit:

Schools landing in U.S. News & World Report’s Top 25 will experience a 6-10% increase in the volume of incoming applications.
Schools making Princeton Review’s Top 20 Best Overall Academic Experience will see a 3.2% surge in applications.
And colleges that move just one percentage point upward in the U.S. News rankings will see a corresponding 1% increase in applications received.
These figures incentivize colleges to manipulate metrics such as selectivity, class rank, and standardized testing outcomes in order to game the rankings.

Such manipulation can also have the effect of making schools less accessible to low-income applicants. According to Salon, rankers like U.S. News “glamorize…selectivity, which creates a culture of exclusion that shuns low-income students the hardest.”

But such concerns are overshadowed by PR interests. In a 2006 survey of global university leaders, nearly 50% of respondents said they used their institutional rank for publicity purposes.

Moreover, a reported 68% of respondents used these rankings as a management tool to bring about strategic organizational, managerial or academic change, including embedding rankings in “target agreements” with faculty and other personnel.

This orientation has contributed directly to the academic “arms race” whereby institutions and governments are locked in battle over “the quest for ever increasing resources.” This has had the effect of widening the gap between elite institutions and mass higher education, contributing to deeper socio-academic stratification.

These economic implications explain why rankings even have the power to influence national policy orientation. According to a New York Times article on the Shanghai Ranking, “Indonesia, Malaysia, South Korea and Taiwan announced programs to lift at least one university into the top 100; Nigeria pledged to put two universities in the top 200.”

These quick facts merely scratch the surface. Clearly, though, there’s a lot riding on college rankings.

But can we trust them? Do any of these rankings truly measure what they claim to? Can you judge your likely educational experience by using these rankings?

Well, that depends. No one ranking system is perfect. Each is vulnerable to critique and each employs a methodology that can be readily deconstructed and parsed for flaws. But if you understand these methodologies, if you recognize these flaws, and if you know how to distill the information that these rankings do have to offer, you may be able to learn quite a bit.

Whatever objections persist—and they are numerous—annual college ranking releases are stitched into the fabric of the university hierarchy. This means you are better off learning how to navigate them than ignoring them.

Each year, the major players release their annual college ranking lists. The occasion of each release is marked by anticipation and media saturation. In the aftermath of each release, the winners celebrate their accomplishment, the losers issue apologies or rationalizations, and critics from all corners of academia scrutinize the rankers themselves.

In the face of this critique—and alongside the effort that colleges expend to perform well in these annual rankings—an industry of rankers battles it out to present what each sees as the most objective, meaningful, and engaging way to compare colleges.

The ranking game is dominated by a few major players, as well as a bevy of new entrants and innovators. In the coming months, we’ll take a closer look at each of these players, their ranking methodologies, and some of the leading points of critique facing each of them. The goal is to create a growing set of resources on how to read, interpret and get the most out of each of these ranking systems both in light of their strengths and in spite of their flaws.

But first, let’s get the lay of the land.

What follows is a basic primer on college rankings, including a brief history of ranking, a glance at the major players in the game today, a look at the criteria driving the leading rankings, and a rundown of the leading points of criticism against them.

Significance of Rankings

Why is this market so competitive and expansive? The short answer is that students, parents, colleges and employers all take college rankings seriously and base important decisions on them.

While we can debate the accuracy, reliability, or trustworthiness of rankings, their popularity is beyond dispute. To wit: on the day that U.S. News & World Report issued its Best Colleges rankings for 2014, its website attracted 2.6 million unique visitors and 18.9 million page views.

And people aren’t just looking. They’re making active, life-altering decisions based on this information. According to a 2014 report from the American Educational Research Association, students do make critical application decisions based on these rankings. For instance, schools landing in U.S. News & World Report’s Top 25 will experience a 6-10% increase in the volume of incoming applications. Schools making Princeton Review’s Top 20 Best Overall Academic Experience will see a 3.2% surge in applications. And colleges that move just one percentage point upward in the U.S. News rankings will see a corresponding 1% increase in applications received.

In spite of the considerable academic criticism aimed at them, college rankings make a lot of noise, and college students hear this noise as they send applications out into the world. Rankings have a direct impact on student decisions regardless of their empirical reliability.

But students aren’t the only ones basing important decisions on ranking indicators. Institutions of higher learning are quite aware of the influence that rankings have on student applications and equally aware of the factors that enter into such rankings. This means that, regardless of how well a given ranking actually measures academic quality or student experience, most universities feel a certain pressure to compete.

An article in The Christian Science Monitor notes that many administrators recognize a connection between rankings and the caliber of student their respective universities attract. The importance of reputation looms large, particularly when it comes to attracting international students who might lack informational resources beyond these rankings. Leading historian of American education and Harvard professor emerita Patricia Albjerg Graham observed that “rankings have become particularly important to college and university administrators, who are anxious for their school to rise in the rankings. They believe that such a rise will bring more and better prepared students.”

Of course, it isn’t merely the pressure to look good that drives universities. It is the equal pressure to avoid looking bad. College rankings hold such sway over public perception that universities that slide in the rankings often find themselves compelled to rationalize such occurrences to their various publics. With the release of this year’s U.S. News rankings, for instance, New York University saw a drop in the standings. In response, Peter Henry, dean of the university’s Stern School of Business, issued a personal email apology to the university’s students, taking responsibility for the failure to submit certain critical data to the rankers. He promised to tighten procedures and avoid such lapses in the future.

Henry’s mea culpa underscores the pressure that colleges are under to at least participate in the ranking game. But this pressure extends even beyond what colleges experience. On the international scale, rankers like Shanghai have had a direct impact not just on the way that global universities make decisions but on the priorities that national governments emphasize in higher education.

According to the New York Times, some nations have even undertaken educational initiatives specifically designed to play to the Shanghai Ranking. For instance, “Indonesia, Malaysia, South Korea and Taiwan announced programs to lift at least one university into the top 100; Nigeria pledged to put two universities in the top 200. The ranking’s biomedical bias makes it particularly influential in the developing world, where science, technology, engineering and math are seen as holding the key to economic prosperity.”

This suggests that rankings carry an influence that permeates higher education at every level, driving student decisions, college priorities, and even nationwide initiatives. Of course, this is merely a glimpse of the role that rankings have come to play in the higher education ecosystem, but we can already get a sense of their enormous power.

The Trouble with Ranking

Now that you understand a bit of what goes into each of these rankings—and how distinct they are from one another—you might have a greater appreciation for some of the challenges inherent in the process.

Each ranking system has strengths, and each has weaknesses. Understanding and recognizing these strengths and weaknesses can truly illuminate the value of each ranking approach. Know what to look for (and know what you are unlikely to find), and you can learn quite a bit about a given college or university.

Ultimately, our critique is aimed not at discrediting the leading rankers but at identifying areas of need in the college ranking space and, more broadly, providing a reference for navigating this space.

Tinkering

Richard Holmes, an industry-leading ranking watchdog, administers a reference site called University Ranking Watch. There, he provides ongoing critique and analysis of the various university ranking systems in circulation today. He warns that for many ranking services, methodologies are in near-constant flux. Whether in response to academic criticism, in the interest of refining existing strategies, or with the intent of generating headlines, ranking services tend to revise their formulas on a fairly regular basis. This, Holmes notes, can sometimes bring about notable and even problematic changes in outcome.

In some cases, these changes are modest on their face but can have significant real-world implications. According to the Atlantic, critics have charged U.S. News with imposing slight adjustments to its methodology every few years merely to give observers something new to talk about. This methodological tinkering can produce significant movement in the standings for individual universities that have otherwise undergone no quantitative or material changes. Such fluctuation can undermine our collective confidence in a ranking service’s findings, particularly when they diverge sharply from prior years.

Recent changes to the Shanghai Ranking offer a stark example of this pattern in action. Shanghai derives 50% of its ranking data from a source called Thomson Reuters’ Highly Cited Researchers. This source underwent what would appear to be a dramatic revision over the last few years. Between 2014 and 2016, Shanghai made the gradual (though not gradual enough) transition to the updated source.

Richard Holmes notes that the rankings for 2013 reflected the old list of Highly Cited Researchers while the 2014 and 2015 rankings actually weighted the old and new lists equally. The goal in doing so was to ease the transition toward use of only the new list. All indications are that this was not the smooth transition they had in mind. In fact, the movement of some universities was so substantial that Shanghai might well have jeopardized much of the credibility it has earned to date.

Patterns throughout Shanghai’s Top 500 illustrate some dramatic ascents and descents. For instance, notes Richard Holmes, Rutgers University dropped from 39 to 96, the University of Wisconsin-Madison fell from 18 to 28, and Virginia Polytechnic Institute plummeted from the 79-102 range to the 301-400 range.

Notably, Chinese schools fared particularly well in the transition. Whereas zero Chinese colleges were ranked in the top 100 in 2015, two broke through in 2016…and in a big way. Tsinghua University moved from the 101-150 range all the way to the 58th spot in just one year. Peking University leapt from the same range up to 71st worldwide.

Overall, 54 Chinese schools made the Top 500 in 2016, which marked the addition of 10 Chinese schools in just one year.

Setting cynicism aside, one could argue that most formula tinkering is done to prevent manipulation and gaming. Likewise, the web has given us ever-expanding access to data, which means our ability to analyze those data is constantly improving. Thus, methodological changes are to be expected.

The Shanghai Ranking can at least be defended on the grounds that these changes will ultimately enhance the accuracy of its data analysis. Whether such dramatic one-year trends suggest bias is up for debate.

What is not up for debate is the empirical and psychological impact of such changes. They force us to question the trustworthiness of any ranking system that can undergo such a dramatic data transformation in just one year. Even putting aside suspicion of bias, there is no statistical continuity between this year’s rankings and last year’s.

These kinds of sudden and dramatic movements can leave administrators scrambling for explanations before the public, their student bodies, their benefactors, and their alumni, even when methodology and data sourcing—not academic changes—are at the root of the shift.

Lack of Scientific Rigor

In spite of their influence, most rankings lack proper experimental rigor or academic authority. For instance, in 1997, U.S. News commissioned the National Opinion Research Center to produce a comprehensive critique of its ranking methodology. The Center came to a problematic conclusion, finding little apparent empirical justification for the weighting assigned to different variables; the importance ascribed to each appeared to have little “defensible empirical or theoretical basis.”

In addition to questionable weighting of criteria, critics argue that the use of reputational metrics is itself inherently unscientific in nature.

QS uses its Global Academic Surveys to accumulate 40% of all the data underlying its World University Rankings. With an additional 10% of its ranking attributable to employer survey responses, a full 50% of the QS rankings are based on opinion. Granted, this is the opinion of a decidedly large sample population of respondents—the largest sample population of its kind according to QS—so the findings are not without validity. They are, however, highly vulnerable to empirical inconsistency.

In spite of the numerous steps that QS takes to control its findings, the unparalleled weight that it places on reputational survey responses makes it among the least empirically sound ranking methods in circulation. In fact, there are some startling anecdotal revelations from the data-gathering field that should give us pause. Indications are that its screening methods are somewhat porous and inherently vulnerable to input from survey respondents who may be biased or downright unqualified.

Bias

Another common critique levied against college rankings is the inherent presence of bias, be it socioeconomic, cultural or even academic in nature.

For instance, the U.S. News & World Report rankings reward those colleges that, among other factors, demonstrate exclusivity. Using selectivity as a measure of a college’s worth, argues Salon, is not only inherently inegalitarian but may also not be the truest indicator of a school’s value.

As Salon phrases it, “these rankings exhibit a callous disregard for college affordability, prioritizing schools that spend more money on flashy amenities rather than scholarships and grants.” The article goes on to argue that “the magazine glamorizes selectivity, which creates a culture of exclusion that shuns low-income students the hardest.”

This ultimately tells us that, regardless of the quality of U.S. News & World Report’s rankings, it is a highly flawed resource if affordability is a prospective student’s top priority. This underscores one of the major takeaways from this primer on college rankings: understanding such biases can help you navigate a densely populated ranking sector.

Consider, for instance, the implications of the Shanghai Ranking, which, even at its very best, is limited in scope. From their inception in the early 2000s, the Shanghai Rankings were criticized for employing a methodology and a set of metrics that inherently favored scientific programs over the humanities and liberal arts.

A quick glance at the criteria that drive the Shanghai Rankings demonstrates a clear and unmistakable focus on the natural and mathematical sciences. Its consideration of Nobel Prizes and Fields Medals is concentrated solely in the sciences and mathematics. The Shanghai Ranking also relies on scholarly citations concentrated entirely in science and mathematics.

Of course, this is not an accidental bias. In the interest of achieving what it views as an empirical set of metrics, Shanghai has dispensed with indicators that it believes cannot be objectively defended.

Shanghai Jiao Tong University recognized the limitations in its own rankings in a 2004 study, conceding that “[m]any well-known institutions specialized in humanities and social sciences are ranked relatively low partly because of the imbalances in the production of articles among various subject fields. The Ranking Group tried hard but was unsuccessful in finding additional indicators that are special for humanities and social sciences.”

This academic bias is hardly insidious, but it reinforces an important theme. The better you understand the criteria, methods, and weighting behind each ranking, the better you’ll understand what these rankings mean and whether this meaning is relevant to you and your needs. You’ll need to decide what information can be gleaned from these rankings and what should be discarded.

Arbitrariness

This advice underscores another challenge that college rankers collectively face. The ranking business on the whole struggles with the problem of arbitrariness. As the leading industry examples demonstrate, there are various and considerably different ways that one can approach the question of college ranking. That each of the major rankings relies on its own wholly unique formula for measuring colleges against one another is itself quite revealing.

Though each of the major rankers has gone to great editorial lengths to justify its particular formula for ranking, the sheer variation of possibilities conspires to leave unanswered the one question that really matters to students: “Which college is right for me?”

According to a study by the Higher Education Research Institute (HERI), “How students use measures of institutional quality (e.g., SAT scores, student-faculty ratio, degree completion rates, etc.) is up for debate. While some research suggests students tend to factor in graduates’ labor market outcomes, other analysis suggests that they rarely consider graduation rates and average student debt in their decision.”

Obviously, getting to the bottom of this debate remains a leading preoccupation for ranking enterprises. But the byproduct of this debate is problematic. Namely, many universities have become fixated not on the question of which school is right for you but on which school is right for this year’s rankings. This is a recipe for misplaced priorities, especially when the metrics and weightings used by rankers diverge from the variables that students consider most important.

Conclusion

Whether you use rankings as a key part of the decision-making process or you are merely curious where your top college choices stack up, you should know exactly what you’re looking at. As this discussion should suggest, it is not enough to simply take your school’s numerical ranking at face value. Dig deeper, consult methodologies, compare rankings and, ultimately, remember that you are seeking the college that suits you best. Global standing aside, you must find the school that rates highest where it matters to you. Rankings are merely the starting point.

Speaking of which, this article is also merely the starting point for an ongoing series in which we feature (and scrutinize) both the major players and new entrants in this highly competitive ranking industry. Check the ever-expanding set of links below for a closer look at the major rankers in the game.

 
