University rankings have become a staple of many students’ academic paths, chiefly helping high schoolers choose a university. There is a wide variety of rankings, and some are indeed useful, but the most accessible and prominent ones suffer from fatal flaws that should force them into irrelevance. Yet the opposite seems to be happening, as many universities seek to promote themselves by climbing higher on the lists. While that behavior is problematic in itself, I will instead focus on some of the shared aspects of these rankings that render them myopic, stale, and insignificant.
Consider the three largest global ranking publishers: Quacquarelli Symonds (QS) World University Rankings, Times Higher Education (THE) World University Rankings, and the Academic Ranking of World Universities (ARWU). Each has flaws that have been criticized in the past, yet they continue to be the prime source international students (I am guilty of this as well) use to select universities when studying abroad. Much of the methodology used to determine the order, however, depends on the size and wealth of the university. Criteria such as international outlook, research output, number of professors, and number of students are largely tied to a university's size and financial resources. In fact, none of the three ranking systems includes universities that fall below a certain size threshold. This means that the characteristics most relevant and critical to a student, such as teaching quality and student life, are not given the same weight.
This naturally leads to the problem of time. Most rankings are published annually, which amounts to an unofficial performance review grading the output a university produces over just one year. Not only does this put great pressure on smaller universities to generate quantity over quality, but, more importantly, it also means the list remains relatively unchanged for decades. Coupled with the previous issue of the emphasis on size, the list devolves into a stale and myopic set of fancy brand names while ignoring the small gems. Perhaps this is why many of the rankings have branched out into Asian, Latin American, Under 50, and similar lists, so that readers can learn there are universities in this world besides Harvard and Cambridge. Granted, they are interesting, but these new rankings are largely pointless, as they exhibit the exact same issues as the original world rankings.
There are many more reasons why these systems fail to create meaningful rankings, and there are solutions that could lead to improvements, but the crux of the matter is that the methodologies behind these lists have not undergone any significant changes despite the criticism. So why is this the case? In essence, these rankings are a by-product of academic politics. All of the criteria are determined, in part, via surveys of reputable figures in academia: professors, deans, provosts, and so on. Not only does this create an obvious smorgasbord of conflicts of interest, but it also leaves room for high-profile academics to exert direct influence over the creation and management of the rankings. Just imagine the backlash and shunning a list would receive if it chose to exclude Harvard from the top ten universities in the world. In a sense, the existing methodologies have been purposely designed to guarantee several prestigious, historic, and well-established universities a place at the top.
I understand that these university rankings are not forced upon anyone, and I realize that people are simply exercising their free will in choosing to seek guidance. However, there comes a point where the pervasiveness and influence of an erroneous system become significant enough to warrant an article such as this. If society, the media, university presidents, and even leaders of nations form their visions and policies around the results of these fundamentally flawed university rankings, I will humbly object.