
The Medical School Rankings Mess: A Lose-Lose for Students and Schools


This year's U.S. News & World Report medical school rankings arrived not with a bang but with a thud of discontent. The conspicuous absence of prior chart-toppers like Harvard and Stanford, coupled with the shift to a nebulous tier-based system, has left prospective students and the medical education community questioning the utility of what was once the most authoritative and best-known medical school ranking system. After delays, surveying and resurveying, tracking and backtracking, the list now arrives shrouded in controversy.

It's tempting, in the face of such upheaval, to dismiss the whole rankings endeavor as a meaningless prestige game. After all, why should I care whether a school sits at #1 or #42 on a list that often feels more like a popularity contest than a rigorous assessment of educational quality? Isn't this just a high-stakes game of academic one-upmanship, irrelevant to the real work of training compassionate, skilled physicians? But to write off the rankings entirely is short-sighted. Medical school rankings matter — not because they offer a perfect reflection of reality, but because, in the absence of a viable alternative, they remain an essential resource for students.

I remember being an undergraduate premedical student myself, just four years ago. Each year, and especially the year before I applied, I would pore over the rankings. I knew they weren't the be-all and end-all — given how competitive medical school admissions are, I would have been happy to get into any school — but they provided a data point on questions that mattered to me as a student.

"Which school would best support my research interests? Which school would help me get into an excellent residency program?" For students like me without inherent familial connections or insider knowledge of the reality of each program, the rankings served as a valuable navigational tool. The rankings, for all their flaws, provided a starting point; a map in unfamiliar territory.

While both U.S. News and top institutions have acknowledged flaws in the traditional methodologies, the current situation benefits no one. I believe there is a better path forward — but it will require schools and U.S. News to sit down and work out a compromise, one that remembers who these rankings are really for: the students.

The heart of the issue lies in the conflicting priorities of stakeholders. U.S. News, in its quest for comprehensive, data-driven rankings (and for clicks), has long relied on metrics that, while easily quantifiable, fail to capture the nuances of medical education. As Eric Gertler, U.S. News CEO and executive chair, aptly stated last year, "We know that comparing diverse academic institutions across a common data set is challenging." Yet their solution — a tier-based system — does little to address these challenges. Lumping 16 institutions into a single "tier 1" and 36 into a "tier 2" provides minimal practical guidance for students, essentially declaring all schools within a tier equal, which is demonstrably false. While every accredited program meets basic standards, the reality is that residency directors — the gatekeepers to specialized training — consider institutional reputation during the match process. By all accounts, matching is a more personal process than programs let on. For competitive specialties, matching at a specific program can boil down to whether someone at your home school knows the higher-ups there and will go to bat for you. Pretending that such hierarchies don't exist is not only naive but harmful to prospective students — especially those from non-traditional or first-generation backgrounds. The lack of granularity also hurts lesser-known institutions with unique strengths that might appeal to specific students.

On the other side of the equation are the elite medical schools. Announcing Harvard Medical School's withdrawal from the rankings, Dean George Daley explained, "My concerns … are more philosophical than methodological. … Rankings cannot meaningfully reflect the high aspirations for educational excellence, graduate preparedness, and compassionate and equitable patient care that we strive to foster in our medical education programs." Other school leaders appear to have fewer philosophical qualms and more practical ones: Dean Jameson of the University of Pennsylvania's Perelman School of Medicine said the ranking system is based on "self-reinforcing criteria such as reputation and institutional wealth."

While these are legitimate concerns, there's also a more cynical dynamic we should recognize. Elite medical schools gain a great deal from the absence of formal rankings: with no battle for #1, every school may lay claim to that title. Further, while Ivies like Harvard or Columbia can afford to forgo the exposure the rankings provide, lesser-known institutions rely on these lists to attract talented students and faculty, secure funding, and elevate their national profiles. By refusing to participate, these prestigious institutions are essentially pulling up the ladder behind them, perpetuating a cycle of inequity.

The proposed alternative from elite schools — directing students to check each school's website — also raises red flags. One need only look at the recent affirmative action admissions data debacle among undergraduate institutions to see that "transparency" can mean wildly different things at different institutions. Students have a practical need to build a tailored application list when applying to medical school. In my own cycle, based on advice from school counselors and online resources, I created a list of "target," "reach," and "safety" schools and applied to a certain number in each category. This would have been impossible without the publicly available data on MCAT scores, GPAs, and in-state versus out-of-state biases that the rankings included. With every additional application costing nearly $50, and the stakes so high, it is not right to obfuscate data and push students toward a shotgun approach. Schools do not have time to read thousands of applications closely; the obvious reality is that many use MCAT scores and GPAs as screening or sorting factors to evaluate students.

One reason schools give for directing students to their own websites over external rankings is that rankings can't truly capture the unique goals and educational experience each school offers. Yet it's undeniable that most medical schools share a similar set of goals (and website copy) — an emphasis on patient care, research, and learning. And a visit to any online student forum shows that medical students across the country rely on the same study tools and textbooks, regardless of their school's distinctive features. Still, that doesn't mean school choice doesn't matter. As a medical student at Harvard, I am incredibly grateful to my institution for aspects of my education that, per discussions with peers, are missing from other schools. Early clinical exposure, a dedicated site for longitudinal care during clerkships, health services and support, pass-fail grading in the first two years — these factors have been critical in my educational journey, yet they are neither captured in rankings nor readily comparable across school websites.

The question then arises: If neither the current rankings nor institutional self-reporting adequately serve students, what's the solution? The answer lies in a collaborative, student-centered approach that prioritizes transparency, outcomes, and meaningful metrics. I propose the following as a first step:

1) Embrace a Multi-Dimensional Approach: In the past, U.S. News separated rankings into "Primary Care" and "Research." This is directionally correct, and U.S. News should work with schools to break the rankings down into even more dimensions, recognizing that students enter medical school with different aspirations and goals for their careers. This would also allow schools to specialize and choose areas to prioritize.

2) Incorporate Student Voices: Surveys on student well-being, happiness, and satisfaction are simple, quantitative ways to capture many of the points about educational experience made above. Moreover, the tragic reality is that medical training is often a pressure cooker, and deteriorating student mental health and even suicides are a stark reminder of the human cost of this system. Yet quantitative data on student support services, attrition rates, or institutional culture is nearly impossible to find outside of hushed whispers on online forums. This lack of transparency, and the incentive to sweep such stories under the rug, perpetuates a culture of silence and prevents students from making informed choices about their well-being. Sharing this data is a first step toward taking the issue seriously.

3) Choose Input and Output Metrics That Matter: We need to move beyond simplistic metrics like government funding and incorporate factors students care about, like residency Match rates and alumni career outcomes. And while metrics like MCAT scores and GPAs are far from perfect, they need to remain available to students for practical purposes like building an application list. Students also want to be surrounded by capable peers they can learn from.

4) Incentivize Continued Growth: Develop a system of weighted metrics that rewards institutions for innovation in the areas schools claim to care about, like curriculum development, reducing health care disparities, and student support services. Focus on measurable outcomes, like the number of alumni practicing medicine in underserved areas, upward mobility for students, and faculty diversity.

5) Prioritize Transparency and Verification: All participating schools should publicly disclose the above information through a standardized platform, allowing for easy comparison and third-party verification of data.

This is just a starting point, but I believe such an approach would incentivize institutions to compete on factors that genuinely matter to students and the future of medicine, while fostering a culture of transparency and accountability. The medical school application process is already fraught with stress and uncertainty. Students deserve a system that empowers them with accurate, comprehensive, and readily accessible information.

Rankings reflect on the entire medical community and what we value as we work toward a more just and meritocratic society. The onus is on both U.S. News and medical institutions to move beyond self-preservation and prioritize the needs of those they claim to serve: future physicians. Ultimately, both schools and U.S. News need to put their qualms aside and come to the table. It's time to build a better system — a win-win-win for students, schools, and society at large.

Do you prefer a numbers-based, a tier-based, or an alternative ranking system? Share your answer in the comments!

Aditya Jain is a medical student at Harvard Medical School and a researcher at the Broad Institute studying applications of artificial intelligence in medicine. When he's not busy with school, he enjoys playing guitar, reading sci-fi, and hiking. He tweets @adityajain_42. Aditya was a 2023–2024 Doximity Op-Med Fellow and continues as a 2024–2025 Doximity Op-Med Fellow.

Illustration by Diana Connolly

