
Publications Should Not Be Keys to Specialty Admission


Recently, I gathered with members of my residency program to celebrate the academic achievements of a newly appointed endowed chair in otolaryngology. The night began with an inspiring account of the donor's legacy: a private-practice ENT who had served as both a volunteer instructor in otolaryngology and, later, professor emeritus. Two of his students, both now well-respected senior attendings and chairs in their own right, recalled his teaching and his dedication to patient care. The night concluded with an equally inspiring celebration of the honoree, centered on her impressive research resume and vision. Her passion for finding clinical truth was evident. I watched and thought, "This is who should be doing research."

Don't get me wrong: every physician should be able to read and understand research. Medicine is an ever-evolving discipline in which the right answers are never 100% certain. Our practice is, at its best, evidence-based, a formal way of saying that each physician relies on generations of accumulated knowledge. Our predecessors have studied and documented which symptoms correlate with particular diagnoses and which treatments are effective for specific diseases. The best physicians use this information to inform their recommendations. For this reason, scientific research and clinical medicine will always go hand in hand. If physicians want to be researchers, then they must publish research that truly adds something worth passing on to future generations of physicians and scientists.

And yet, it seems that publication has become less about communicating generational clinical wisdom and more about an insatiable desire to have one's name appear on PubMed. Though some academic departments have moved away from research productivity as a metric for promotion, residency and fellowship programs continue to use publications to grant or deny admission into a specialty. This is reflected in a skyrocketing number of medical student publications.

According to NRMP's Charting the Outcomes of the Match for 2020, 2022, and 2024, the average number of publications for matched U.S. MD seniors in highly competitive specialties like dermatology, neurosurgery, orthopaedic surgery, and plastic surgery has risen consistently. In 2024, matched residents in dermatology averaged 27.7 publications, abstracts, or presentations, compared to 19.0 for unmatched applicants. The other specialties mentioned show similarly high average productivity, with matched neurosurgery and plastic surgery applicants leading the pack at 37.4 and 34.7 publications, abstracts, or presentations per student, respectively.

Other specialties are not immune to the glut of reported research activity. Successful internal medicine and general surgery applicants tend to report more activity than their unsuccessful counterparts, and from 2020 to 2024, the average number of publications, abstracts, and presentations increased with each report. In the 2024 NRMP match data, pediatrics was the only specialty in which matched applicants reported a lower mean number of publications than unmatched applicants.

Charting the Outcomes bases its report on the applications it receives. Several groups have attempted to reconcile these self-reported numbers with the actual publications of matched applicants indexed on PubMed. Wadhwa et al. noted that the number of true publications for neurosurgery applicants in 2018 was significantly lower than the number listed in Charting the Outcomes; Adeyeri et al. found the same for 2020 orthopaedic surgery applicants. This indicates either that applicants are reporting a very high number of posters and abstracts rather than publications, or that they are reporting many projects submitted for publication but ultimately rejected and abandoned. In either case, students appear to be exaggerating their research productivity in a way that seems unlikely to reflect their success in residency.

There are many ways to define resident success. Training is a complex and multifaceted process that requires a combination of interpersonal skills, clinical intelligence, diligence, endurance, and critical thinking. Research is one way to demonstrate these qualities, and many programs aspire to lead the nation in grants and research funding or to advance their fields through innovation and discovery.

Donley et al. and Gutowski et al. demonstrated that a higher number of first-author publications in medical school is associated with a higher number of first-author publications in residency, so there may be merit in selecting applicants with more reported publications to boost a program's research productivity. However, Wang et al. noted that higher publication counts were associated with a lower average number of citations, highlighting that increasing the quantity of publications does not necessarily advance a department's reputation or its goals for quality research.

The question then becomes whether there are alternative ways to measure and predict clinical success for medical students, residents, and attending physicians, and whether research productivity should serve as the gatekeeper for competitive residencies and fellowships. Our current model incentivizes quick projects and undervalues more comprehensive research and basic science, which tend to take more time. It also frequently pushes students at lower-resourced institutions to take additional research years to bolster their output, lengthening an already long and expensive pathway to a paying career.

One alternative would be to limit applicants to describing 3-5 impactful projects, which could include research projects, publications, leadership initiatives, service work, or advocacy efforts. Programs could also ask applicants to report the average impact factor of their publications, rewarding a few papers in higher-impact journals over a large number in low-impact ones.

Another option is a standardized scoring system for evaluating applicants that includes research as one category, with equal weight given to leadership, service, and humanism. Finally, we could make scientific thinking itself the requirement, asking applicants to critically review a study or to list clinical scenarios and questions that piqued their investigative curiosity, rather than demanding high numbers of completed projects.

Brilliant physician scientists do exist. I saw one receive an endowed chair just a few weeks ago. But like the chair's namesake, most physicians go on to practice clinical medicine outside of an academic center. When research productivity and publication numbers become the gatekeeper for clinical training, we risk both incorrectly assessing applicants and incentivizing less rigorous research. A better model would recognize research as one of many ways to demonstrate intellectual rigor, while valuing equally the leadership, service, and humanism that sustain our profession.

What has been your experience with academic and practicing medicine? Share in the comments.

Dr. Allison Oliva is an otolaryngology resident in Miami, FL. When she's not at work, she enjoys walking along the water with her fiancé, drinking coffee, reading, and watching Miami Hurricanes football. Dr. Oliva was a 2024–2025 Doximity Op-Med Fellow.

Illustration by April Brust


