“You are doing great, keep doing what you are doing and you will excel at this rotation.”
It’s a comment I hear repeatedly when asking for weekly feedback while on rotation. A copy-pasted comment that I am sure has been given time and time again to medical students, but nevertheless, each time I hear it, I leave the hospital with a smile, thinking, This is the block in which I will finally earn that academic enigma, the “honors” course grade. After all, I did what I needed to do. I worked hard on my patient presentations, came up with assessments and plans for each patient, studied up on my patients while at home, actively showed interest and participated during rounds, and repeatedly sought out feedback from my attendings. Surely this effort will come through in my evaluations … right?
I’m midway through third year and realizing now that this thought pattern has become a predictable cycle with each rotation. Each time, I click on my clinical evaluation link and see, once again, that I’ve fallen short of the “honors” score I strived so hard to achieve. In the comment sections, I read that I am “bright and compassionate,” that I show “strong promise,” and that I will become a “fine physician,” only to find my evaluations littered with threes and fours out of five. The only constructive feedback I am given: “Continue to read and build up knowledge.”
Speaking with my classmates, I know many of us are not strangers to this cycle of anxiety, hope, and disappointment throughout third-year clerkships. This has been one of the most transformative and rewarding years of my life, reminding me why I decided to pursue medicine. I am sure most of us don’t miss the preclinical years, filled with the mess of studying pathology slides and biochemical pathways, as we now have the privilege of participating in clinical health care that impacts real people.
Nevertheless, if there’s one thing that I undeniably miss about preclinicals, it’s the objectivity and transparency of grading. I felt much more in control back then, knowing that my grades directly reflected my work ethic, and that any shortcomings could be clearly visualized. I hated memorizing the Krebs cycle, but if I bombed the biochemistry test the next day, at least I would have no one else to blame.
Unfortunately, this kind of objectivity and transparency does not transfer to third-year rotations. Understandably, the grading format contains subjectivity, as it relies on an individual physician’s perception of a student whom they barely have enough time to properly evaluate. Nevertheless, the lack of standardized assessment tools, even across physicians from the same institution, leads to incredible variation in student grading, with minimal specific feedback. Even now, for many of us, the grading process seems like a perplexing black box, mysteriously pumping out grades without much rationale.
This is a well-recognized issue, as numerous studies have examined clerkship grading methods and found them to be biased, extremely variable across different schools, and widely perceived as unfair and inaccurate. One study found only 38% of students agreed that grading was fair, and another found extreme variation in the proportion of students attaining honors per institution, ranging from 2% to 93% of students.
The same level of objectivity as in the first two years of medical school is unfeasible in the third and fourth years, but the heavy degree to which residency programs rely on clerkship evaluations to select and distinguish candidates raises concerns. This issue will only continue to grow with the change of Step 1 to pass/fail, which removes perhaps the biggest objective measure previously used to stratify residency candidates and thus fosters further reliance on core clerkship grades.
There have been attempts by different institutions to adapt their clerkship grading scales to be more focused on student learning and less on formal grades. One institution moved toward pass/fail grading with a focus on quality feedback, a change that led to significant improvements in student perceptions of clerkship grading and mastery of the clerkship. Another institution proposed moving toward a narrative evaluation system centered on narrative description as opposed to numerical scores, with the aim of helping students develop necessary skills rather than striving for arbitrary benchmarks. Yet the likelihood that these changes will spread to all schools in the near future is small.
Based on my experiences, I propose two changes I believe could be effective and relatively simple.
My first suggestion is to offer better explanations of the evaluation and scoring process to faculty members. In one article, a medical student found there was a disconnect between numerical evaluation scores and letter grades, where some physicians were shocked to hear that they had given scores they believed indicated excellence but were actually closer to a B-plus. It’s understandable why physicians may refrain from giving a five out of five on evaluations, as this insinuates perfection in a medical student, whose inherent role is to seek improvement. Still, explaining to faculty what each numerical score converts to could lead to more thoughtful grading.
My second suggestion is to require more rationale in grading: an explanation of why a student received a particular score, along with concrete ways to improve for the next rotation. Most of the comments I have seen, even those that appear constructive, tend to be generic and do not provide tangible ways to improve.
What has been your experience with grading in medical school? Discuss in the comment section.
Star Chen is a third-year medical student at Lewis Katz School of Medicine at Temple University in Philadelphia, PA.