On a typical clinic day, quality improvement (QI) doesn’t show up as a separate “project.” It’s the five minutes I spend reconciling medications. It’s the checkbox I built into my workflow so an overdue vaccine isn’t missed. It’s the phone call to make sure a fragile patient actually gets a follow‑up appointment instead of disappearing into a scheduling black hole. In real life, QI is woven into patient care and it’s rarely neat.
I hold five board certifications. Three of them (internal medicine and rheumatology through the American Board of Internal Medicine, or ABIM, and pediatrics through the American Board of Pediatrics, or ABP) require ongoing maintenance of certification (MOC). That means multiple portals, multiple deadlines, and multiple definitions of what “good doctoring” is supposed to look like on paper. The farther I get into these cycles, the more obvious it becomes: much of what we call MOC is not aligned with the way medicine is actually practiced.
Start with ABIM. In 2024, ABIM governance acknowledged what many of us have been saying for years: the old “Part 4” practice‑improvement requirement took too much time and “simply wasn’t adding value.” ABIM chose to recognize participation but not require it as a condition of maintaining certification. That was an implicit admission that a mandatory, board‑defined QI project is a poor proxy for real improvement work. But it also highlights something uncomfortable: even without Part 4, the broader MOC process can still feel like a subscription model that physicians can’t practically opt out of.
ABIM’s fee structure makes the point: $220 per year for the first certificate and $120 for each additional one, with access to the Longitudinal Knowledge Assessment included. ABIM also says your certification status won’t change due to nonpayment, but nonpayment limits access to MOC services. The issue isn’t one fee. It’s the layering: pay, log in, track points, finish assessments — on top of CME, payer reporting, compliance training, and the usual documentation grind.
Now look at pediatrics. Unlike ABIM, ABP continues to require Part 4 (“Improve Professional Practice”) points during the MOC Assessment cycle. Many physicians meet this through health‑system portfolio sponsors, which can be supportive — but it still adds another layer of attestations and deadlines. A commonly described benchmark is 40 Part 4 points per five‑year cycle.
Consider this scenario: It is 8:30 p.m. Last letter of medical necessity filed. Clinic notes still open. On the task list: a Part 4 Plan-Do-Study-Act project. I open the ABP activity catalog. The highest‑value option is 60 (!!!) points, a simulation module. The documentation notes, carefully, what “would” be true “if this were an actual performance improvement module.” It isn’t. It’s a walkthrough applied to patients who don’t exist.
And then there’s the part that really makes me pause: ABP offers “Virtual Quality Improvement with Simulated Data” modules that earn Part 4 credit using mock data. I understand the intent — make something accessible and low‑barrier. But if simulated data can satisfy a requirement that is supposed to reflect performance “in practice,” what exactly are we measuring? Not outcomes. Not system change. We’re measuring completion.
This is why Part 4 so often feels like compliance theater. If the goal is genuine improvement, the gold standard is the messy, real work already happening in clinics and hospitals: preventing medication errors, closing follow‑up gaps, improving infusion safety, reducing diagnostic delays, improving transitions of care. That work depends on teams and data systems — not on an individual physician filling out a module after clinic.
I’m not arguing against accountability. I’m arguing for the right target. Boards are uniquely positioned to assess medical knowledge and clinical judgment. They are not uniquely positioned to run a parallel QI bureaucracy that competes with the QI structures already embedded in health care delivery, regulation, and reimbursement.
A national cross‑specialty survey in Mayo Clinic Proceedings found only 15% of physicians felt MOC activities were worth the time, and 81% described MOC as a burden. The American Board of Medical Specialties' Vision Commission report described similarly low perceived value. Those numbers don’t prove MOC is useless. But they do tell you what the day‑to‑day experience feels like for a lot of physicians: high friction, unclear payoff. When that’s the case, boards should have to show their work.
So what would a more reality‑based approach look like? First, stop using simulated‑data exercises as a stand‑in for “performance in practice.” Keep them as education if they’re helpful, but don’t let them serve as the gatekeeper.
Second, auto‑credit verified improvement work already required elsewhere: institutional safety programs, registry participation, peer review, payer‑required initiatives. Make the default “yes,” not “prove it twice.”
Third, reduce the financial and administrative layering. If the boards’ core product is assessment, then build a system that feels like learning, not toll booths: fewer redundant attestations, fewer parallel requirements, and clearer accounting of what fees fund. ABIM’s retreat from mandatory Part 4 shows change is possible. It should not stop there, and pediatrics should not remain an outlier.
I don’t need simulated QI to care better for patients. I need time, functioning systems, and a certification process that respects the reality of clinical practice — not one that turns improvement into paperwork and professionalism into a line item.
Olga Goodman, MD, is a practicing rheumatologist in Illinois with five board certifications, including internal medicine, rheumatology, and pediatrics. She writes about professional self-regulation, physician time burdens, and real-world quality improvement.
Animation by Jennifer Bogartz