Imagine a town where every household is required to buy fire insurance. When a blaze breaks out, neighbors rush in with buckets, shoulder to shoulder with firefighters, desperate to save what they can. When the flames die, the insurance company surveys the ashes, studying which homes burned and which survived. The company uses those data to build an algorithm that can predict where the next fire will strike. But instead of sharing it to make the town safer, they keep it locked away, charging a fortune for access and using it to raise rates on the townspeople. What began as protection quietly becomes profit. Medicine may be starting to look like that town.
I grew up in the Appalachian Mountains of western North Carolina, in a small town overlooked by the high-purity quartz mine that produces the raw materials needed to manufacture the microprocessors powering the artificial intelligence revolution. For the families I knew, health care was often a trade-off. A well-child check might mean unpaid time off work for a parent; a serious illness often meant driving three or four hours to the nearest children’s hospital. And whether the visit was routine or unplanned, it came with co-pays, co-insurance, and, for many, the silent dread of the bill that would follow. Americans have long paid dearly for the privilege of health care; now, those encounters also carry a largely unrecognized donation: the data that help someone else build an algorithm.
Today, generative AI systems are being trained on vast troves of health data, including electronic records, lab results, and clinical notes from millions of patients. These models can forecast disease across thousands of conditions, simulate an individual’s health trajectory, and even generate synthetic patient histories to train clinicians or other algorithms. Yet few people recognize that their data live on long after a health care encounter ends, repurposed to serve both academic research and commercial innovation. As a bedside intensivist, I’ve watched those data flow from monitors and lab reports into databases where they become currency. As a clinical informatician, I’ve designed workflows optimized to generate high-quality data. As a research informatician, I routinely tap into enormous health data repositories for new insights. And as a former worker in that quartz mine back in North Carolina, I can appreciate how repurposing patient data for commercial gain can look less like innovation and more like extraction.
The moral stakes rise substantially when the repurposed data come from children. Pediatric records are uniquely enduring, tracing growth, illness, and recovery across years, even decades. In the ICU, I care for children whose vital signs stream into systems they may never know exist; the oxygen saturations and heart rates flashing across monitors today may be woven into the algorithms that guide their care as adults. Children cannot consent, and parents often sign forms without realizing how their child’s medical history might be used to shape tools sold by commercial entities. The unsettling irony is that the same data generated in moments of fear and fragility now help build models that may one day assess, insure, or deny them, ultimately turning the vulnerability of childhood into the raw material for profit.
If medicine is the town described earlier, then there will always be fires burning. We will always need new strategies to address disease, uncertainty, and the relentless suffering that comes with mortal existence. The algorithms being built today will undoubtedly prove useful for containing the flames, but only if guardrails ensure that public good, not profit, is the priority. That means transparency about how patient data are used and by whom, consent that evolves with a child throughout life, and firm limits on the commercialization of pediatric data.
American historian Melvin Kranzberg famously formulated six laws of technology, the first of which is “technology is neither good nor bad; nor is it neutral.” As someone who works at the intersection of care and code, I am far more excited than worried about the promise of our new era of AI. But that doesn’t mean I’m naive about the very real threats in this space. The challenge is to ensure that when the next fire comes, the tools we’ve built to fight it don’t burn us and our children, too.
Dr. Chris Horvat is a pediatric intensivist, clinical informatician, and learning health systems researcher in Pittsburgh, PA. His path to medicine began as a contractor in the high-purity quartz mines of western North Carolina, likely making him the only informatician who once helped extract the raw materials powering today’s digital workflows in health care. Dr. Horvat is a 2025–2026 Doximity Op-Med Fellow.
Image by rob dobi / Getty Images