The sutures holding the U.S. health care system together are fraying. Wait times have ballooned. The cost of care is rising. Clinician burnout is palpable. Financial challenges are compounding. And fraud is on the rise. But most worrisome of all: patient trust in physicians and the health care system as a whole has eroded dramatically. One study from Massachusetts General Hospital showed trust declining from over 70% four years ago to just 40% in 2024. Our patients increasingly feel the weight of a slow, inefficient medical system that, for all its advancements, often feels inaccessible when they need it most.
As a medical student, this situation weighs heavily on me. Throughout medical school, I’ve balanced my training with a deep involvement in artificial intelligence (AI) research — attending conferences, publishing papers, engaging with young founders, and witnessing firsthand the audacious ambition that fuels the field. In AI, the Silicon Valley “move fast and break things” ethos remains a guiding paradigm, a stark contrast to the cautious precision that defines traditional medicine. It’s a world driven by a boisterous belief in technology’s ability to reshape, well, everything. And what I see coming next for health care isn’t just incremental change — it’s a potential upheaval, a disruption of a magnitude that I worry many in medicine are still struggling to grasp. To me, a new vision for the future is becoming terrifyingly clear: without a fast and fundamental course correction, the rise of AI-powered do-it-yourself (DIY) medicine is poised to render traditional health care obsolete. Because make no mistake: many in the Valley view the erosion of trust in traditional medicine not just as a problem, but as an opportunity. The engineers and venture capitalists see a fundamentally broken system and believe they have the tools to rebuild it from the ground up, with or without us.
The data paints a clear picture: DIY medicine is becoming attractive to patients. COVID-19 accelerated adoption, with 30%-50% of consumers now comfortable with at-home diagnostics. Quest Diagnostics estimates that direct-to-consumer (DTC) testing will be a $2 billion market by 2025, growing at 10% annually. And market research reaffirms increased global spending on over-the-counter (OTC) and DTC medical care, driven by consumer demand. Companies like BIOHM and Everlywell (currently valued at $3 billion) are thriving because they offer consumers control over their care — or at least the illusion of it.
Beyond diagnostics, the proliferation of DTC telehealth platforms, coupled with the ease of ordering prescription medications online, signals a fundamental shift in how patients perceive and access care. New companies have cropped up overnight to treat depression, ADHD, and other conditions, forgoing the traditional structure of primary care physician (PCP) referrals. My own specialty of interest, neurology, has seen an explosion of DTC neurotechnology devices and diagnostics with limited oversight. Social media platforms, for better or worse, are now awash in “how-to” guides for managing various ailments. It’s not difficult to understand why patients are turning to such tools. Given the aforementioned frustrations with the traditional medical system, many are willing to choose faster, cheaper, and more convenient care — whether or not it produces better outcomes. This groundswell of interest in self-directed care is one that Silicon Valley is poised to capture.
Simultaneously, the capabilities of AI are reaching an inflection point, one that those outside the field may not fully appreciate. The recent unveiling of OpenAI’s “o3” model sent shockwaves of excitement through the AI research community. The model showed complex, human-level reasoning abilities in many areas, with coding skills in the 99th percentile. The only limitation so far? High cost — a barrier that will not remain for long if Moore’s Law is any indication. Indeed, a recent model released by China’s DeepSeek suggests that once large models are trained, their intelligence can be distilled into cheaper models that can become even more capable, accelerating timelines.
This isn’t hype; these are fundamental leaps in what machines can understand and accomplish. Slowly, Silicon Valley is becoming more confident in applying these machines to medicine. Specialized medical AI models like “HuatuoGPT,” inspired by the respected ancient Chinese physician Hua Tuo, demonstrate the potential for AI to provide targeted diagnostic and treatment recommendations based on vast medical knowledge graphs. Imagine opening a Zoom call at any time of day with a virtual physician, who will spend as much time as you deem necessary to form a diagnosis. With such powerful technology soon reaching the hands of consumers, it becomes increasingly difficult to justify the added advantage of a traditional physician for routine care.
With this transformative potential on the horizon, the private sector is rushing to capitalize on the opportunity. Hippocratic AI — which recently achieved an over $1 billion valuation — has partnered with NVIDIA to provide $9/hour AI nurses and other AI agents to health care systems. Keyword: agents — they can, theoretically, act and make decisions on their own. Hippocratic is one of an increasingly large number of AI startups focused on health care, showcasing investor confidence in AI’s ability not just to assist, but to automate diagnostic functions. Biofourmis, another well-funded startup, focuses on AI-powered remote patient monitoring — building the infrastructure for a future where care is delivered outside the hospital setting and driven by algorithms.
There are real safety concerns with AI-powered DTC medical care and DIY medicine as a whole. To name a few: patients may receive inaccurate or incomplete information, or fail to seek a second opinion when one is warranted. Or they may misuse the devices they’ve ordered, causing greater health issues down the line. Yet I predict that despite these inherent risks, the pressure on our medical system will outpace regulators’ ability to keep hold. Consider the regulatory landscape: the FDA plays a critical role, but its pace is glacial compared to the speed of AI development. After a year of rapid advances in the field, the agency released its draft guidance only this January. Even then, such frameworks struggle to grapple with the dynamic, evolving nature of AI algorithms.
The regulatory arbitrage is further amplified by the fragmented nature of U.S. health care law. State medical boards and licensing requirements often lag years behind technological advancements, creating opportunities for DTC companies to operate in the spaces between regulations. This isn’t about malicious intent, necessarily, but it is about a fundamental difference in priorities: speed and scale versus safety and established protocols. For example, telehealth companies prescribing medications based on online questionnaires face varying levels of scrutiny under state medical practice acts, leading to inconsistencies and potential risks to patient safety. The inherent difficulty in enforcing regulations across state lines and the sheer volume of new AI-powered tools entering the market further strains the regulatory infrastructure. And the ambiguity allows companies to innovate in the gray areas, pushing the boundaries of what’s considered a “medical device.” Is a sophisticated AI a diagnostic tool requiring rigorous pre-market approval, or just an “informational resource”?
While we may worry about the downsides of DIY medicine, it’s hard to throw stones from within a glass house. I recently attempted to schedule a PCP visit for my mother and father, two people who have not seen a clinician in years. Their disillusionment with medicine meant it took significant effort on my part to persuade them to seek care. And yet, after calling a half-dozen practices in my parents’ network, the answer was uniformly disheartening: no new patients accepted for at least three months, and in some cases, not until next year. The status quo is unsustainable. Simply erecting regulatory barriers in an attempt to stifle innovation is a losing battle. The pressure exerted by patient demand and technological advancement will likely outpace the ability — and desire — of regulators to effectively contain the growth of AI-powered DTC health care. As if to prove my point, a bill has been introduced in the House to grant AI systems prescription authority — a policy leap that arrived not in years, but within the span of composing these paragraphs.
Instead of resisting, I believe the medical community must engage: both with Silicon Valley, and by advocating for systemic changes within the health care system. The engagement is mutually beneficial — the experience and perspective of clinicians remain invaluable to the Valley, and clinicians can better shape this emerging future from a position at the table. We should embrace, integrate, and learn from beneficial AI tools in our education, training, and practice. Education is key: physicians should gain a practical, working knowledge of AI’s training process, capabilities, and limitations, whether in medical school or through continuing education credits. To conquer apprehensions and fears, clinicians must actively participate in AI development — by providing feedback, advising companies, and running quality assurance pilots at hospitals and within delivery systems. Such proactive involvement also reminds technology companies that we are not inaccessible, hiding in an ivory tower, but ready to work as partners and allies, leveraging new innovations to improve the health care ecosystem.
Above all, the very conditions that have fueled the rise of DIY medicine — high costs, long wait times, and bureaucratic hurdles — must be addressed. The medical community must take on a greater advocate role, pushing for reforms that improve access, affordability, and the overall patient experience. We must rebuild trust and the value of the doctor-patient relationship.
The question is not whether DTC medicine will transform health care, but how. We have a choice: to be swept aside by the tide of technological change, relegated to the role of highly skilled technicians performing procedures dictated by algorithms, or to actively shape the future of medicine, ensuring that technological advancements serve to enhance, rather than replace, the fundamental principles of patient-centered care. The sutures of U.S. health care may be fraying, but with foresight, adaptability, and a willingness to embrace change, we can re-operate and lay down new seams, building a better, more innovative health care system that holds strong for generations to come.
Aditya Jain is a medical student at Harvard Medical School. He also is a researcher at the Broad on the applications of artificial intelligence in medicine. When he's not busy with school, he enjoys playing guitar, reading sci-fi, and hiking. He tweets @adityajain_42. Aditya was a 2023–2024 Doximity Op-Med Fellow, and continues as a 2024–2025 Doximity Op-Med Fellow.
Illustration by Diana Connolly