Talk Is Cheap: Now Trump Must Deliver On His Healthcare Promises

https://www.forbes.com/sites/robertpearl/2025/06/09/talk-is-cheap-now-trump-must-deliver-on-his-healthcare-promises/

President Donald Trump has made big promises about fixing American healthcare. Now comes the moment that separates talk from action.

With the 2026 midterms fast approaching and congressional attention soon shifting to electoral strategy, the window for legislative results is closing quickly. This summer will determine whether the administration turns promises into policy or lets the opportunity slip away.

Trump and his handpicked healthcare leaders — HHS Secretary Robert F. Kennedy Jr. and FDA Commissioner Dr. Marty Makary — have identified three major priorities: lowering drug prices, reversing chronic disease and unleashing generative AI. Each one, if achieved, would save tens of thousands of lives and reduce costs.

But promises are easy. Real change requires political will and congressional action. Here are three tests that Americans can use to gauge whether the Trump administration succeeds or fails in delivering on its healthcare agenda.

Test No. 1: Have Drug Prices Come Down?

Americans pay two to four times more for prescription drugs than citizens in other wealthy nations. This price gap has persisted for more than 20 years and continues to widen as pharmaceutical companies launch new medications with average list prices exceeding $370,000 per year.

One key reason for the disparity is a 2003 law that prohibits Medicare from negotiating prices directly with drug manufacturers. Although the Inflation Reduction Act of 2022 granted limited negotiation rights, the initial round of price reductions did little to close the gap with other high-income nations.

President Trump has repeatedly promised to change that. In his first term, and again in May 2025, he condemned foreign “free riders,” promising, “The United States will no longer subsidize the healthcare of foreign countries and will no longer tolerate profiteering and price gouging.”

To support these commitments, the president signed an executive order titled “Delivering Most-Favored-Nation (MFN) Prescription Drug Pricing to American Patients.” The order directs HHS to develop and communicate MFN price targets to pharmaceutical manufacturers, with the hope that they will voluntarily align U.S. drug prices with those in other developed nations. Should manufacturers fail to make significant progress toward these targets, the administration said it plans to pursue additional measures, such as facilitating drug importation and imposing tariffs. However, implementing these measures will most likely require congressional legislation and will encounter substantial legal and political challenges.

The pharmaceutical industry knows that without congressional action, there is no way for the president to force them to lower prices. And they are likely to continue to appeal to Americans by arguing that lower prices will restrict innovation and lifesaving drug development.

But the truth about drug “innovation” is in the numbers: According to a study by America’s Health Insurance Plans, seven out of 10 of the largest pharmaceutical companies spend more on sales and marketing than on research and development. And if drugmakers want to invest more in R&D, they can start by requiring peer nations to pay their fair share — rather than depending so heavily on U.S. patients to foot the bill.

If Congress fails to act, the president has other tools at his disposal. One effective step would be for the FDA to redefine “drug shortages” to include medications priced beyond the reach of most Americans. That change would enable compounding pharmacies to produce lower-cost alternatives just as they did recently with GLP-1 weight-loss injections.

If no action is taken, however, and Americans continue paying more than twice as much as citizens in other wealthy nations, the administration will fail this crucial test.

Test No. 2: Have Food Quality And Health Improved?

Obesity has become a leading health threat in the United States, surpassing smoking and opioid addiction as a cause of death.

Since 1980, adult obesity rates have surged from 15% to over 40%, contributing significantly to chronic diseases, including type 2 diabetes, heart disease and multiple types of cancers.

A major driver of this epidemic is the widespread consumption of ultra-processed foods: products high in added sugar, unhealthy fats and artificial additives. These foods are engineered to be hyper-palatable and calorie-dense, promoting overconsumption and, in some cases, addictive eating behaviors.

RFK Jr. has publicly condemned artificial additives as “poison” and spotlighted their impact on children’s health. In May 2025, he led the release of the White House’s Make America Healthy Again (MAHA) report, which identifies ultra-processed foods, chemical exposures, lack of exercise and excessive prescription drug use as primary contributors to America’s chronic disease epidemic.

But while the report raises valid concerns, it has yet to produce concrete reforms.

To move from rhetoric to results, the administration will need to implement tangible policies.

Here are three approaches (from least difficult to most) that, if enacted, would signify meaningful progress:

  • Front-of-package labeling. Implement clear and aggressive labeling to inform consumers about the nutritional content of food products, using symbols to indicate healthy versus unhealthy options.
  • Taxation and subsidization. Impose taxes on unhealthy food items and use the revenue to subsidize healthier food options, especially for socio-economically disadvantaged populations.
  • Regulation of food composition. Restrict the use of harmful additives and limit the total amounts of fat and sugar, particularly in foods aimed at kids.

These measures will undoubtedly face fierce opposition from the food and agriculture industries. But if the Trump administration and Congress manage to enact even one of these options — or an equivalent reform — they can claim success.

If, instead, they preserve the status quo, leaving Americans to decipher nutritional fine print on the back of the box, obesity will continue to rise, and the administration will have failed.

Test No. 3: Are Patients Using Generative AI To Improve Health?

The Trump administration has signaled a strong commitment to using generative AI across various industries, including healthcare. At the AI Action Summit in Paris, Vice President JD Vance made the administration’s agenda clear: “I’m not here this morning to talk about AI safety … I’m here to talk about AI opportunity.”

FDA Commissioner Dr. Marty Makary has echoed that message with internal action. After an AI-assisted scientific review pilot program, he announced plans to integrate generative AI across all FDA centers by June 30.

But internal efficiency alone won’t improve the nation’s health. The real test is whether the administration will help develop and approve GenAI tools that expand clinical access, improve outcomes and reduce costs.

To these ends, generative AI holds enormous promise:

  • Managing chronic disease: By analyzing real-time data from wearables, GenAI can empower patients to better control their blood pressure, blood sugar and heart failure. Instead of waiting months between doctor visits for a checkup, patients could receive personalized analyses of their data, recommendations for medication adjustments and warnings about potential risks in real time.
  • Improving diagnoses: AI can identify clinical patterns missed by humans, reducing the 400,000 deaths each year caused by misdiagnoses.
  • Personalizing treatment: Using patient history and genetics, GenAI can help physicians tailor care to individual needs, improving outcomes and reducing side effects.

These breakthroughs aren’t theoretical. They’re achievable. But they won’t happen unless federal leaders facilitate broad adoption.

That will require investing in innovation: the NIH must fund next-generation GenAI tools designed for patient empowerment, and the FDA will need to facilitate approval for broad implementation. It will also require modernizing current regulations. The FDA’s approval process wasn’t built for probabilistic AI models that rely on continuous training and respond to patient-provided prompts. Americans need a new, fit-for-purpose framework that protects patients without paralyzing progress.

Most important, federal leaders must abandon the illusion of zero risk. If American healthcare were delivering superior clinical outcomes, managing chronic disease effectively and keeping patients safe, that would be one thing. But medical care in the United States is far from that reality. Hundreds of thousands of Americans die annually from poorly controlled chronic diseases, medical errors and misdiagnoses.

If generative AI technology remains confined to billing support and back-office automation, the opportunity to transform American healthcare will be lost. And the administration will have failed to deliver on this promise.

When I teach strategy at Stanford’s Graduate School of Business, I tell students that the best leaders focus on a few high-priority goals with clear definitions of success — and a refusal to accept failure. Based on the administration’s own words, these three healthcare tests meet those criteria.

However, with Labor Day just months away, the window for action will soon close. The time for presidential action is now.

AI in medicine: 3 easy questions to separate hype from reality

https://www.linkedin.com/pulse/ai-medicine-3-easy-questions-separate-hype-from-robert-pearl-m-d–ctznc/

Artificial intelligence has long been heralded as a transformative force in medicine. Yet, until recently, its potential has remained largely unfulfilled.

Consider the story of MYCIN, a “rule-based” AI system developed in the 1970s at Stanford University to help diagnose infections and recommend antibiotics. Though MYCIN showed early promise, it relied on rigid, predetermined rules and lacked the flexibility to handle unexpected or complex cases that arise in real-world medicine. Ultimately, the technology of the time couldn’t match the nuanced judgment of skilled clinicians, and MYCIN never achieved widespread clinical use.

Fast forward to 2011, when IBM’s Watson gained global fame by besting renowned Jeopardy! champions Ken Jennings and Brad Rutter. Soon after, IBM applied Watson’s vast computing power to healthcare, envisioning it as a game-changer in oncology. Tasked with synthesizing data from medical literature and patient records at Memorial Sloan Kettering, Watson aimed to recommend tailored cancer treatments.

However, the AI struggled to provide reliable, relevant recommendations—not because of any computational shortcoming but due to inconsistent, often incomplete, data sources. These included imprecise electronic health record entries and research articles that leaned too heavily toward favorable conclusions, failing to hold up in real-world clinical settings. IBM shut down the project in 2020.

Today, healthcare and tech leaders question whether the latest wave of AI tools—including much-heralded generative artificial intelligence models—will deliver on their promise in medicine or become footnotes in history like MYCIN and Watson.

Anthropic CEO Dario Amodei is among the AI optimists. Last month, in a sprawling 15,000-word essay, he predicted that AI would soon reshape humanity’s future. He claimed that by 2026, AI tools (presumably including Anthropic’s Claude) will become “smarter than a Nobel Prize winner.”

Specific to human health, Amodei touted AI’s ability to eliminate infectious diseases, prevent genetic disorders and double life expectancy to 150 years—all within the next decade.

While I admire parts of Amodei’s vision, my technological and medical background makes me question some of his most ambitious predictions.

When people ask me how to separate AI hype from reality in medicine, I suggest starting with three critical questions:

Question 1: Will the AI solution speed up a process or task that humans could eventually complete on their own?

Sometimes, scientists have the knowledge and expertise to solve complex medical problems but are limited by time and cost. In these situations, AI tools can deliver remarkable breakthroughs.

Consider AlphaFold2, a system developed by Google DeepMind to predict how proteins fold into their three-dimensional structures. For decades, researchers struggled to map these large, intricate molecules, with the exact shape of each protein requiring years and millions of dollars to decipher. Yet, understanding these structures is invaluable, as they reveal how proteins function, interact and contribute to diseases.

With deep learning and massive datasets, AlphaFold2 accomplished in days what would have taken labs decades, predicting hundreds of proteins’ structures. Within four years, it mapped all known proteins—a feat that won DeepMind researchers a Nobel Prize in Chemistry and is now accelerating drug discovery and medical research.

Another example is a collaborative project between the University of Pittsburgh and Carnegie Mellon, where AI analyzed electronic health records to identify adverse drug interactions. Traditionally, this process took months of manual review to uncover just a few risks. With AI, researchers were able to examine thousands of medications in days, drastically improving speed and accuracy.

These achievements show that when science has a clear path but lacks the speed, tools and scale for execution, AI can bridge the gap. In fact, if today’s generative AI technology existed in the 1990s, ChatGPT estimates it could have sequenced the entire human genome in less than a year—a project that originally took 13 years and $2.7 billion.

Applying this criterion to Amodei’s assertion that AI will soon eliminate most infectious diseases, I believe this goal is realistic. Today’s AI technology already analyzes vast amounts of data on drug efficacy and side effects, discovering new uses for existing medications. AI is also proving effective in guiding the development of new drugs and may help address the growing issue of antibiotic resistance. I agree with Amodei that AI will be able to accomplish in a few years what otherwise would have taken scientists decades, offering fresh hope in the fight against human pathogens.

Question 2: Does the complexity of human genetics make the problem unsolvable, no matter how smart the technology?

Imagine searching for a needle in a giant haystack. When a single answer is hidden within mountains of data, AI can find it much faster than humans alone. But if that “needle” is metallic dust, scattered across multiple haystacks, the challenge becomes insurmountable, even for AI.

This analogy captures why certain medical problems remain beyond AI’s reach. In his essay, Amodei predicts that generative AI will eliminate most genetic disorders, cure cancer and prevent Alzheimer’s within a decade.

While AI will undoubtedly deepen our understanding of the human genome, many of the diseases Amodei highlights as curable are “multifactorial,” meaning they result from the combined impact of dozens of genes, plus environmental and lifestyle factors. To better understand why this complexity limits AI’s reach, let’s first examine simpler, single-gene disorders, where the potential for AI-driven treatment is more promising.

For certain genetic disorders, like BRCA-linked cancers or sickle cell disease that result from a single-gene abnormality, AI can play a valuable role by helping researchers identify and potentially use CRISPR, an advanced gene-editing tool, to directly edit these mutations to reduce disease risk.

Yet even with single-gene conditions, treatment is complex. CRISPR-based therapies for sickle cell, for example, require harvesting stem cells, editing them in a lab and reinfusing them after risky conditioning treatments that pose significant health threats to patients.

Knowing this, it’s evident that the complications would only multiply when editing multifactorial congenital diseases like cleft lip and palate—or complex diseases that manifest later in life, including cardiovascular disease and cancer.

Put simply, editing dozens of genes simultaneously would introduce severe threats to health, most likely exceeding the benefits. Whereas generative AI’s capabilities are accelerating at an exponential rate, gene-editing technologies like CRISPR face strict limitations in human biology. Our bodies have intricate, interdependent functions. This means correcting multiple genetic issues in tandem would disrupt essential biological functions in unpredictable, probably fatal ways.

No matter how advanced an AI tool may become in identifying genetic patterns, inherent biological constraints mean that multifactorial diseases will remain unsolvable. In this respect, Amodei’s prediction about curing genetic diseases will prove only partially correct.

Question 3: Will the AI’s success depend on people changing their behaviors?

One of the greatest challenges for AI applications in medicine isn’t technological but psychological: it’s about navigating human behavior and our tendency toward illogical or biased decisions. While we might assume that people will do everything they can to prolong their lives, human emotions and habits tell a different story.

Consider the management of chronic diseases like hypertension and diabetes. In this battle, technology can be a strong ally. Advanced home monitoring and wearable devices currently track blood pressure, glucose and oxygen levels with impressive accuracy. Soon, AI systems will analyze these readings, recommend diet and exercise adjustments and alert patients and clinicians when medication changes are needed.

But even the most sophisticated AI tools can’t force patients to reliably follow medical advice—or ensure that doctors will respond to every alert.

Humans are flawed, forgetful and fallible. Patients skip doses, ignore dietary recommendations and abandon exercise goals. On the clinician side, busy schedules, burnout and competing priorities often lead to missed opportunities for timely interventions. These behavioral factors add layers of unpredictability and unresponsiveness that even the most accurate AI systems cannot overcome.

And in addition to behavioral challenges, there are biological limits on the human lifespan. As we grow older, the protective caps on our chromosomes, called telomeres, wear down, causing cells to stop functioning. Our cells’ energy sources, the mitochondria, gradually fail, weakening our bodies until vital organs cease to function. Short of replacing every cell and tissue in our bodies, our organs will eventually give out. And even if generative AI could tell us exactly what we needed to do to prevent these failings, it is unlikely people would consistently follow the recommendations.

For these reasons, Amodei’s boldest prediction—that longevity will double to 150 years within a decade—won’t happen. AI offers remarkable tools and intelligence. It will expand our knowledge far beyond anything we can imagine today. But ultimately, it cannot override the natural and complex limitations of human life: aging parts and illogical behaviors.

In the end, you should embrace AI promises when they build on scientific research. But when they violate biological or psychological principles, don’t believe the hype.

The lifesaving potential of OpenAI’s GPT-4o update

https://www.linkedin.com/pulse/lifesaving-potential-openais-gpt-4o-update-robert-pearl-m-d–ngrmc/

Generative AI tools have made remarkable strides in medicine since the launch of ChatGPT in late 2022. Research has shown that AI, with expert clinician oversight, can significantly enhance diagnostic accuracy, treatment recommendations, and patient monitoring and analysis.

And yet, despite its impressive capabilities and buzz, generative AI is still in the early stages of adoption—both in U.S. healthcare and society.

While almost everyone has heard of genAI, less than a quarter of Americans use it regularly in their personal or professional lives. OpenAI’s newest update, GPT-4o, aims to change that.

In demos released during its spring update, OpenAI showed users engaged in natural, human-like conversations with GPT-4o. The AI interacted with people on their smartphones across video, audio and text, offering real-time spoken responses that sounded eerily human.

In those demos, the AI’s instant answers and friendly voice closely mimic the pace and inflection of normal dialogue. Not coincidentally, GPT-4o’s voice sounded remarkably like Scarlett Johansson’s AI character in the movie Her (a decision OpenAI later walked back “out of respect”).

Regardless of the voice coming out of it, GPT-4o is at once awe-inspiring and unsettling. It also represents a significant departure from tech-industry norms. Most tech companies have long avoided creating AI “companions” because of ethical concerns, fearing people could form addictions that exacerbate isolation and loneliness.

What Will GPT-4o’s Rule-Breaking Mean For Medicine?

Critics point out that OpenAI and its peers have yet to resolve a host of major “trust” issues, including accuracy, privacy, security, bias and misinformation. All of these will need to be addressed.

But by creating an AI experience that feels more like talking to a friend, or potentially a doctor, OpenAI has already leapt the tallest hurdle to mass acceptance and adoption. The company understands that humanizing GPT-4o—making it easier and more enjoyable to use—is essential for attracting a wide array of users, including the “late majority” and “laggards” described in Geoffrey Moore’s seminal 1991 book Crossing the Chasm.

Today, 70% of genAI’s non-users are Gen X (ages 44-59) and Baby Boomers (60-78). These generations, which comprise 136 million people, strongly prefer voice and video technologies to typing or touchscreens, and greatly prefer “conversational” AI apps to text-only ones.

They also make up the overwhelming majority of Americans with chronic diseases like diabetes, heart failure and cancer.

GenAI: From Mass Adoption To Mass Empowerment

Once consumers in their 50s, 60s and 70s become comfortable using GPT-4o for everyday tasks, they will then start to rely on it for medical inquiries, too. In a healthcare context, using GPT-4o will closely resemble a video visit or a phone call with a medical professional—two modalities that satisfy the majority of older patients. In fact, 93% of adults over age 70 say they value having telehealth as an option.

With broad adoption, GPT-4o (which will be embedded in next generations of ChatGPT) will empower the sickest Americans to take greater control of their own health, preventing up to hundreds of thousands of premature deaths each year from the complications of chronic disease: heart attacks, strokes, cancer and kidney failure. According to the Centers for Disease Control and Prevention, the effective management of chronic illness would reduce these complications by 30% to 50%, with a similar reduction in mortality.

Generative AI technology contains both the knowledge and ability to help accomplish this:

  • Knowledge. ChatGPT houses a vast corpus of scientific literature, including clinical studies, guidelines from professional medical organizations and research published in top-tier medical journals. In the future, it will be updated with real-time data from medical conferences, health records and up-to-the-minute research, ensuring the AI’s knowledge base remains both comprehensive and current.
  • Ability. To assist overburdened clinicians, genAI can provide patients with round-the-clock monitoring, insights and advice—empowering them to better diagnose and manage their own health problems. Future generations of these tools will connect with monitoring devices, informing patients about their health status and suggesting medication adjustments or lifestyle changes in clear and friendly terms. These tools will also remind people about preventive screenings and even facilitate testing appointments and transportation. These proactive approaches can reduce complications and improve health outcomes for the 130 million Americans living with chronic diseases.

Combating Chronic Disease With GPT-4o

To dive deeper into genAI’s difference-making potential, let’s look at two major gaps in chronic disease management: diabetes and hypertension.

Diabetes is the leading cause of kidney failure, a major contributor to heart attacks and responsible for 80% of lower limb amputations. Effective management is possible for nearly all patients and would prevent many of these complications. Yet diabetes is well controlled in only 30% of cases across the United States.

Similarly, effective control of high blood pressure—the leading cause of strokes and a major contributor to kidney failure and heart attacks—is achieved only 55% to 60% of the time. Although some health systems achieve control levels above 90%, the best-available tools and approaches are inconsistently deployed throughout medical practices.

Medical monitoring devices plus AI could play a crucial role in managing hypertension. Imagine a scenario in which a doctor prescribes medication for hypertension and sends the patient home with a wearable device to monitor progress. After a month, the patient has 100 readings: 90 normal and 10 elevated. The patient is unsure whether the 90 normal readings indicate all is well or if the 10 elevated ones signal a major problem. The doctor doesn’t have time to review all 100 readings and prefers not to clutter the electronic health record with this data. Instead of the patient waiting four months until the next visit to find out, a generative AI tool could quickly analyze the data (using the doctor’s instructions) and advise whether a medication adjustment is needed or whether to continue as is.
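The triage logic in that scenario can be sketched in a few lines. This is a minimal illustration only: the thresholds and the alert fraction below are hypothetical stand-ins for a doctor’s instructions, and a real product would use a generative AI model rather than a fixed rule.

```python
# Hypothetical sketch of the home blood-pressure scenario above: a doctor
# sets thresholds, and a tool triages a month of readings. The limits and
# the 20% alert fraction are invented for illustration, not clinical advice.

def triage_bp_readings(readings, systolic_limit=140, diastolic_limit=90,
                       elevated_fraction_alert=0.2):
    """Summarize (systolic, diastolic) readings in mmHg and suggest a next step."""
    elevated = [r for r in readings
                if r[0] >= systolic_limit or r[1] >= diastolic_limit]
    fraction = len(elevated) / len(readings)
    if fraction >= elevated_fraction_alert:
        advice = "flag for possible medication adjustment"
    else:
        advice = "continue current regimen"
    return {"total": len(readings), "elevated": len(elevated), "advice": advice}

# The patient from the example: 90 normal and 10 elevated readings.
month = [(125, 80)] * 90 + [(150, 95)] * 10
summary = triage_bp_readings(month)
```

Here 10 of 100 readings are elevated, below the (assumed) 20% alert fraction, so the tool would reassure the patient rather than page the doctor.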

Today’s generative AI tools aren’t ready to transform medical monitoring or care delivery, but their time is coming. With the technology doubling in power each year, these tools will be 32 times more capable in five years.

Overcoming Barriers To Mass Adoption

Concerns about AI privacy, security and misinformation need to be solved before the majority of Americans will buy into an AI-empowered future. Progress is being made on those fronts. For example, the leap from GPT-3.5 to GPT-4 saw an 82% reduction in hallucinations, a larger context window and better safety mechanisms.

In addition, clinicians worry about potential income loss if AI leads to healthier patients and reduced demand for medical services. The best solution is to shift from the current fee-for-service reimbursement model (which rewards the volume of medical services) to a value-based, capitated model. This system rewards doctors for preventing chronic diseases and avoiding their most serious complications, rather than simply treating life-threatening medical problems when they arise.

By adopting a pay-for-value approach, medical professionals will embrace genAI as a tool to help prevent and manage diseases (rather than seeing it as a threat to their livelihoods).

The release of GPT-4o shattered the industry norm against creating human-like AI, introducing ethical risks that must be carefully managed. However, the potential for genAI to save thousands of lives each year makes this risk worth taking.

Medical malpractice in the age of AI: Who will bear the blame?

https://www.linkedin.com/pulse/medical-malpractice-age-ai-who-bear-blame-robert-pearl-m-d–g2dec/

More than two-thirds of U.S. physicians have changed their mind about generative AI and now view it as beneficial to healthcare. But as AI grows more powerful and prevalent in medicine, apprehensions remain high among medical professionals.

For the last 18 months, I’ve examined the potential uses and misuses of generative AI in medicine, research that culminated in the new book ChatGPT, MD: How AI-Empowered Patients & Doctors Can Take Back Control of American Medicine. Over that time, I’ve seen clinicians’ concerns evolve—from worries about AI’s reliability and, consequently, patient safety to a new set of fears: Who will be held liable when something goes wrong?

From safety to suits: A new AI fear emerges

Technology experts have grown increasingly certain that next-gen AI technologies will prove vastly safer and more reliable for patients, especially under expert human oversight. As evidence, recall that Google’s first medical AI model, Med-PaLM, achieved a mere “passing score” (>60%) on the U.S. medical licensing exam in late 2022. Five months later, its successor, Med-PaLM 2, scored at an “expert” doctor level (85%).

Since then, numerous studies have shown that generative AI increasingly outperforms medical professionals in various tasks. These include diagnosis, treatment decisions, data analysis and even expressing empathy.

Despite these technological advancements, errors in medicine can and will occur, regardless of whether the expertise comes from human clinicians or advanced AI.

Fault lines: Navigating AI’s legal terrain

Legal experts anticipate that as AI tools become more integrated into healthcare, determining liability will come down to whether errors result from AI decisions, human oversight or a combination of both.

For instance, if doctors use a generative AI tool in their offices for diagnosing or treating a patient and something goes wrong, the physician would likely be held liable, especially if it’s deemed that clinical judgment should have overridden the AI’s recommendations.

But the scenarios get more complex when generative AI is used without direct physician oversight. As an example, who is liable when patients rely on generative AI’s medical advice without ever consulting a doctor? Or what if a clinician encourages a patient to use an at-home AI tool for help interpreting wearable device data, and the AI’s advice leads to a serious health issue?

In a working paper, legal scholars from the universities of Michigan, Penn State and Harvard explored these challenges, noting: “Demonstrating the cause of an injury is already often hard in the medical context, where outcomes are frequently probabilistic rather than deterministic. Adding in AI models that are often nonintuitive and sometimes inscrutable will likely make causation even more challenging to demonstrate.”

AI on trial: A legal prognosis from Stanford Law

To get a better handle on the legal risks posed to clinicians when using AI, I spoke with Michelle Mello, professor of law and health policy at Stanford University and lead author of “Understanding Liability Risk from Using Health Care Artificial Intelligence Tools.”

That paper, published earlier this year in the New England Journal of Medicine, is based on hundreds of software-related tort cases and offers insights into the murky waters of AI liability, including how the courts might handle AI-related malpractice cases.

However, Mello pointed out that direct case law on any type of AI model remains “very sparse.” And when it comes to liability implications of using generative AI, specifically, there’s no public record of such cases being litigated.

“At the end of the day, it has almost always been the case that the physician is on the hook when things go wrong in patient care,” she noted. But she added: “As long as physicians are using this to inform a decision with other information and not acting like a robot, deciding purely based on the output, I suspect they’ll have a fairly strong defense against most of the claims that might relate to their use of GPTs.”

She emphasized that while AI tools can improve patient care by enhancing diagnostics and treatment options, providers must be vigilant about the liability these tools could introduce. To minimize risk, she recommends four steps.

  1. Understand the limits of AI tools: AI should not be seen as a replacement for human judgment. Instead, it should be used as a supportive tool to enhance clinical decisions.
  2. Negotiate terms of use: Mello urges healthcare professionals to negotiate terms of service with AI developers like Nvidia, OpenAI, Google and others. This includes pushing back on today’s “incredibly broad” and “irresponsible” disclaimers that deny any liability for medical harm.
  3. Apply risk assessment tools: Mello’s team developed a framework that helps providers assess the liability risks associated with AI. It considers factors like the likelihood of errors, the potential severity of harm caused and whether human oversight can effectively mitigate these risks.
  4. Stay informed and prepared: “Over time, as AI use penetrates more deeply into clinical practice, customs will start to change,” Mello noted. Clinicians need to stay informed as the legal landscape shifts.

The high cost of hesitation: AI and patient safety

While concerns about the use of generative AI in healthcare are understandable, it’s critical to weigh these fears against the existing flaws in medical practice.

Each year, misdiagnoses lead to 371,000 American deaths while another 424,000 patients suffer permanent disabilities. Meanwhile, more than 250,000 deaths occur due to avoidable medical errors in the United States. Half a million people die annually from poorly managed chronic diseases, leading to preventable heart attacks, strokes, cancers, kidney failures and amputations.

Our nation’s healthcare professionals don’t have the time in their daily practice to address the totality of patient needs. That’s because the demand for medical services is higher than ever at a time when health insurers—with their restrictive policies and bureaucratic requirements—make it harder than ever to provide excellent care. Generative AI can help.

But it is imperative for policymakers, legal experts and healthcare professionals to collaborate on a framework that promotes the safe and effective use of this technology. As part of their work, they’ll need to address concerns over liability. Ultimately, they must recognize that the risks of not using generative AI to improve care will far outweigh the dangers posed by the technology itself. Only then can our nation reduce the enormous human toll resulting from our current medical failures.

How cognitive biases impact healthcare decisions

https://www.linkedin.com/pulse/how-cognitive-biases-impact-healthcare-decisions-robert-pearl-m-d–ti5qc/?trackingId=eQnZ0um3TKSzV0NYFyrKXw%3D%3D

Day one of the healthcare strategy course I teach in the Stanford Graduate School of Business begins with this question: “Who here receives excellent medical care?”

Most of the students raise their hands confidently. I look around the room at some of the most brilliant young minds in business, finance and investing—all of them accustomed to making quick yet informed decisions. They can calculate billion-dollar deals to the second decimal point in their heads. They pride themselves on being data driven and discerning.

Then I ask, “How do you know you receive excellent care?”

The hands slowly come down and the room falls silent. In that moment, it’s clear these future business leaders have reached a conclusion without a shred of reliable data or evidence.

Not one of them knows how often their doctors make diagnostic or technical errors. They can’t say whether their health system’s rate of infection or medical error is high, average or low.

What’s happening is that they’re conflating service with clinical quality. They assume a doctor’s bedside manner correlates with excellent outcomes.

These often false assumptions are part of a multi-millennia-long relationship wherein patients are reluctant to ask doctors uncomfortable but important questions: “How many times have you performed this procedure over the past year and how many patients experienced complications?” “What’s the worst outcome a patient of yours had during and after surgery?”

The answers are objective predictors of clinical excellence. Without them, patients are likely to become victims of the halo effect—a cognitive bias where positive traits in one area (like friendliness) are assumed to carry over to another (medical expertise).

This is just one example of the many subconscious biases that distort our perceptions and decision-making.

From the waiting room to the operating table, these biases impact both patients and healthcare professionals with negative consequences. Acknowledging these biases isn’t just an academic exercise. It’s a crucial step toward improving healthcare outcomes.

Here are four more cognitive errors that cause harm in healthcare today, along with my thoughts on what can be done to mitigate their effects:

Availability bias

You’ve probably heard of the “hot hand” in Vegas—a lucky streak at the craps table that draws big cheers from onlookers. But luck is an illusion, a product of our natural tendency to see patterns where none exist. Nothing about the dice changes based on the last throw or the individual shaking them.

This mental error, first described as “availability bias” by psychologists Amos Tversky and Daniel Kahneman, was part of groundbreaking research in the 1970s and '80s in the field of behavioral economics and cognitive psychology. The duo challenged the prevailing assumption that humans make rational choices.

Availability bias, despite being identified nearly 50 years ago, still plagues human decision making today, even in what should be the most scientific of places: the doctor’s office.

Physicians frequently recommend a treatment plan based on the last patient they saw, rather than considering the overall probability that it will work. If a medication has a 10% complication rate, it means that 1 in 10 people will experience an adverse event. Yet, if a doctor’s most recent patient had a negative reaction, the physician is less likely to prescribe that medication to the next patient, even when it is the best option, statistically.
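The independence point behind this bias can be sketched with a quick simulation. The 10% complication rate comes from the article's example; the simulation itself, and its patient counts, are purely illustrative:

```python
import random

random.seed(0)

# Illustrative simulation (not from the article): each patient independently
# carries a 10% complication risk. We check whether the previous patient's
# outcome changes the next patient's risk -- it does not.
N = 1_000_000
outcomes = [random.random() < 0.10 for _ in range(N)]

# Observed risk for patients whose immediately preceding patient had a complication
after_bad = [outcomes[i] for i in range(1, N) if outcomes[i - 1]]
# Observed risk for patients whose preceding patient had no complication
after_ok = [outcomes[i] for i in range(1, N) if not outcomes[i - 1]]

print(f"risk after a complication:  {sum(after_bad) / len(after_bad):.3f}")
print(f"risk after no complication: {sum(after_ok) / len(after_ok):.3f}")
# Both hover around 0.10: the base rate, not the last case, is what matters.
```

A physician swayed by the most recent patient is, in effect, conditioning on the first rate when only the base rate is informative.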

Confirmation bias

Have you ever had a “gut feeling” and stuck with it, even when confronted with evidence it was wrong? That’s confirmation bias. It skews our perceptions and interpretations, leading us to embrace information that aligns with our initial beliefs—and causing us to discount all indications to the contrary.

This tendency is heightened in a medical system where physicians face intense time pressures. Studies indicate that doctors, on average, interrupt patients within the first 11 seconds of being asked “What brings you here today?” With scant information to go on, doctors quickly form a hypothesis, using additional questions, diagnostic testing and medical-record information to support their first impression.

Doctors are well trained, and their assumptions prove accurate more often than not. Nevertheless, hasty decisions can be dangerous. Each year in the United States, an estimated 371,000 patients die from misdiagnoses.

Patients aren’t immune to confirmation bias, either. People with a serious medical problem commonly seek a benign explanation and find evidence to justify it. When this happens, heart attacks are dismissed as indigestion, leading to delays in diagnosis and treatment.

Framing effect

In 1981, Tversky and Kahneman asked subjects to help the nation prepare for a hypothetical viral outbreak. They explained that if the disease was left untreated, it would kill 600 people. Participants in one group were told that an available treatment, although risky, would save 200 lives. The other group was told that, despite the treatment, 400 people would die. Although both descriptions lead to the same outcome—200 people surviving and 400 dying—the first group favored the treatment, whereas the second group largely opposed it.

The study illustrates how differently people can react to identical scenarios based on how the information is framed. Researchers have discovered that the human mind magnifies and experiences loss far more powerfully than equivalent gains. So, patients will consent to a chemotherapy regimen that has a 20% chance of cure but decline the same treatment when told it has an 80% likelihood of failure.
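The equivalence of the two frames is plain arithmetic; a tiny check, using the numbers from the scenarios above:

```python
# Both frames of the Tversky-Kahneman scenario describe the same outcome
# for the same 600 people.
population = 600
frame_a_saved = 200      # "the treatment will save 200 lives"
frame_b_deaths = 400     # "despite the treatment, 400 people will die"
assert frame_a_saved == population - frame_b_deaths  # identical outcomes

# Same identity for the chemotherapy example: a 20% chance of cure and an
# 80% likelihood of failure are the same number, framed differently.
cure, failure = 0.20, 0.80
assert abs(cure + failure - 1.0) < 1e-9
```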

Self-serving bias

The best parts about being a doctor are saving and improving lives. But there are other perks, as well.

Pharmaceutical and medical-device companies aggressively reward physicians who prescribe and recommend their products. Whether it’s a sponsored dinner at a Michelin-starred restaurant or a pizza delivered to the office staff, the intention of the reward is always the same: to sway the decisions of doctors.

And yet, physicians swear that no meal or gift will influence their prescribing habits. And they believe it because of “self-serving bias.”

In the end, it’s patients who pay the price. Rather than receiving a generic prescription for a fraction of the cost, patients end up paying more for a brand-name drug because their doctor—at a subconscious level—doesn’t want to lose out on the perks.

Thanks to the “Sunshine Act,” patients can check sites like ProPublica’s Dollars for Docs to find out whether their healthcare professional is receiving drug- or device-company money (and how much).

Reducing subconscious bias

These cognitive biases may not be the reason U.S. life expectancy has stagnated for the past 20 years, but they stand in the way of positive change. And they contribute to the medical errors that harm patients.

A study published this month in JAMA Internal Medicine found that 1 in 4 hospital patients who either died or were transferred to the ICU had been affected by a diagnostic mistake. Knowing this, you might think cognitive biases would be a leading subject at annual medical conferences and a topic of grave concern among healthcare professionals. You’d be wrong. Inside the culture of medicine, these failures are commonly ignored.

The recent story of an economics professor offers one possible solution. Upon experiencing abdominal pain, he went to a highly respected university hospital. After laboratory testing and observation, his attending doctor concluded the problem wasn’t serious—a gallstone at worst. He told the patient to go home and return for outpatient workup.

The professor wasn’t convinced. Fearing that the medical problem was severe, the professor logged onto ChatGPT (a generative AI technology) and entered his symptoms. The application concluded that there was a 40% chance of a ruptured appendix. The doctor reluctantly ordered an MRI, which confirmed ChatGPT’s diagnosis.

Future generations of generative AI, pretrained with data from people’s electronic health records and fed with information about cognitive biases, will be able to spot these types of errors when they occur.

Deviation from standard practice will result in alerts, bringing cognitive errors to consciousness, thus reducing the likelihood of misdiagnosis and medical error. Rather than resisting this kind of objective second opinion, I hope clinicians will embrace it. The opportunity to prevent harm would constitute a major advance in medical care.

Healthcare CFOs explore M&A, automation and service line cuts in 2024

Companies grappling with liquidity concerns are looking to cut costs and streamline operations, according to a new survey.

Dive Brief:

  • Over three-quarters of healthcare chief financial officers expect to see profitability increases in 2024, according to a recent survey from advisory firm BDO USA. However, to become profitable, many organizations say they will have to reduce investments in underperforming service lines, or pursue mergers and acquisitions.
  • More than 40% of respondents said they will decrease investments in primary care and behavioral health services in 2024, citing disruptions from retail players. They will shift funds to home care, ambulatory services and telehealth that provide higher returns, according to the report.
  • Nearly three-quarters of healthcare CFOs plan to pursue some type of M&A deal in the year ahead, despite possible regulatory threats.

Dive Insight:

Though inflationary pressures have eased since the height of the COVID-19 pandemic, healthcare CFOs remain cognizant of managing costs amid liquidity concerns, according to the report.

The firm polled 100 healthcare CFOs serving hospitals, medical groups, outpatient services, academic centers and home health providers with revenues from $250 million to $3 billion or more in October 2023.

Just over a third of organizations surveyed carried more than 60 days of cash on hand. In comparison, a recent analysis from KFF found that financially strong health systems carried at least 150 days of cash on hand in 2022.
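The report doesn't spell out its methodology, but days cash on hand is conventionally computed as unrestricted cash and investments divided by average daily cash operating expenses. A minimal sketch, using hypothetical figures chosen for illustration:

```python
def days_cash_on_hand(cash_and_investments: float,
                      annual_operating_expenses: float,
                      annual_depreciation: float = 0.0) -> float:
    """Unrestricted cash divided by average daily cash operating expenses.

    Depreciation is subtracted because it is a non-cash expense.
    """
    daily_cash_expenses = (annual_operating_expenses - annual_depreciation) / 365
    return cash_and_investments / daily_cash_expenses

# Hypothetical system: $300M in unrestricted cash, $1.6B in annual operating
# expenses, of which $140M is non-cash depreciation.
dcoh = days_cash_on_hand(300e6, 1.6e9, 140e6)
print(f"{dcoh:.0f} days")  # 75 days: above the survey's 60-day line, but well
                           # short of KFF's 150-day "financially strong" mark
```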

Liquidity is a concern for CFOs given high rates of bond and loan covenant violations over the past year. More than half of organizations violated such agreements in 2023, while 41% are concerned they will in 2024, according to the report. 

To remain solvent, 44% of CFOs expect to have more strategic conversations about their economic resiliency in 2024, exploring external partnerships, options for service line adjustments and investments in workforce and technology optimization.

The majority of CFOs surveyed are interested in pursuing external partnerships, despite increased regulatory roadblocks, including recent merger guidance that increased oversight into nontraditional tie-ups. Last week, the FTC filed its first healthcare suit of the year to block the acquisition of two North Carolina-based Community Health Systems hospitals by Novant Health, warning the deal could reduce competition in the region.

Healthcare CFOs explore tie-ups in 2024

Types of deals that CFOs are exploring, as of Oct. 2023.

https://datawrapper.dwcdn.net/aiFBJ/1

Most organizations are interested in exploring sales, according to the report. Financially struggling organizations are among the most likely to consider deals. Nearly one in three organizations that violated their bond or loan covenants in 2023 are planning a carve-out or divestiture this year. Organizations with less than 30 days of cash on hand are also likely to consider carve-outs.

Organizations will also turn to automation to cut costs. Ninety-eight percent of organizations surveyed had piloted generative AI tools in a bid to alleviate resource and cost constraints, according to the consultancy. 

“Healthcare leaders believe AI will be essential to helping clinicians operate at the top of their licenses, focusing their time on patient care and interaction over administrative or repetitive tasks,” the authors wrote. Nearly one in three CFOs plan to leverage automation and AI in the next 12 months.

However, CFOs are keeping an eye on the risks. As more data flows through their organizations, they are increasingly concerned about cybersecurity. More than half of executives surveyed said data breaches are a bigger risk in 2024 compared to 2023.

JPM 2024 just wrapped. Here are the key insights

https://www.advisory.com/daily-briefing/2024/01/23/jpm-takeaways-ec#accordion-718cb981ab-item-4ec6d1b6a3

Earlier this month, leaders from more than 400 organizations descended on San Francisco for J.P. Morgan's 42nd annual healthcare conference to discuss some of the biggest issues in healthcare today. Here’s how Advisory Board experts are thinking about Modern Healthcare’s 10 biggest takeaways — and our top resources for each insight.

How we’re thinking about the top 10 takeaways from JPM’s annual healthcare conference 

Following the conference, Modern Healthcare provided a breakdown of the top-of-mind issues attendees discussed.

Here’s how our experts are thinking about the top 10 takeaways from the conference — and the resources they recommend for each insight.  

1. Ambulatory care provides a growth opportunity for some health systems

By Elizabeth Orr, Vidal Seegobin, and Paul Trigonoplos

At the conference, many health system leaders said they are evaluating growth opportunities for outpatient services. 

However, results from our Strategic Planner’s Survey suggest only the biggest systems are investing in building new ambulatory facilities. That data, alongside the high cost of borrowing and the trifurcation of credit that Fitch is predicting, suggests that only a select group of health systems are currently poised to leverage ambulatory care as a growth opportunity.  

Systems with limited capital will be well served by considering other ways to reach patients outside the hospital through virtual care, a better digital front door, and partnerships. The efficiency of outpatient operations and how they connect through the care continuum will affect the ROI on ambulatory investments. Buying or building ambulatory facilities does not guarantee dramatic revenue growth, and gaining ambulatory market share does not always yield improved margins.

While physician groups, together with management service organizations, are very good at optimizing care environments to generate margins (and thereby profit), most health systems use ambulatory surgery center development as a defensive market share tactic to keep patients within their system.  

This approach leaves margins on the table and doesn’t solve the growth problem in the long term. Each of these ambulatory investments would do well to be evaluated on both their individual profitability and share of wallet. 

On January 24 and 25, Advisory Board will convene experts from across the healthcare ecosystem to inventory the predominant growth strategies pursued by major players, explore considerations for specialty care and ambulatory network development, understand volume and site-of-care shifts, and more. Register here to join us for the Redefining Growth Virtual Summit.  

Also, check out our resources to help you plan for shifts in patient utilization:  

2. Rebounding patient volumes further strain capacity

By Jordan Peterson, Eliza Dailey, and Allyson Paiewonsky 

Many health system leaders noted that both inpatient and outpatient volumes have surpassed pre-pandemic levels, placing further strain on workforces.  

The rebound in patient volumes, coupled with an overstretched workforce, underscores the need to invest in technology to extend clinician reach, while at the same time doubling down on operational efficiency to help with things like patient access and scheduling. 

For leaders looking to leverage technology and boost operational efficiency, we have a number of resources that can help:  

3. Health systems aren’t specific on AI strategies

By Paul Trigonoplos and John League

According to Modern Healthcare, nearly all health systems discussed artificial intelligence (AI) at the conference, but few offered detailed implementation plans and expectations.

Over the past year, a big part of the work for Advisory Board’s digital health and health systems research teams has been to help members reframe the fear of missing out (FOMO) that many care delivery organizations have about AI.  

We think AI can and will solve problems in healthcare. Every organization should at least be observing AI innovations. But we don’t believe that “the lack of detail on healthcare AI applications may signal that health systems aren’t ready to embrace the relatively untested and unregulated technology,” as Modern Healthcare reported. 

The real challenge for many care delivery organizations is dealing with the pace of change — not readiness to embrace or accept it. They aren’t used to having to react to anything as fast-moving as AI’s recent evolution. If their focus for now is on low-hanging fruit, that’s completely understandable. It’s also much more important for these organizations to spend time now linking AI to their strategic goals and building out their governance structures than it is to be first in line with new applications.  

Check out our top resources for health systems working to implement AI: 

4. Digital health companies tout AI capabilities

By Ty Aderhold and John League

Digital health companies like Teladoc, R1 RCM, Veradigm, and Talkspace all spoke about their use of generative AI.

This does not surprise us at all. In fact, we would be more surprised if digital health companies were not touting their AI capabilities. Generative AI’s flexibility and ease of use make it an accessible addition to nearly any technology solution.  

However, that alone does not necessarily make the solution more valuable or useful. In fact, many organizations would do well to consider how they want to apply new AI solutions and compare those solutions to the ones they would have used in October 2022 — before ChatGPT was unveiled. It may be that other forms of AI, predictive analytics, or robotic process automation are as effective at a better cost.

Again, we believe that AI can and will solve problems in healthcare. We just don’t think it will solve every problem in healthcare, or that every solution benefits from its inclusion.  

Check out our top resources on generative AI: 

5. Health systems speak out on denials

By Mallory Kirby

During the conference, providers criticized insurers for the rate of denials, Modern Healthcare reports. 

Denials — along with other utilization management techniques like prior authorization — continue to build tension between payers and providers, with payers emphasizing their importance for ensuring cost effective, appropriate care and providers overwhelmed by both the administrative burden and the impact of denials on their finances. 

Many health plans have announced major moves to reduce prior authorizations, and CMS recently announced plans to move forward with regulations to streamline the prior authorization process. However, these efforts haven’t significantly impacted providers yet.

In fact, most providers report no decrease in denials or overall administrative burden. A new report found that claims denials increased by 11.99% in the first three quarters of 2023, following similar double-digit increases in 2021 and 2022.

Our team is actively researching the root cause of this discrepancy and reasons for the noted increase in denials. Stay tuned for more on improving denials performance — and the broader payer-provider relationship — in upcoming 2024 Advisory Board research.

For now, check out this case study to see how Baptist Health achieved a 0.65% denial write-off rate.  

6. Insurers are prioritizing Star Ratings and risk adjustment changes

By Mallory Kirby

Various insurers and providers spoke about “the fallout from star ratings and risk adjustment changes.”

2023 presented organizations focused on MA with significant headwinds. While many insurers prioritized MA growth in recent years, leaders have increased their emphasis on quality and operational excellence to ensure financial sustainability.

With an eye on these headwinds, it makes sense that insurers are upping their game on Star Ratings and risk adjustment management.

We’ve already seen litigation from health plans contesting the regulatory changes that impact the bottom line for many MA plans. But with more changes on the horizon — including the introduction of the Health Equity Index as a reward factor for Stars and phasing in of the new Risk Adjustment Data Validation model — plans must prioritize long-term sustainability.  

Check out our latest MA research for strategies on MA coding accuracy and Star Ratings:  

7. PBMs brace for policy changes

By Chloe Bakst and Rachael Peroutky 

Pharmacy benefit manager (PBM) leaders discussed the ways they are preparing for potential congressional action, including “updating their pricing models and diversifying their revenue streams.”

Healthcare leaders should be prepared for Congress to move forward with PBM regulation in 2024. A final bill will likely include federal reporting requirements, spread pricing bans, and preferred pricing restrictions for PBMs with their own specialty pharmacy. In the short term, these regulations will likely apply to Medicare and Medicaid population benefits only, and not the commercial market. 

Congress isn’t the only entity calling for change. Several states passed bills in the last year targeting PBM transparency and pricing structures. The Federal Trade Commission’s ongoing investigation into select PBMs looks at some of the same practices Congress aims to regulate. PBM commercial clients are also applying pressure. In 2023, Blue Cross Blue Shield of California (BSC) decided to outsource tasks historically performed by its PBM partner. A statement from BSC indicated the change was in part due to a desire for less complexity and more transparency.

Here’s what this means for PBMs: 

Transparency is a must

The level of scrutiny on transparency will force the hand of PBMs. They will have to comply with federal and state policy changes and likely give something to their commercial partners to stay competitive. We’re already seeing this unfold across some of the largest PBMs. Recently, CVS Caremark and Express Scripts launched transparent reimbursement and pricing models for participating in-network pharmacies and plan sponsors.

While transparency requirements will be a headache for larger PBMs, they might be a real threat to smaller companies. Some small PBMs highlight transparency as their main value add. As the larger PBMs focus more on transparency, smaller PBMs who rely on transparent offerings to differentiate themselves in a crowded market may lose their main competitive edge. 

PBMs will have to try new strategies to boost revenue

Guiding prescriptions to their own specialty pharmacies, or to those offering more competitive pricing, is a key revenue strategy for PBMs. Stricter regulations on spread pricing and patient steerage will prompt PBMs to look for additional revenue levers.

PBMs are already getting started — with Express Scripts reporting it will cut reimbursement for wholesale brand-name drugs by about 10% in 2024. Other PBMs are trying to diversify their business opportunities. For example, CVS Caremark has offered a new TrueCost model to its clients for an additional fee. The model determines drug prices based on the net cost of drugs and clearly defined fee structures. We’re also watching growing interest in cross-benefit utilization management programs for specialty drugs. These offerings look across both medical and pharmacy benefits to ensure that the most cost-effective drug is prescribed for patients.

Check out some of our top resources on PBMs:  

To learn more about some of the recent industry disruptions, check out:   

8. Healthcare disruptors forge on

By John League

At the conference, retailers such as CVS, Walgreens, and Amazon doubled down on their healthcare services strategies.

Typically, disruptors do not get into care delivery because they think it will be easy. Disruptors get into care delivery because they look at what is currently available and it looks so hard — hard to access, hard to understand, and hard to pay for.  

Many established players still view so-called disruptors as problematic, but we believe that most tech companies that move into healthcare are doing what they usually do — they look at incumbent approaches that make it hard for customers and stakeholders to access, understand, and pay for care, and see opportunities to use technology and innovative business models in an attempt to target these pain points.

CVS, Walgreens, and Amazon are pursuing strategies that are intended to make it more convenient for specific populations to get care. If those efforts aren’t clearly profitable, that does not mean that they will fail or that they won’t pressure legacy players to make changes to their own strategies. Other organizations don’t have to copy these disruptors (which is good because most can’t), but they must acknowledge why patient-consumers are attracted to these offerings.  

For more information on how disruptors are impacting healthcare, check out these resources:  

9. Financial pressures remain for many health systems

By Vidal Seegobin and Marisa Nives

Health systems are recovering from the worst financial year in recent history. While most large health systems presenting at the conference saw their finances improve in 2023, labor challenges and reimbursement pressures remain.  

To be clear, hospitals are working hard to improve their finances. In fact, operating margins in November 2023 broke 2%. But margins below 3% remain a challenge for long-term financial sustainability.

One of the more concerning trends is that margin growth is not tracking with a large rebound in volumes. There are a number of culprits: elevated cost structures, increased patient complexity, and a reimbursement mix shifting toward government payers.

For many systems, this means they need to return to mastering the basics: managing costs, retaining the workforce, and improving quality of care. While these efforts will help bridge the margin gap, the decoupling of volumes and margins means that growth for health systems can’t center on simply getting bigger to expand volumes.

Maximizing efficiency, improving access, and bending the cost curve will be the main pillars for growth and sustainability in 2024.  

To learn more about what health system strategists are prioritizing in 2024, read our recent survey findings.

Also, check out our resources on external partnerships and cost-saving strategies:  

10. MA utilization is still high

By Max Hakanson and Mallory Kirby  

During the conference, MA insurers reported seeing a spike in utilization driven by increased doctor visits and elective surgeries.

These increased medical expenses are putting more pressure on MA insurers’ margins, which are already facing headwinds due to CMS changes in MA risk-adjustment and Star Ratings calculations. 

However, this increased utilization isn’t all bad news for insurers. Part of the increased utilization among seniors can be attributed to more preventive care, such as an uptick in RSV vaccinations.  

On UnitedHealth Group’s* Q4 earnings call, CFO John Rex noted that “interest in getting the shot, especially among the senior population, got some people into the doctor’s office when they hadn’t visited in a while,” which led to primary care physicians addressing other care needs. As seniors are referred to specialty care to address these needs, plans need strategies in place to better manage their specialist spend.

To learn how organizations are bringing better value to specialist care in MA, check out our market insight on three strategies to align specialists to value in MA. (Kacik et al., Modern Healthcare, 1/12)

*Advisory Board is a subsidiary of UnitedHealth Group. All Advisory Board research, expert perspectives, and recommendations remain independent. 

3 huge healthcare battles being fought in 2024

https://www.linkedin.com/pulse/3-huge-healthcare-battles-being-fought-2024-robert-pearl-m-d–aguvc/?trackingId=z4TxTDG7TKq%2BJqfF6Tieug%3D%3D

Three critical healthcare struggles will define the year to come with cutthroat competition and intense disputes being played out in public:

1. A Nation Divided Over Abortion Rights

2. The Generative AI Revolution In Medicine

3. The Tug-Of-War Over Healthcare Pricing

American healthcare, much like any battlefield, is fraught with conflict and turmoil. As we navigate 2024, the wars ahead seem destined to intensify before any semblance of peace can be attained. Let me know your thoughts once you read mine.

Modern medicine, for most of its history, has operated within a collegial environment—an industry of civility where physicians, hospitals, pharmaceutical companies and others stayed in their lanes and out of each other’s business.

It used to be that clinicians made patient-centric decisions, drugmakers and hospitals calculated care/treatment costs and added a modest profit, while insurers set rates based on those figures. Businesses and the government, hoping to save a little money, negotiated coverage rates but not at the expense of a favored doctor or hospital. Disputes, if any, were resolved quietly and behind the scenes.

Times have changed as healthcare has taken a 180-degree turn. This year will be characterized by cutthroat competition and intense disputes played out in public. And as the once harmonious world of healthcare braces for battle, three critical struggles take center stage. Each one promises controversy and profound implications for the future of medicine:

1. A Nation Divided Over Abortion Rights

For nearly 50 years, from the landmark Roe v. Wade decision in 1973 to its overruling by the 2022 Dobbs case, abortion decisions were the province of women and their doctors. This dynamic has changed in nearly half the states.

This spring, the Supreme Court is set to hear another pivotal case, this one on mifepristone, an important drug for medical abortions. The ruling, expected in June, will significantly impact women’s rights and federal regulatory bodies like the FDA.

Traditionally, abortions were surgical procedures. Today, over half of all terminations are medically induced, primarily using a two-drug combination, including mifepristone. Since its approval in 2000, mifepristone has been prescribed to over 5 million women, and it boasts an excellent safety record. But anti-abortion groups, now challenging this method, have proposed stringent legal restrictions: reducing the administration window from 10 to seven weeks post-conception, banning distribution of the drug by mail, and mandating three in-person doctor visits, a burdensome requirement for many. While physicians could still prescribe misoprostol, the second drug in the regimen, its effectiveness alone pales in comparison to the two-drug combo.

Should the Supreme Court overrule the FDA’s clinical judgment on these matters, abortion activists fear the floodgates will open, inviting new challenges against other established medications like birth control.

In response, several states have fortified abortion rights through ballot initiatives, a trend expected to gain momentum in the November elections. This legislative action underscores a significant public-opinion divide from the Supreme Court’s stance. In fact, a survey published in Nature Human Behaviour reveals that 60% of Americans support legal abortion.

Path to resolution: Uncertain. Traditionally, SCOTUS rulings have mirrored public opinion on key social issues, but its deviation on abortion rights has failed to shift public sentiment, setting the stage for an even fiercer clash in years to come. Although a Supreme Court ruling declaring abortion unconstitutional nationwide would contradict the state-by-state principles outlined in the Dobbs decision, not all states will enact protective measures. As a result, America’s divide on abortion rights is poised to deepen.

2. The Generative AI Revolution In Medicine

A year after ChatGPT’s release, an arms race in generative AI is reshaping industries from finance to healthcare. Organizations are investing billions to get a technological leg up on the competition, but this budding revolution has sparked widespread concern.

In Hollywood, screenwriters recently emerged victorious from a 150-day strike, partially focused on the threat of AI as a replacement for human workers. In the media realm, prominent organizations like The New York Times, along with a bevy of celebs and influencers, have initiated copyright infringement lawsuits against OpenAI, the developer of ChatGPT.

The healthcare sector faces its own unique battles. Insurers are leveraging AI to speed up and intensify claim denials, prompting providers to counter with AI-assisted appeals.

But beyond corporate skirmishes, the most profound conflict involves the doctor-patient relationship. Physicians, already vexed by patients who self-diagnose with “Dr. Google,” find themselves unsure whether generative AI will be friend or foe. Unlike traditional search engines, GenAI doesn’t just spit out information. It provides nuanced medical insights based on extensive, up-to-date research. Studies suggest that AI can already diagnose and recommend treatments with remarkable accuracy and empathy, surpassing human doctors in ever-more ways.

Path to resolution: Unfolding. While doctors are already taking advantage of AI’s administrative benefits (billing, notetaking and data entry), they’re apprehensive that ChatGPT will lead to errors if used for patient care. In this case, time will heal most concerns and eliminate most fears. Five years from now, with ChatGPT predicted to be 30 times more powerful, generative AI systems will become integral to medical care. Advanced tools, interfacing with wearables and electronic health records, will aid in disease management, diagnosis and chronic-condition monitoring, enhancing clinical outcomes and overall health.

3. The Tug-Of-War Over Healthcare Pricing

From routine doctor visits to complex hospital stays and drug prescriptions, every aspect of U.S. healthcare is getting more expensive. That’s not news to most Americans, half of whom say it is very or somewhat difficult to afford healthcare costs.

But people may be surprised to learn how the pricing wars will play out this year—and how the winners will affect the overall cost of healthcare.

Throughout U.S. healthcare, nurses are striking as doctors are unionizing. After a year of soaring inflation, healthcare supply-chain costs and wage expectations are through the roof. A notable example emerged in California, where a proposed $25 hourly minimum wage for healthcare workers was later retracted by Governor Newsom amid budget constraints.

Financial pressures are increasing. In response, thousands of doctors have sold their medical practices to private equity firms. This trend will continue in 2024 and likely drive up prices, as much as 30% higher for many specialties.

Meanwhile, drug spending will soar in 2024 as weight-loss drugs (costing roughly $12,000 a year) become increasingly available. A groundbreaking sickle cell disease treatment, which uses the controversial CRISPR technology, is projected to cost nearly $3 million upon release.

To help tame runaway prices, the Centers for Medicare & Medicaid Services will reduce out-of-pocket costs for dozens of Part B medications “by $1 to as much as $2,786 per average dose,” according to White House officials. However, the move, one of many price-busting measures under the Inflation Reduction Act, has ignited a series of legal challenges from the pharmaceutical industry.

Big Pharma seeks to delay or overturn legislation that would allow CMS to negotiate prices for 10 of the most expensive outpatient drugs starting in 2026.

Path to resolution: Up to voters. With national healthcare spending expected to leap from $4 trillion to $7 trillion by 2031, the pricing debate will only intensify. The upcoming election will be pivotal in steering the financial strategy for healthcare. A Republican surge could mean tighter controls on Medicare and Medicaid and relaxed insurance regulations, whereas a Democratic sweep could lead to increased taxes, especially on the wealthy. A divided government, however, would stall significant reforms, exacerbating the crisis of unaffordability into 2025.

Is Peace Possible?

American healthcare, much like any battlefield, is fraught with conflict and turmoil. As we navigate 2024, the wars ahead seem destined to intensify before any semblance of peace can be attained.

Yet, amidst the strife, hope glimmers: The rise of ChatGPT and other generative AI technologies holds promise for revolutionizing patient empowerment and systemic efficiency, making healthcare more accessible while mitigating the burden of chronic diseases. The debate over abortion rights, while deeply polarizing, might eventually find resolution in a legislative middle ground that echoes Roe’s protections with some restrictions on how late in pregnancy procedures can be performed.

Unfortunately, some problems need to get worse before they can get better. I predict the affordability of healthcare will be one of them this year. My New Year’s request is not to shoot the messenger.

Sam Altman’s wild year offers 3 critical lessons for healthcare leaders in 2024

https://www.linkedin.com/pulse/sam-altmans-wild-year-offers-3-critical-lessons-2024-pearl-m-d–sj1kc/?trackingId=G7JzFhoHSvuK7BRMyo4gcQ%3D%3D

What a wild end to the year it was for Sam Altman, CEO of OpenAI.

In the span of five white-knuckle days in November, the head of Silicon Valley’s most advanced generative AI company was fired by his board of directors, replaced by not one but two different candidates, hired to lead Microsoft’s AI-research efforts and, finally, rehired back into his CEO position at OpenAI with a new board.

A couple weeks later, TIME selected him “CEO of the Year.” Altman’s saga is more than a tale of tech-industry intrigue. His story provides three valuable lessons for not only aspiring and current healthcare leaders, but also everyone who works with and depends on them.

1. Agree On The Goal, Define It, Then Pursue It Tirelessly

OpenAI’s governance structure presented a unique case: a not-for-profit board, whose stated mission was to protect humanity, found itself overseeing an enterprise valued at more than $80 billion. Predictably, this setup invited conflict, as the company’s humanitarian mission began to clash with the commercial realities of a lucrative, for-profit entity.

But there’s little evidence the brouhaha resulted from Altman’s financial interests. According to IRS filings, the CEO’s salary was only $58,333 at the time of his firing, and he reportedly owns no stock.

While Altman clearly knows the company needs to raise money to fund the creation of ever-more-powerful AI tools, his primary goal doesn’t appear to revolve around maximizing shareholder value or his own wealth.

In fact, I believe Altman and the now-disbanded board shared a common mission: to save humanity. The problem was that the parties were 180 degrees apart when it came to defining how exactly to protect humanity.

Altman’s path to saving humanity involved racing forward as fast as possible. As CEO, he understands generative AI’s potential to radically enhance productivity and alleviate threats like world hunger and climate change.

By contrast, the board feared that breakneck AI development could spiral out of control, posing a threat to human existence. Rather than perceiving AGI (artificial general intelligence) as a savior, much of the board worried that a self-learning system might harm humanity.

This dichotomy pitted a CEO intent on changing the world against a board intent on progressing at a cautious, incremental pace.

For Healthcare Leaders: Like OpenAI, American healthcare leaders share a common goal. Be they doctors, insurers or government health agencies, all tout the importance of “value-based care” (VBC), which, in general terms, is a financial and care-delivery model that pays healthcare professionals for the quality of clinical outcomes they achieve rather than the quantity of services they provide. But despite agreeing on the target, leaders differ on what it means and how best to accomplish it. Some think of VBC as “pay for performance,” whereby doctors earn small incentives based on metrics around prevention and patient satisfaction. These programs fail because the incentives are too modest to change behavior: clinicians ignore the metrics, and total health suffers.

Other leaders believe VBC means paying insurers a set, annual, upfront fee to provide healthcare to a population of patients. This, too, fails since the insurers turn around and pay doctors and hospitals on a fee-for-service basis, and implement restrictive prior authorization requirements to keep costs down.

Instead of making minor financial tweaks that keep falling short of the goal, leaders who want to transform American medicine must play to win. This will require them to move quickly and completely away from fee-for-service payments (which reward volume of care) to capitation at the delivery-system level (rewarding superior results by prepaying doctors and hospitals directly, without insurers playing the part of middlemen).

Like OpenAI’s former board members, today’s healthcare leaders are playing “not to lose.” They avoid making big changes because they fear the backlash of risk-averse providers. But anything less than all in won’t make a dent given the magnitude of problems. To be effective, leaders must make hard decisions, accept the risks and be confident that once the changes are in place, no one will want to go back to the old ways of doing things.

2. Hire Visionary Leaders Who Inspire Boldly

Many tech-industry commentators have drawn comparisons between Altman and Steve Jobs. Both leaders possess(ed) the rare ability to foresee a better future and turn their visions into reality. And both demonstrate(d) passion for exceeding people’s wants and expectations—not for their own benefit but because they believe in a greater mission and purpose.

Altman and Jobs are what I call visionary leaders, who push their organizations and people to accomplish remarkable outcomes few could’ve believed possible. These types of leaders always challenge conservative boards.

When the OpenAI board realized how hard it is to constrain a CEO like Sam Altman, they fired him.

On day one of that decision, the board might have assumed their action would protect humanity and, therefore, earn the approval of OpenAI’s employees. But the story took a sharp turn when nearly all the company’s 770 workers signed a letter to the board in support of Altman, threatening to quit unless (a) their visionary leader was brought back immediately and (b) the board resigned.

Five days after the battle began, the board was facing a rebellion and had little choice but to back down.

For Healthcare Leaders: The American healthcare system is struggling. Half of Americans say they can’t afford their out-of-pocket expenses, which max out at $16,000 for an insured family. American health is languishing with average life expectancy virtually unchanged since the start of this century. Maternal and infant mortality rates in the U.S. are double what they are in other wealthy nations. And inside medicine, burnout runs rampant. Last year, 71,000 physicians left the profession.

Visionary leadership, often sidelined in favor of the status quo, is crucial for transformative change. In healthcare, boards typically prioritize hiring CEOs with the ability to consolidate market control and achieve positive financial results rather than the ability to drive excellence in clinical outcomes. The consequence for both the providers and recipients of care proves painful.

Like OpenAI’s employees, healthcare professionals want leaders who are genuine, who have the courage to abandon bureaucratic safety in favor of innovative solutions, and who can ignite their passion for medicine. For a growing number of clinicians, the practice of medicine has become a job, not a mission. Without that spark, the future of medicine will remain bleak.

3. Embrace Transformative Technology

OpenAI’s board simultaneously promoted and feared ChatGPT’s potential. In this era of advanced technology, the dilemma of embracing versus restraining innovation is increasingly common.

The board could have shut down OpenAI or done everything in its power to advance the AI. It couldn’t, however, do both. When organizations in highly competitive industries try to strike a safe “balance,” choosing the less-contentious middle ground, they almost always fail to accomplish their goals.

For Healthcare Leaders: Despite being data-driven scientists, healthcare professionals often hesitate to embrace information technologies. Having been burned by tools like electronic healthcare records, which were designed to maximize revenue and not to facilitate medical care, their skepticism is understandable.

But generative AI is different because it has the potential to simultaneously increase quality, accessibility and affordability. This is where technology and skilled leadership must combine forces. It’s not enough for leaders to embrace generative AI. They must also inspire clinicians to apply it in ways that promote collaboration and achieve day-to-day operational efficiency and effectiveness. Without both, any other operational improvements will be incremental and clinical advances minimal at best.

If the boards of directors and other similar decision-making bodies in healthcare want their organizations to lead the process of change, they’ll need to select and support leaders with the vision, courage, and skill to take radical and risky leaps forward. If not, as OpenAI’s narrative demonstrates, they and their organizations will become insignificant and be left behind.

ChatGPT will reduce clinician burnout, if doctors embrace it

Clinician burnout is a major problem. However, as I pointed out in a previous newsletter post, it is not a distinctly American problem.

A recent report from the Commonwealth Fund compared the satisfaction of primary care physicians in 10 high-income nations. Surprisingly, U.S. doctors ranked in the middle, reporting higher satisfaction rates than their counterparts in the U.K., Germany, Canada, Australia and New Zealand.

A Surprising Insight About Burnout

In self-reported surveys, American doctors link their dissatisfaction to problems unique to the U.S. healthcare system: excessive bureaucratic tasks, clunky computer systems and for-profit health insurance. These problems need to be solved, but to reduce clinician burnout we also need to address another factor that negatively impacts doctors around the globe.

Though national healthcare systems may vary greatly in their structure and financing, clinicians in wealthy nations all struggle to meet the ever-growing demand for medical services. And that’s due to the mounting prevalence and complications of chronic disease.

At the heart of the burnout crisis lies a fundamental imbalance between the volume and complexity of patient health problems (demand) and the amount of time that clinicians have to care for them (supply). This article offers a way to reverse both the surge in chronic illnesses and the ongoing clinician burnout crisis.

Supply vs. Demand: Reframing Burnout

When demand for healthcare exceeds doctors’ capacity to provide it, one might assume the easiest solution is to increase the supply of clinicians. But that outcome remains unlikely so long as the cost increases of U.S. medicine continue to outpace Americans’ ability to afford care.

Whenever healthcare costs exceed available funds, policymakers and healthcare commentators look to rationing. The Oregon Medicaid experiment of the 1990s offers a profound reminder of why this approach fails. Starting in 1989, a government taskforce brought patients and providers together to rank medical services by necessity. The plan was to provide only as many as funding would allow. When the plan rolled out, public backlash forced the state to retreat: it expanded the total services covered, driving costs back up without any improvement in health or any relief for clinicians.

Consumer Culture Can Drive Medical Culture

Ultimately, to reduce burnout, we will have to find a way to decrease clinical demand without raising costs or rationing care.

The best—and perhaps only viable—solution is to embrace technologies that empower patients with the ability to better manage their own medical problems.

American consumers today expect and demand greater control over their lives and daily decisions. Time and again, technology has made this possible.

Take stock trading, for example. Once the sole domain of professional brokers and financial advisors, today’s online trading platforms give individual investors direct access to the market and a wealth of information to make prudent financial decisions. Likewise, technology transformed the travel industry. Sites like Airbnb and Expedia empowered consumers to book accommodations, flights and travel experiences directly, bypassing traditional travel agents.

Technology will soon democratize medical expertise, as well, giving patients unprecedented access to healthcare tools and knowledge. Within the next five to 10 years, as ChatGPT and other generative AI applications become significantly more powerful and reliable, patients will gain the ability to self-diagnose, understand their diseases and make informed clinical decisions.

Today, clinicians are justifiably skeptical of outsized AI promises. But as technology proves itself worthy, clinicians who embrace and promote patient empowerment will not only improve medical outcomes, but also increase their own professional satisfaction.

Here’s how it can happen:

Empowering Patients With Generative AI

In the United States, health systems (i.e., large hospitals and medical groups) that heavily prioritize preventive medicine and chronic-disease management are home to healthier patients and more satisfied clinicians.

In these settings, patients are 30% to 50% less likely to die from heart attack, stroke and colon cancer than patients in the rest of the nation. That’s because their healthcare organizations run effective chronic-disease prevention programs and help patients manage their diabetes, hypertension, obesity and asthma, heading off life-threatening complications before they occur.

Most primary care physicians, however, don’t have the time to accomplish this by themselves. According to one study, physicians would need to work 26.7 hours per day to provide all the recommended preventive, chronic and acute care to a typical panel of 2,500 adult patients.

GenAI technologies like ChatGPT can help lessen the load. Soon, they’ll be able to offer patients more than just general advice about their chronic illnesses. They will give personalized health guidance. By connecting to electronic health records (EHRs)—even when those systems are spread across different doctors’ offices—GenAI will be able to analyze a patient’s specific health data to provide tailored prevention recommendations. It will be able to remind patients when they need a health screening, help schedule it and even arrange transportation. That’s not something Google or any other online platform can currently do.

Moreover, with new tools (like doctor-designed plugins expected in future ChatGPT updates) and data from fitness trackers and home health monitors, GenAI will be capable of not just displaying patient health data, but also interpreting it in the context of each person’s health history and treatment plans. These tools will be able to provide daily updates to patients with chronic conditions, telling them how they’re doing based on their doctor’s plan.

When the patient’s health data show they’re on the right track, there won’t be a need for an office visit, saving time for everyone. But if something seems off—say, blood pressure readings remain excessively high after the start of anti-hypertensive drugs—clinicians will be able to quickly adjust medications, often without the patient needing to come in. And when in-person visits are necessary, GenAI will summarize patient health information so the doctor can quickly understand and act, rather than starting from scratch.

ChatGPT is already helping people make better lifestyle choices, suggesting diets tailored to individual health needs, complete with shopping lists and recipes. It also offers personalized exercise routines and advice on mental well-being.

Another way generative AI can help is by diagnosing and treating common, non-life-threatening medical problems (e.g., musculoskeletal, allergic or viral issues). ChatGPT and Med-PaLM 2 have already demonstrated the ability to diagnose a range of clinical issues as effectively and safely as most clinicians. Looking ahead, GenAI will offer even greater diagnostic accuracy. When symptoms are worrisome, GenAI will alert patients, speeding up definitive treatment. Its ability to thoroughly analyze symptoms and ask detailed questions without the time pressure doctors feel today will help prevent many of our nation’s 400,000 annual deaths from misdiagnosis.

The outcomes—fewer chronic diseases, fewer heart attacks and strokes and more medical problems solved without an office visit—will decrease demand, giving doctors more time with the patients they see. As a result, clinicians will leave the office feeling more fulfilled and less exhausted at the end of the day.

The goal of enhanced technology use isn’t to eliminate doctors. It’s to give them the time they desperately need in their daily practice, without further increasing already unaffordable medical costs. And rather than eroding the physician-patient bond, the AI-empowered patient will strengthen it, since clinicians will have the time to dive deeper into complex issues when people come to the office.

A More Empowered Patient Is Key To Reducing Burnout

AI startups are working hard to create tools that assist physicians with all sorts of tasks: EHR data entry, organizing office duties and submitting prior authorization requests to insurance companies.

These functions will help clinicians in the short run. But any tool that fails to solve the imbalance between supply (of clinician time) and demand (for medical services) will be nothing more than a temporary fix.

Our nation is caught in a vicious cycle of rising healthcare demand, leading to more patient visits per day per doctor, producing higher rates of burnout, poorer clinical outcomes and ever-higher demand. By empowering patients with GenAI, we can start a virtuous cycle in which technology reduces the strain on doctors, allowing them to spend more time with patients who need it most. This will lead to better health outcomes, less burnout for clinicians and further decreases in overall healthcare demand.

Physicians and medical societies have the opportunity to take the lead. They’ll have to educate the public on how to use this technology effectively, assist in connecting it to existing data sources and ensure that the recommendations it makes are reliable and safe. The time to start this process is now.