How cognitive biases impact healthcare decisions

https://www.linkedin.com/pulse/how-cognitive-biases-impact-healthcare-decisions-robert-pearl-m-d–ti5qc/

Day one of the healthcare strategy course I teach in the Stanford Graduate School of Business begins with this question: “Who here receives excellent medical care?”

Most of the students raise their hands confidently. I look around the room at some of the most brilliant young minds in business, finance and investing—all of them accustomed to making quick yet informed decisions. They can calculate billion-dollar deals to the second decimal point in their heads. They pride themselves on being data-driven and discerning.

Then I ask, “How do you know you receive excellent care?”

The hands slowly come down and the room falls silent. In that moment, it’s clear these future business leaders have reached a conclusion without a shred of reliable data or evidence.

Not one of them knows how often their doctors make diagnostic or technical errors. They can’t say whether their health system’s rate of infection or medical error is high, average or low.

What’s happening is that they’re conflating service with clinical quality. They assume a doctor’s bedside manner correlates with excellent outcomes.

These often false assumptions are part of a multi-millennia-long relationship wherein patients are reluctant to ask doctors uncomfortable but important questions: “How many times have you performed this procedure over the past year and how many patients experienced complications?” “What’s the worst outcome a patient of yours had during and after surgery?”

The answers are objective predictors of clinical excellence. Without them, patients are likely to fall victim to the halo effect—a cognitive bias in which positive traits in one area (like friendliness) are assumed to carry over to another (medical expertise).

This is just one example of the many subconscious biases that distort our perceptions and decision-making.

From the waiting room to the operating table, these biases impact both patients and healthcare professionals with negative consequences. Acknowledging these biases isn’t just an academic exercise. It’s a crucial step toward improving healthcare outcomes.

Here are four more cognitive errors that cause harm in healthcare today, along with my thoughts on what can be done to mitigate their effects:

Availability bias

You’ve probably heard of the “hot hand” in Vegas—a lucky streak at the craps table that draws big cheers from onlookers. But luck is an illusion, a product of our natural tendency to see patterns where none exist. Nothing about the dice changes based on the last throw or the individual shaking them.

This mental error, first described as “availability bias” by psychologists Amos Tversky and Daniel Kahneman, was part of groundbreaking research in the 1970s and ’80s in the fields of behavioral economics and cognitive psychology. The duo challenged the prevailing assumption that humans make rational choices.

Availability bias, despite being identified nearly 50 years ago, still plagues human decision making today, even in what should be the most scientific of places: the doctor’s office.

Physicians frequently recommend a treatment plan based on the last patient they saw, rather than considering the overall probability that it will work. If a medication has a 10% complication rate, it means that 1 in 10 people will experience an adverse event. Yet, if a doctor’s most recent patient had a negative reaction, the physician is less likely to prescribe that medication to the next patient, even when it is the best option, statistically.

Confirmation bias

Have you ever had a “gut feeling” and stuck with it, even when confronted with evidence it was wrong? That’s confirmation bias. It skews our perceptions and interpretations, leading us to embrace information that aligns with our initial beliefs—and causing us to discount all indications to the contrary.

This tendency is heightened in a medical system where physicians face intense time pressures. Studies indicate that doctors, on average, interrupt patients within the first 11 seconds of being asked “What brings you here today?” With scant information to go on, doctors quickly form a hypothesis, using additional questions, diagnostic testing and medical-record information to support their first impression.

Doctors are well trained, and their assumptions prove correct more often than not. Nevertheless, hasty decisions can be dangerous. Each year in the United States, an estimated 371,000 patients die from misdiagnoses.

Patients aren’t immune to confirmation bias, either. People with a serious medical problem commonly seek a benign explanation and find evidence to justify it. When this happens, heart attacks are dismissed as indigestion, leading to delays in diagnosis and treatment.

Framing effect

In 1981, Tversky and Kahneman asked subjects to help the nation prepare for a hypothetical viral outbreak. They explained that if the disease were left untreated, it would kill 600 people. Participants in one group were told that an available treatment, although risky, would save 200 lives. The other group was told that, despite the treatment, 400 people would die. Although both descriptions led to the same outcome—200 people surviving and 400 dying—the first group favored the treatment, whereas the second group largely opposed it.

The study illustrates how differently people can react to identical scenarios based on how the information is framed. Researchers have discovered that the human mind magnifies and experiences loss far more powerfully than equivalent gains. So, patients will consent to a chemotherapy regimen that has a 20% chance of cure but decline the same treatment when told it has an 80% likelihood of failure.

Self-serving bias

The best parts about being a doctor are saving and improving lives. But there are other perks, as well.

Pharmaceutical and medical-device companies aggressively reward physicians who prescribe and recommend their products. Whether it’s a sponsored dinner at a Michelin-starred restaurant or even a pizza delivered to the office staff, the intention of the reward is always the same: to sway the decisions of doctors.

And yet, physicians swear that no meal or gift will influence their prescribing habits. And they believe it because of “self-serving bias.”

In the end, it’s patients who pay the price. Rather than receiving a generic prescription for a fraction of the cost, patients end up paying more for a brand-name drug because their doctor—at a subconscious level—doesn’t want to lose out on the perks.

Thanks to the “Sunshine Act,” patients can check sites like ProPublica’s Dollars for Docs to find out whether their healthcare professional is receiving drug- or device-company money (and how much).

Reducing subconscious bias

These cognitive biases may not be the reason U.S. life expectancy has stagnated for the past 20 years, but they stand in the way of positive change. And they contribute to the medical errors that harm patients.

A study published this month in JAMA Internal Medicine found that 1 in 4 hospital patients who either died or were transferred to the ICU had been affected by a diagnostic mistake. Knowing this, you might think cognitive biases would be a leading subject at annual medical conferences and a topic of grave concern among healthcare professionals. You’d be wrong. Inside the culture of medicine, these failures are commonly ignored.

The recent story of an economics professor offers one possible solution. Upon experiencing abdominal pain, he went to a highly respected university hospital. After laboratory testing and observation, his attending doctor concluded the problem wasn’t serious—a gallstone at worst. He told the patient to go home and return for outpatient workup.

The professor wasn’t convinced. Fearing that the medical problem was severe, the professor logged onto ChatGPT (a generative AI technology) and entered his symptoms. The application concluded that there was a 40% chance of a ruptured appendix. The doctor reluctantly ordered an MRI, which confirmed ChatGPT’s diagnosis.

Future generations of generative AI, pretrained with data from people’s electronic health records and fed with information about cognitive biases, will be able to spot these types of errors when they occur.

Deviation from standard practice will result in alerts, bringing cognitive errors to consciousness, thus reducing the likelihood of misdiagnosis and medical error. Rather than resisting this kind of objective second opinion, I hope clinicians will embrace it. The opportunity to prevent harm would constitute a major advance in medical care.

ChatGPT will reduce clinician burnout, if doctors embrace it

Clinician burnout is a major problem. However, as I pointed out in a previous newsletter post, it is not a distinctly American problem.

A recent report from the Commonwealth Fund compared the satisfaction of primary care physicians in 10 high-income nations. Surprisingly, U.S. doctors ranked in the middle, reporting higher satisfaction rates than their counterparts in the U.K., Germany, Canada, Australia and New Zealand.

A Surprising Insight About Burnout

In self-reported surveys, American doctors link their dissatisfaction to problems unique to the U.S. healthcare system: excessive bureaucratic tasks, clunky computer systems and for-profit health insurance. These problems need to be solved, but to reduce clinician burnout we also need to address another factor that negatively impacts doctors around the globe.

Though national healthcare systems may vary greatly in their structure and financing, clinicians in wealthy nations all struggle to meet the ever-growing demand for medical services. And that’s due to the mounting prevalence and complications of chronic disease.

At the heart of the burnout crisis lies a fundamental imbalance between the volume and complexity of patient health problems (demand) and the amount of time that clinicians have to care for them (supply). This article offers a way to reverse both the surge in chronic illnesses and the ongoing clinician burnout crisis.

Supply vs. Demand: Reframing Burnout

When demand for healthcare exceeds doctors’ capacity to provide it, one might assume the easiest solution is to increase the supply of clinicians. But that outcome remains unlikely so long as the cost increases of U.S. medicine continue to outpace Americans’ ability to afford care.

Whenever healthcare costs exceed available funds, policymakers and healthcare commentators look to rationing. The Oregon Medicaid experiment of the 1990s offers a profound reminder of why this approach fails. Starting in 1989, a government taskforce brought patients and providers together to rank medical services by necessity. The plan was to cover only as many services as funding would allow. When the plan rolled out, public backlash forced the state to retreat. It expanded the total number of services covered, driving costs back up without any improvement in health or any relief for clinicians.

Consumer Culture Can Drive Medical Culture

Ultimately, to reduce burnout, we will have to find a way to decrease clinical demand without raising costs or rationing care.

The best—and perhaps only viable—solution is to embrace technologies that empower patients with the ability to better manage their own medical problems.

American consumers today expect and demand greater control over their lives and daily decisions. Time and again, technology has made this possible.

Take stock trading, for example. Once the sole domain of professional brokers and financial advisors, today’s online trading platforms give individual investors direct access to the market and a wealth of information to make prudent financial decisions. Likewise, technology transformed the travel industry. Sites like Airbnb and Expedia empowered consumers to book accommodations, flights and travel experiences directly, bypassing traditional travel agents.

Technology will soon democratize medical expertise, as well, giving patients unprecedented access to healthcare tools and knowledge. Within the next five to 10 years, as ChatGPT and other generative AI applications become significantly more powerful and reliable, patients will gain the ability to self-diagnose, understand their diseases and make informed clinical decisions.

Today, clinicians are justifiably skeptical of outsized AI promises. But as technology proves itself worthy, clinicians who embrace and promote patient empowerment will not only improve medical outcomes, but also increase their own professional satisfaction.

Here’s how it can happen:

Empowering Patients With Generative AI

In the United States, health systems (i.e., large hospitals and medical groups) that heavily prioritize preventive medicine and chronic-disease management are home to healthier patients and more satisfied clinicians.

In these settings, patients are 30% to 50% less likely to die from heart attack, stroke and colon cancer than patients in the rest of the nation. That’s because their healthcare organizations provide effective chronic-disease prevention programs and assist patients in managing their diabetes, hypertension, obesity and asthma. As a result, patients experience fewer complications like heart attacks, strokes and cancer.

Most primary care physicians, however, don’t have the time to accomplish this by themselves. According to one study, physicians would need to work 26.7 hours per day to provide all the recommended preventive, chronic and acute care to a typical panel of 2,500 adult patients.

GenAI technologies like ChatGPT can help lessen the load. Soon, they’ll be able to offer patients more than just general advice about their chronic illnesses. They will give personalized health guidance. By connecting to electronic health records (EHRs)—even when those systems are spread across different doctors’ offices—GenAI will be able to analyze a patient’s specific health data to provide tailored prevention recommendations. It will be able to remind patients when they need a health screening, help schedule it and even arrange transportation. That’s not something Google or any other online platform can currently do.

Moreover, with new tools (like doctor-designed plugins expected in future ChatGPT updates) and data from fitness trackers and home health monitors, GenAI will be capable of not just displaying patient health data, but also interpreting it in the context of each person’s health history and treatment plans. These tools will be able to provide daily updates to patients with chronic conditions, telling them how they’re doing based on their doctor’s plan.

When the patient’s health data show they’re on the right track, there won’t be a need for an office visit, saving time for everyone. But if something seems off—say, blood pressure readings remain excessively high after the start of anti-hypertensive drugs—clinicians will be able to quickly adjust medications, often without the patient needing to come in. And when in-person visits are necessary, GenAI will summarize patient health information so the doctor can quickly understand and act, rather than starting from scratch.

ChatGPT is already helping people make better lifestyle choices, suggesting diets tailored to individual health needs, complete with shopping lists and recipes. It also offers personalized exercise routines and advice on mental well-being.

Another way generative AI can help is by diagnosing and treating common, non-life-threatening medical problems (e.g., musculoskeletal, allergic or viral issues). ChatGPT and Med-PaLM 2 have already demonstrated the ability to diagnose a range of clinical issues as effectively and safely as most clinicians. Looking ahead, GenAI will offer even greater diagnostic accuracy. When symptoms are worrisome, GenAI will alert patients, speeding up definitive treatment. Its ability to thoroughly analyze symptoms and ask detailed questions without the time pressure doctors feel today will help prevent many of our nation’s 400,000 annual deaths from misdiagnosis.

The outcomes—fewer chronic diseases, fewer heart attacks and strokes and more medical problems solved without an office visit—will decrease demand, giving doctors more time with the patients they see. As a result, clinicians will leave the office feeling more fulfilled and less exhausted at the end of the day.

The goal of enhanced technology use isn’t to eliminate doctors. It’s to give them the time they desperately need in their daily practice, without further increasing already unaffordable medical costs. And rather than eroding the physician-patient bond, the AI-empowered patient will strengthen it, since clinicians will have the time to dive deeper into complex issues when people come to the office.

A More Empowered Patient Is Key To Reducing Burnout

AI startups are working hard to create tools that assist physicians with all sorts of tasks: EHR data entry, organizing office duties and submitting prior authorization requests to insurance companies.

These functions will help clinicians in the short run. But any tool that fails to solve the imbalance between supply (of clinician time) and demand (for medical services) will be nothing more than a temporary fix.

Our nation is caught in a vicious cycle of rising healthcare demand, leading to more patient visits per day per doctor, producing higher rates of burnout, poorer clinical outcomes and ever-higher demand. By empowering patients with GenAI, we can start a virtuous cycle in which technology reduces the strain on doctors, allowing them to spend more time with patients who need it most. This will lead to better health outcomes, less burnout for clinicians and further decreases in overall healthcare demand.

Physicians and medical societies have the opportunity to take the lead. They’ll have to educate the public on how to use this technology effectively, assist in connecting it to existing data sources and ensure that the recommendations it makes are reliable and safe. The time to start this process is now.

How generative AI will change the doctor-patient relationship

https://www.linkedin.com/pulse/how-generative-ai-change-doctor-patient-relationship-pearl-m-d-/

After decades of “doctor knows best,” the traditional physician-patient relationship is on the verge of a monumental shift. Generative AI tools like OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Bing are poised to give people significantly more power and control—not just over their personal lives and professional tasks, but over their own medical health, as well.

As these tools become exponentially smarter, safer and more reliable (an estimated 32 times more powerful in the next five years), everyday Americans will gain access to unparalleled medical expertise—doled out in easily understandable terms, at any time, from any place.

Already, Google’s Med-PaLM 2 has scored an expert-level 86.5% on the U.S. medical license exam while other AI tools have matched the skill and accuracy of average doctors in diagnosing complex medical diseases.

Soon, AI tools will be able to give patients detailed information about their specific medical problems by integrating with health monitors and electronic medical records (such EHR projects are already underway at Oracle/Cerner and Epic). In time, people will be able to self-diagnose and manage their own diseases as accurately and competently as today’s clinicians.

This newfound expertise will shake the very foundation of clinical practice.

Although public health experts have long touted the concept of clinicians and patients working together through shared decision-making, this rarely happens in practice. Generative AI will alter that reality.

Building on part one of this article, which explained why generative AI constitutes a quantum leap ahead of all the tech that came before it, part two provides a blueprint for strengthening the doctor-patient alliance in the era of generative AI.

Patients Today: Sick And Confused

To understand how generative AI will impact the practice of medicine, it’s best to look closer at the current doctor-patient dynamic.

The relationship has undergone significant evolution. In the past century, patients and doctors held close, enduring relationships, built on trust and a deep understanding of the patient’s individual needs. These bonds were characterized by a strong sense of personal connection, as doctors had the time to listen to their patients’ concerns and provided not only medical treatment but also emotional support.

Today, the doctor-patient relationship remains vitally important, but it has undergone several meaningful changes. While medical advancements have greatly expanded the possibilities for diagnosis and treatment, the relationship itself has suffered from less trust and a more transactional focus. The average visit lasts just 15 minutes, barely enough time to address the patient’s current medical concerns. The doctor’s computer and electronic health record system sits, quite literally, between doctor and patient. The result of these time constraints, administrative burdens and the focus on efficiency is that patients feel rushed, communication suffers and medical care grows increasingly impersonal.

But throughout these changes, one thing has remained constant. The doctor-patient relationship, which dates back more than five millennia, has always existed on an uneven playing field, with patients forced to rely almost entirely on doctors to understand their diseases and what to do about them.

Though patients can and do access the internet for a list of possible diagnoses and treatment options, that’s not the same as possessing medical expertise. In fact, sorting through dozens of online sources—often with conflicting, inaccurate, outdated and self-serving information—proves more confusing than clarifying. Nowhere can web-surfers find personalized and credible advice based on their age, medical history, genetic makeup, current medications and laboratory results.

What’s needed now is a modern doctor-patient relationship, one that is strong enough to meet the demands of medicine today and restore the vital, personal and emotional connections of the past.

Patients Tomorrow: Self-Diagnosing And Confident

In the future, generative AI will alter the doctor-patient dynamic by leveling the playing field.

Already, consumer AI tools can equip users with not just knowledge, but expertise. They allow the average person to create artistic masterpieces, produce hit songs and write code with unimagined sophistication. The next generations of these tools will offer similar abilities to patients, even those without a background in science or medicine.

Like a digitized second opinion, generative AI will shrink the knowledge gap between doctors and patients in ways that search engines can’t. By accessing millions of medical texts, peer-reviewed journals and scientific articles, ChatGPT will deliver accurate and unbiased medical expertise in layman’s language. And unlike internet sources, generative AI tools don’t have built-in financial incentives or advertising models that might skew responses.

To help patients and doctors navigate the upcoming era of generative AI, here’s a model for the future of medical practice based on proven approaches in education:  

Introducing The ‘Flipped Healthcare’ Model

The “flipped classroom” can be traced back nearly four decades, but it was popularized in the United States in the early 2000s through the Khan Academy in Northern California.

Students begin the learning process by watching videos and engaging with interactive tools online rather than sitting through traditional lectures. This pre-class preparation (or “homework in advance”) allows people to learn at their own pace. Moreover, it enhances classroom discussions, letting teachers and students dive much deeper into topics than they ever could before. Indeed, students spend time in class applying knowledge and collaborating to solve problems—not merely listening and taking notes.  

The introduction of generative AI opens the door to a similar approach in healthcare. Here’s how that might work in practice:

  1. Pre-Consultation Learning: Before visiting a doctor, patients would use generative AI tools to understand their symptoms or medical conditions. This foundational knowledge would accelerate the diagnostic process and enhance patient understanding. Even in the absence of advanced diagnostic testing (X-rays or bloodwork), this pre-consultation phase allows the patient to understand the questions their clinicians will ask and the steps they will take.
  2. In-Depth Human Interactions: With the patient’s knowledge base already established, consultations will dive deep into proactive health strategies and/or long-term chronic-disease management solutions, rather than having to start at square one. This approach maximizes the time patients and clinicians spend together. It also addresses the reality that at least 50% of patients leave the doctor’s office unsure of what they’ve been told.
  3. Home Monitoring: For the 60% of American patients living with chronic diseases, generative AI combined with wearable monitors will provide real-time feedback, thereby optimizing clinical outcomes. These patients, instead of going in for periodic visits (every three to six months), will obtain daily medical analysis and insights. And in cases where generative AI spots trouble (e.g., health data deviates from the doctor’s expectations), the provider will be able to update medications immediately. And when the patient is doing well, physicians can cancel follow-up visits, eliminating wasted time for all.
  4. Hospital At Home: Inpatient (hospital) care accounts for 30% of all healthcare costs. By continuously monitoring patients with medical problems like mild pneumonia and controllable bacterial infections, generative AI (combined with home monitoring devices and telemedicine access) would allow individuals to be treated in the comfort of their home, safely and more affordably than today.
  5. Lifestyle Medicine: Generative AI would support preventive health measures and lifestyle changes, reducing the overall demand for in-person clinical care. Studies confirm that focusing on diet, exercise and recommended screenings can reduce the deadliest complications of chronic disease (heart attack, stroke, cancer) by 30% or more. Decreasing the need for intensive procedures is the best way to make healthcare affordable and address the projected shortage of doctors and nurses in the future.

The Future: Collaborative Care For Superior Outcomes

The U.S. healthcare model often leaves patients feeling frustrated and overwhelmed. Meanwhile, time constraints placed on doctors lead to rushed consultations and misdiagnoses, which cause an estimated 800,000 deaths and disabilities annually.

The “flipped” approach, inspired by the Khan Academy, leverages the patient expertise that generative AI will create. Following this model will free up clinician time to make the most of every visit. Implementing this blueprint will require improvements in AI technology and an evolution of medical culture, but it offers the opportunity to make the doctor-patient relationship more collaborative and create empowered patients who will improve their health.

Talk with educators at the Khan Academy, and they will tell you how their innovative model results in better-educated students. They’ll also tell you how much more satisfied teachers and students are compared to those working in the traditional educational system. The same can be true for American medicine.

The AI-empowered patient is coming. Are doctors ready?

https://www.linkedin.com/pulse/ai-empowered-patient-coming-doctors-ready-robert-pearl-m-d-/

Artificial intelligence (AI) has long been heralded as an emerging force in medicine. Since the early 2000s, promises of a technological transformation in healthcare have echoed through the halls of hospitals and at medical meetings.

But despite 20-plus years of hype, AI’s impact on medical practice and America’s health remains negligible (with minor exceptions in areas like radiological imaging and predictive analytics).

As such, it’s understandable that physicians and healthcare administrators are skeptical about the benefits that generative AI tools like ChatGPT will provide.

They shouldn’t be. This next generation of AI is unlike any technology that has come before. 

The launch of ChatGPT in late 2022 marked the dawn of a new era. This “large language model” developed by OpenAI first gained prominence by helping users write better emails and term papers. Within months, a host of generative AI products sprang up from Google, Microsoft, Amazon and others. These tools are quickly becoming more than mere writing assistants.

In time, they will radically change healthcare, empower patients and redefine the doctor-patient relationship. To make sense of this bold vision for the future, this two-part article explores:

  1. The massive differences between generative AI and prior artificial intelligences
  2. How, for the first time in history, a technological innovation will democratize not just knowledge, but also clinical expertise, making medical prowess no longer the sole domain of healthcare professionals.

To understand why this time is different, it’s helpful to compare the limited power of the two earliest generations of AI against the near-limitless potential of the latest version.

Generation 1: Rules-Based Systems And The Dawn Of AI In Healthcare

The latter half of the 20th century ushered in the first generation of artificial intelligence, known as rule-based AI.

Programmed by computer engineers, this type of AI relies on a series of human-generated instructions (rules), enabling the technology to solve basic problems.

In many ways, the rule-based approach resembles a traditional medical-school pedagogy where medical students are taught hundreds of “algorithms” that help them translate a patient’s symptoms into a diagnosis.

These decision-making algorithms resemble a tree, beginning with a trunk (the patient’s chief complaint) and branching out from there. For example, if a patient complains of a severe cough, the doctor first assesses whether fever is present. If yes, the doctor moves to one set of questions and, if not, to a different set. Assuming the patient has been febrile (with fever), the next question is whether the patient’s sputum is normal or discolored. And once again, this leads to the next subdivision. Ultimately each end branch contains only a single diagnosis, which can range from bacterial, fungal or viral pneumonia to cancer, heart failure or a dozen other pulmonary diseases.
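The branching logic described above maps naturally onto a simple program. Below is a minimal, purely illustrative sketch of such a rule-based tree; the symptoms, branch points and diagnostic labels are hypothetical placeholders, not clinical guidance.

```python
# Illustrative rule-based diagnostic tree for the "severe cough" example.
# Every branch and label here is a hypothetical placeholder, not medical advice.

def diagnose_cough(febrile: bool, sputum_discolored: bool) -> str:
    """Walk from the chief complaint (trunk) down the branches to an end diagnosis."""
    if febrile:
        # Fever moves the doctor to one set of questions...
        if sputum_discolored:
            return "suspect bacterial pneumonia"
        return "suspect viral infection"
    # ...while an afebrile patient moves to a different set.
    if sputum_discolored:
        return "suspect chronic bronchitis"
    return "consider non-infectious causes"

print(diagnose_cough(febrile=True, sputum_discolored=True))
```

Each `if` corresponds to one branch point, and each `return` is an end branch holding a single diagnosis. Within its rules the tree is fast and exhaustive, which is precisely why it breaks down whenever a real patient doesn’t fit the tree.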

This first generation of AI could rapidly process data, sorting quickly through the entire branching tree. And in circumstances where the algorithm could accurately account for all possible outcomes, rule-based AI proved more efficient than doctors.

But patient problems are rarely so easy to analyze and categorize. Often, it’s difficult to separate one set of diseases from another at each branch point. As a result, this earliest form of AI wasn’t as accurate as doctors who combined medical science with their own intuition and experience. And because of its limitations, rule-based AI was rarely used in clinical practice.

Generation 2: Narrow AI And The Rise Of Specialized Systems

As the 21st century dawned, the second era of AI began. The introduction of neural networks, mimicking the human brain’s structure, paved the way for deep learning.

Narrow AI functioned very differently from its predecessors. Rather than relying on researcher-defined rules, these second-generation systems feasted on massive data sets, using them to discern patterns that the human mind alone could not.

In one example, researchers gave a narrow AI system thousands of mammograms, half showing malignant cancer and half benign. The model quickly identified dozens of differences in the shape, density and shade of the radiological images, assigning each a weight that reflected its contribution to the probability of malignancy. Importantly, this kind of AI wasn’t relying on heuristics (a few rules of thumb) the way humans do, but on subtle variations between the malignant and normal exams that neither the radiologists nor the software designers knew existed.

In contrast to rule-based AI, these narrow AI tools proved superior to the doctor’s intuition in terms of diagnostic accuracy. Still, narrow AI showed serious limitations. For one, each application is task-specific: a system trained to read mammograms can’t interpret brain scans or chest X-rays.

But the biggest limitation of narrow AI is that the system is only as good as the data it’s trained on. A glaring example of that weakness emerged when United Healthcare relied on narrow AI to identify its sickest patients and give them additional healthcare services.

In filtering through the data, researchers later discovered the AI had made a fatal assumption: patients who received less medical care were categorized as healthier than patients who received more. The AI failed to recognize that less treatment is not always the result of better health; it can also be the result of implicit human bias.

Indeed, when researchers went back and reviewed the outcomes, they found Black patients were being significantly undertreated and were, therefore, underrepresented in the group selected for additional medical services.

Media headlines proclaimed, “Healthcare algorithm has racial bias,” but it wasn’t the algorithm that had discriminated against Black patients. It was the result of physicians providing Black patients with insufficient and inequitable treatment. In other words, the problem was the humans, not narrow AI.
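The failure mode in this episode is easy to reproduce in miniature: when past spending stands in for health need, patients who were historically under-treated look "healthy" to the model. All identifiers and numbers below are invented for illustration:

```python
# Toy demonstration of the proxy problem: spending reflects access to care,
# not just medical need, so a spending threshold misses under-treated patients.

patients = [
    # (id, true_need, past_spending)
    ("A", "high", 9000),   # high need, historically well served
    ("B", "high", 2500),   # same high need, historically under-treated
    ("C", "low", 2000),
    ("D", "low", 1800),
]

SPEND_THRESHOLD = 5000  # flag for extra services when spending suggests "sick"

flagged = [pid for pid, _, spend in patients if spend > SPEND_THRESHOLD]
missed = [pid for pid, need, spend in patients
          if need == "high" and spend <= SPEND_THRESHOLD]

print("flagged for extra care:", flagged)  # ['A']
print("high-need but missed:", missed)     # ['B'] -- the bias baked into the data
```

The model faithfully learns the pattern in its training data; the pattern itself encodes the human inequity.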

Generation 3: The Future Is Generative

Throughout history, humankind has produced a few innovations (printing press, internet, iPhone) that transformed society by democratizing knowledge—making information easier to access for everyone, not just the wealthy elite.

Now, generative AI is poised to go one step further, giving every individual access to not only knowledge but, more importantly, expertise as well.

Already, the latest AI tools allow users to create a stunning work of art in the style of Rembrandt without ever having taken a painting class. With large language models, people can record a hit song, even if they’ve never played a musical instrument. Individuals can write computer code, producing sophisticated websites and apps, despite never having enrolled in an IT course.

Future generations of generative AI will do the same in medicine, allowing people who never attended medical school to diagnose diseases and create a treatment plan as well as any clinician.

Already, one generative AI tool (Google’s Med-PaLM 2) passed the physician licensing exam with an expert level score. Another generative AI toolset responded to patient questions with advice that bested doctors in both accuracy and empathy. These tools can now write medical notes that are indistinguishable from the entries that physicians create and match residents’ ability to make complex diagnoses on difficult cases.

Granted, current versions require physician oversight and are nowhere close to replacing doctors. But at their present rate of exponential growth, these applications are expected to become at least 30 times more powerful in the next five years. As a result, they will soon empower patients in ways that were unimaginable even a year ago.

Unlike their predecessors, these models are pre-trained on datasets that encompass the near-totality of publicly available information—pulling from medical textbooks, journal articles, open-source platforms and the internet. In the not-distant future, these tools will be securely connected to electronic health records in hospitals, as well as to patient monitoring devices in the home. As generative AI feeds on this wealth of data, its clinical acumen will skyrocket.

Within the next five to 10 years, medical expertise will no longer be the sole domain of trained clinicians. Future generations of ChatGPT and its peers will put medical expertise in the hands of all Americans, radically altering the relationship between doctors and patients.

Whether physicians embrace this development or resist it is uncertain. What is clear is the opportunity for improvement in American medicine. Today, an estimated 400,000 people die annually from misdiagnoses, 250,000 from medical errors, and 1.7 million from mostly preventable chronic diseases and their complications.

In the next article, I’ll offer a blueprint for Americans as they grapple to redefine the doctor-patient relationship in the context of generative AI. To reverse the healthcare failures of today, the future of medicine will have to belong to the empowered patient and the tech-savvy physician. The combination will prove vastly superior to either alone.

Healing Healthcare: Repairing The Last 5 Years Of Damage

Five years ago, I started the Fixing Healthcare podcast with the aim of spotlighting the boldest possible solutions—ones that could completely transform our nation’s broken medical system.

But since then, rather than improving, U.S. healthcare has fallen further behind its global peers, notching far more failures than wins.

In that time, the rate of chronic disease has climbed while life expectancy has fallen dramatically. Nearly half of American adults now struggle to afford healthcare. In addition, a growing mental-health crisis grips our country. Maternal mortality is on the rise. And healthcare disparities are expanding along racial and socioeconomic lines.

Reflecting on why few, if any, of these recommendations have been implemented, I don’t believe the problem has been a lack of desire to change or the quality of the ideas. Rather, the biggest obstacle has been the immense size and scope of the changes proposed.

To overcome the inertia, our nation will need to narrow its ambitions and begin with a few incremental steps that address key failures. Here are three actionable and inexpensive steps that elected officials and healthcare leaders can quickly take to improve our nation’s health: 

1. Shore Up Primary Care

Compared to the United States, the world’s most-effective and highest-performing healthcare systems deliver better quality of care at significantly lower costs.

One important difference between us and them: primary care.

In most high-income nations, primary care makes up roughly half of the physician workforce. In the United States, it accounts for less than 30% (with a projected shortage of 48,000 primary care physicians over the next decade).

Primary care—better than any other specialty—simultaneously increases life expectancy while lowering overall medical expenses by (a) screening for and preventing diseases and (b) helping patients with chronic illness avoid the deadliest and most-expensive complications (heart attack, stroke, cancer).

But because it takes at least three years after medical school to train a primary care physician, the U.S. government must act immediately to make a dent in the shortage over the next five years:

The first action is to expand resident education for primary care. Congress, which authorizes the funding, would allocate $200 million annually to create 1,000 additional primary-care residency positions each year. The cost would be less than 0.2% of federal spending on healthcare.

The second action requires no additional spending. Instead, the Centers for Medicare & Medicaid Services, which covers the cost of care for roughly half of all American adults, would shift dollars to narrow the $108,000 pay gap between primary care doctors and specialists. This will help attract the best medical students to the specialty.

Together, these actions will bolster primary care and improve the health of millions.

2. Use Technology To Expand Access, Lower Costs

A decade after the passage of the Affordable Care Act, 30 million Americans are without health insurance while tens of millions more are underinsured, limiting access to necessary medical care.

Furthermore, healthcare is expected to become even less affordable for most Americans. Without urgent action, national medical expenditures are projected to rise from $4.3 trillion to $7.2 trillion over the next eight years, and the Medicare trust fund will become insolvent.

With costs soaring, payers (businesses and government) will resist any proposal that expands coverage and, most likely, will look to restrict health benefits as premiums rise.

Almost every industry that has had to overcome similar financial headwinds did so with technology. Healthcare can take a page from this playbook by expanding the use of telemedicine and generative AI.

At the peak of the Covid-19 pandemic, telehealth visits accounted for 69% of all physician appointments as the government waived restrictions on usage. And, contrary to widespread fears at the time, patients and doctors rated the quality, convenience and safety of these virtual visits as excellent. However, with the end of Covid-19, many states are now restricting telemedicine, particularly when clinicians practice in a different state than the patient.

To expand telemedicine use—both for physical and mental health issues—state legislators and regulators will need to loosen restrictions on virtual care. This will increase access for patients and diminish the cost of medical care.

It doesn’t make sense that doctors can provide treatment to people who drive across state lines, but they can’t offer the same care virtually when the individual is at home.

Similarly, physicians who faced a shortage of hospital beds during the pandemic began to treat patients in their homes. As with telemedicine, the excellent quality and convenience of care drew praise from clinicians and patients alike.

Building on that success, doctors could combine wearable devices and generative AI tools like ChatGPT to monitor patients 24/7. Doing so would allow physicians to relocate care—safely and more affordably—from hospitals to people’s homes.

Translating this technology-driven opportunity into standard medical practice will require federal agencies like the FDA, NIH and CDC to encourage pilot projects and facilitate innovative, inexpensive applications of generative AI, rather than restricting their use.

3. Reduce Disparities In Medical Care

American healthcare is a system of haves and have-nots, where your income and race heavily determine the quality of care you receive.

Black patients, in particular, experience poorer outcomes from chronic disease and greater difficulty accessing state-of-the-art treatments. In childbirth, Black mothers in the U.S. die at twice the rate of white mothers, even when the data are corrected for insurance and financial status.

Generative AI applications like ChatGPT can help, provided that hospitals and clinicians embrace them for the purpose of providing more inclusive, equitable care.

Previous AI tools were narrow and designed by researchers to mirror how doctors practiced. As a result, when clinicians provided inferior care to Black patients, AI outputs proved equally biased. Now that we understand the problem of implicit human bias, future generations of ChatGPT can help overcome it.

The first step will be for hospital leaders to connect electronic health record systems to generative AI apps. Then, they will need to prompt the technology to notify clinicians when they provide insufficient care to patients from different racial or socioeconomic backgrounds. Bringing implicit bias to consciousness would save the lives of more Black women and children during delivery and could go a long way toward reversing our nation’s embarrassing maternal mortality rate (along with improving the country’s health overall).
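A minimal version of the notification envisioned above is a disparity check over care records: compare a quality metric across patient groups and alert clinicians when the gap is large. The records, field names and alert threshold below are all assumptions for illustration:

```python
# Sketch of a disparity alert: flag when recommended-care rates diverge
# between patient groups. Synthetic records; the threshold is an assumption.

records = [
    {"group": "X", "got_recommended_care": True},
    {"group": "X", "got_recommended_care": True},
    {"group": "X", "got_recommended_care": False},
    {"group": "Y", "got_recommended_care": True},
    {"group": "Y", "got_recommended_care": False},
    {"group": "Y", "got_recommended_care": False},
]

def care_rate(group: str) -> float:
    """Fraction of a group's records where recommended care was delivered."""
    rows = [r for r in records if r["group"] == group]
    return sum(r["got_recommended_care"] for r in rows) / len(rows)

ALERT_GAP = 0.15  # assumed threshold for notifying clinicians

gap = care_rate("X") - care_rate("Y")
if abs(gap) > ALERT_GAP:
    print(f"alert: {abs(gap):.0%} gap in recommended-care rates between groups")
```

A production system would draw these records from the EHR and adjust for case mix, but the core idea is the same: make the gap visible so clinicians can act on it.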

The Next Five Years

Two things are inevitable over the next five years. Both will challenge the practice of medicine like never before and each has the potential to transform American healthcare.

First, generative AI will provide patients with more options and greater control. Faced with the difficulty of finding an available doctor, patients will turn to chatbots for their physical and psychological problems.

Already, AI has been shown to be more accurate in diagnosing medical problems and even more empathetic than clinicians in responding to patient messages. The latest versions of generative AI are not ready to fulfill the most complex clinical roles, but they will be in five years, when they are 30 times more powerful and capable.

Second, the retail giants (Amazon, CVS, Walmart) will play an ever-bigger role in care delivery. Each of these retailers has acquired primary care, pharmacy, IT and insurance capabilities, and all appear focused on Medicare Advantage, the capitated option for people over the age of 65. Five years from now, they will be ready to offer the businesses that pay for the medical coverage of over 150 million Americans the same type of prepaid, value-based healthcare that currently isn’t available in most parts of the country.

American healthcare can stop the current slide over the next five years if change begins now. I urge medical leaders and elected officials to lead the process by joining forces and implementing these highly effective, inexpensive approaches to rebuilding primary care, lowering medical costs, improving access and making healthcare more equitable.

There’s no time to waste. The clock is ticking.

Generative AI and its Future in Health Delivery

Context

Although artificial intelligence has been around for 50 years and has experienced several starts and stops, the last 5 to 10 years have seen a considerable uptick in adoption, especially in healthcare. It is now embedded in machine learning that enables faster and more precise imaging studies, in clinical decision support tools within electronic medical record systems, and more. In recent months, its potential to play a bigger role, possibly even replacing physician judgment, has received added attention.

In November 2022, the announcement of OpenAI’s ChatGPT platform drew widespread attention, with speculation it might displace clinicians in diagnosing and planning treatment for patients. On March 22, 2023, tech moguls Elon Musk, Steve Wozniak and Andrew Yang called for a 6-month moratorium on generative AI, stating: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” (1) To date, more than 13,000 have signed on to their appeal. Per Lumeris CTO Jean-Claude Saghbini: “Putting aside our own opinions as to whether or not a moratorium should be implemented, our recent experience of the last three years in the inability to have effective cross-governmental alignment on policy to fight the COVID pandemic suggests that global alignment on AI policy will be impossible.”

There’s widespread belief generative AI and GPT-4 are game changers in healthcare.

How, what, when and how much ($$$) are the big questions. The near-term issues associated with implementation (data security, workforce usefulness, regulation, investment costs) are expected to be resolved eventually. Thus, it is highly likely that health systems, medical groups, health insurers and retail and digital health solution providers will operate in a widely expanded AI-enabled world in the next 3-5 years.

Questions

What role will AI and ChatGPT play in hospitals/health systems and other provider settings? Will development of AI systems more powerful than GPT-4 be suspended in response to the appeal? How is your organization preparing for the next wave of AI?

Key Takeaways from Discussion:

  • Generative AI will not take the place of clinician judgment anytime soon. The processes of diagnosing and treating patients, especially those with complex conditions, will not be displaced. However, in primary and preventive health, where standardization is more attainable, it will have a profound impact, perhaps sooner than in other areas.
  • GPT-4 et al. will have a profound impact on the delivery of healthcare and hospital operations, but there are many unknowns and risks associated with use beyond routine tasks that can be standardized through pattern recognition.
  • Continued development of platform solutions using GPT-4 and others in healthcare and other industries will accelerate. The moratorium will not happen; there’s too much at stake for investors and users.
  • Non-profit hospitals and health systems are struggling financially as a result of supply and labor cost increases, declining reimbursement from payers and negative returns on investing activities (non-operating income). Caution is key, so AI-related investment will be conservative in the near term. An exception would be AI solutions that mitigate workforce shortages or reduce administrative documentation costs.

Healthcare Regulators’ Outdated Thinking Will Cost American Lives

In a matter of months, ChatGPT has radically altered our nation’s views on artificial intelligence—uprooting old assumptions about AI’s limitations and kicking the door wide open for exciting new possibilities.

One aspect of our lives sure to be touched by this rapid acceleration in technology is U.S. healthcare. But the extent to which tech will improve our nation’s health depends on whether regulators embrace the future or cling stubbornly to the past.

Why our minds live in the past

In the 1760s, Scottish inventor James Watt revolutionized the steam engine, marking an extraordinary leap in engineering. But Watt knew that if he wanted to sell his innovation, he needed to convince potential buyers of its unprecedented power. With a stroke of marketing genius, he began telling people that his steam engine could replace 10 cart-pulling horses. People at the time immediately understood that a machine with 10 “horsepower” must be a worthy investment. Watt’s sales took off. And his long-since-antiquated measurement of power remains with us today.

Even now, people struggle to grasp the breakthrough potential of revolutionary innovations. When faced with a new and powerful technology, people feel more comfortable with what they know. Rather than embracing an entirely different mindset, they remain stuck in the past, making it difficult to harness the full potential of future opportunities.

Too often, that’s exactly how U.S. government agencies go about regulating advances in healthcare. In medicine, the consequences of applying 20th-century assumptions to 21st-century innovations prove fatal.  

Here are three ways regulators do damage by failing to keep up with the times:

1. Devaluing ‘virtual visits’

Established in 1973 to combat drug abuse, the Drug Enforcement Administration (DEA) now faces an opioid epidemic that claims more than 100,000 lives a year.

One solution to this deadly problem, according to public health advocates, combines modern information technology with an effective form of addiction treatment.

Thanks to the Covid-19 Public Health Emergency (PHE) declaration, telehealth use skyrocketed during the pandemic. Out of necessity, regulators relaxed previous telemedicine restrictions, allowing more patients to access medical services remotely while enabling doctors to prescribe controlled substances, including buprenorphine, via video visits.

For people battling drug addiction, buprenorphine is a “Goldilocks” medication with just enough efficacy to prevent withdrawal yet not enough to result in severe respiratory depression, overdose or death. Research from the National Institutes of Health (NIH) found that buprenorphine improves retention in drug-treatment programs. It has helped thousands of people reclaim their lives.

But because this opiate produces slight euphoria, drug officials worry it could be abused and that telemedicine prescribing will make it easier for bad actors to push buprenorphine onto the black market. Now with the PHE declaration set to expire, the DEA has laid out plans to limit telehealth prescribing of buprenorphine.

The proposed regulations would let doctors prescribe a 30-day course of the drug via telehealth, but would mandate an in-person visit with a doctor for any renewals. The agency believes this will “prevent the online overprescribing of controlled medications that can cause harm.”

The DEA’s assumption that an in-person visit is safer and less corruptible than a virtual visit is outdated and contradicted by clinical research. A recent NIH study, for example, found that overdose deaths involving buprenorphine did not proportionally increase during the pandemic. Likewise, a Harvard study found that telemedicine is as effective as in-person care for opioid use disorder.

Of course, regulators need to monitor the prescribing frequency of controlled substances and conduct audits to weed out fraud. Furthermore, they should demand that prescribing physicians receive proper training and document their patient-education efforts concerning medical risks.

But these requirements should apply to all clinicians, regardless of whether the patient is physically present. After all, abuses can happen as easily and readily in person as online.

The DEA needs to move its mindset into the 21st century because our nation’s outdated approach to addiction treatment isn’t working. More than 100,000 deaths a year prove it.

2. Restricting an unrestrainable new technology

Technologists predict that generative AI, like ChatGPT, will transform American life, drastically altering our economy and workforce. I’m confident it also will transform medicine, giving patients greater (a) access to medical information and (b) control over their own health.

So far, the rate of progress in generative AI has been staggering. Just months ago, the original version of ChatGPT passed the U.S. medical licensing exam, but barely. Weeks ago, Google’s Med-PaLM 2 achieved an impressive 85% on the same exam, placing it in the realm of expert doctors.

With great technological capability comes great fear, especially from U.S. regulators. At the Health Datapalooza conference in February, Food and Drug Administration (FDA) Commissioner Robert M. Califf emphasized his concern when he pointed out that ChatGPT and similar technologies can either aid or exacerbate the challenge of helping patients make informed health decisions.

Worried comments also came from the Federal Trade Commission, thanks in part to a letter signed by billionaires like Elon Musk and Steve Wozniak. They posited that the new technology “poses profound risks to society and humanity.” In response, FTC chair Lina Khan pledged to pay close attention to the growing AI industry.

Attempts to regulate generative AI will almost certainly happen and likely soon. But agencies will struggle to accomplish it.

To date, U.S. regulators have evaluated hundreds of AI applications as medical devices or “digital therapeutics.” In 2022, for example, Apple received premarket clearance from the FDA for a new smartwatch feature that lets users know if their heart rhythm shows signs of atrial fibrillation (AFib). For each AI product that undergoes FDA scrutiny, the agency tests the embedded algorithms for effectiveness and safety, similar to a medication.

ChatGPT is different. It’s not a medical device or digital therapy programmed to address a specific or measurable medical problem. And it doesn’t contain a simple algorithm that regulators can evaluate for efficacy and safety. The reality is that any GPT-4 user today can type in a query and receive detailed medical advice in seconds. ChatGPT is a broad facilitator of information, not a narrowly focused, clinical tool. Therefore, it defies the types of analysis regulators traditionally apply.

In that way, ChatGPT is similar to the telephone. Regulators can evaluate the safety of a smartphone, measuring how much electromagnetic radiation it gives off or whether the device itself poses a fire hazard. But they can’t regulate the safety of how people use it. Friends can and often do give each other terrible advice by phone.

Therefore, aside from blocking ChatGPT outright, there’s no way to stop individuals from asking it for a diagnosis, medication recommendation or help with deciding on alternative medical treatments. And while the technology has been temporarily banned in Italy, that’s unlikely to happen in the United States.  

If we want to ensure the safety of ChatGPT, improve health and save lives, government agencies should focus on educating Americans on this technology rather than trying to restrict its usage.

3. Preventing doctors from helping more people

Doctors can apply for a medical license in any state, but the process is time-consuming and laborious. As a result, most physicians are licensed only where they live, depriving patients in the other 49 states of access to their medical expertise.

The reason for this approach dates back 240 years. When the Bill of Rights passed in 1791, the practice of medicine varied greatly by geography. So, states were granted the right to license physicians through their state boards.

In 1910, the Flexner report highlighted widespread failures of medical education and recommended a standard curriculum for all doctors. This process of standardization culminated in 1992 when all U.S. physicians were required to take and pass a set of national medical exams. And yet, 30 years later, fully trained and board-certified doctors still have to apply for a medical license in every state where they wish to practice medicine. Without a second license, a doctor in Chicago can’t provide care to a patient across a state border in Indiana, even if separated by mere miles.

The PHE declaration did allow doctors to provide virtual care to patients in other states. However, with that policy expiring in May, physicians will again face overly restrictive regulations held over from centuries past.

Given the advances in medicine, the availability of technology and growing shortage of skilled clinicians, these regulations are illogical and problematic. Heart attacks, strokes and cancer know no geographic boundaries. With air travel, people can contract medical illnesses far from home. Regulators could safely implement a common national licensing process—assuming states would recognize it and grant a medical license to any doctor without a history of professional impropriety.

But that’s unlikely to happen. The reason is financial. Licensing fees support state medical boards. And state-based restrictions limit competition from out of state, allowing local providers to drive up prices.

To address healthcare’s quality, access and affordability challenges, we need to achieve economies of scale. That would be best done by allowing all doctors in the U.S. to join one care-delivery pool, rather than retaining 50 separate ones.

Doing so would allow for a national mental-health service, giving people in underserved areas access to trained therapists and helping reduce the 46,000 suicides that take place in America each year.

Regulators need to catch up

Medicine is a complex profession in which errors kill people. That’s why we need healthcare regulations. Doctors and nurses need to be well trained, so that life-threatening medications can’t fall into the hands of people who will misuse them.

But when outdated thinking leads to deaths from drug overdoses, prevents patients from improving their own health and limits access to the nation’s best medical expertise, regulators need to recognize the harm they’re doing.  

Healthcare is changing as technology races ahead. Regulators need to catch up.