How cognitive biases impact healthcare decisions

https://www.linkedin.com/pulse/how-cognitive-biases-impact-healthcare-decisions-robert-pearl-m-d–ti5qc/?trackingId=eQnZ0um3TKSzV0NYFyrKXw%3D%3D

Day one of the healthcare strategy course I teach in the Stanford Graduate School of Business begins with this question: “Who here receives excellent medical care?”

Most of the students raise their hands confidently. I look around the room at some of the most brilliant young minds in business, finance and investing—all of them accustomed to making quick yet informed decisions. They can calculate billion-dollar deals to the second decimal point in their heads. They pride themselves on being data driven and discerning.

Then I ask, “How do you know you receive excellent care?”

The hands slowly come down and the room falls silent. In that moment, it’s clear these future business leaders have reached a conclusion without a shred of reliable data or evidence.

Not one of them knows how often their doctors make diagnostic or technical errors. They can’t say whether their health system’s rate of infection or medical error is high, average or low.

What’s happening is that they’re conflating service with clinical quality. They assume a doctor’s bedside manner correlates with excellent outcomes.

These often false assumptions are part of a multi-millennia-long relationship wherein patients are reluctant to ask doctors uncomfortable but important questions: “How many times have you performed this procedure over the past year and how many patients experienced complications?” “What’s the worst outcome a patient of yours had during and after surgery?”

The answers are objective predictors of clinical excellence. Without them, patients are likely to fall victim to the halo effect—a cognitive bias where positive traits in one area (like friendliness) are assumed to carry over to another (medical expertise).

This is just one example of the many subconscious biases that distort our perceptions and decision-making.

From the waiting room to the operating table, these biases impact both patients and healthcare professionals with negative consequences. Acknowledging these biases isn’t just an academic exercise. It’s a crucial step toward improving healthcare outcomes.

Here are four more cognitive errors that cause harm in healthcare today, along with my thoughts on what can be done to mitigate their effects:

Availability bias

You’ve probably heard of the “hot hand” in Vegas—a lucky streak at the craps table that draws big cheers from onlookers. But luck is an illusion, a product of our natural tendency to see patterns where none exist. Nothing about the dice changes based on the last throw or the individual shaking them.

This mental error, first described as “availability bias” by psychologists Amos Tversky and Daniel Kahneman, was part of groundbreaking research in the 1970s and ‘80s in the field of behavioral economics and cognitive psychology. The duo challenged the prevailing assumption that humans make rational choices.

Availability bias, despite being identified nearly 50 years ago, still plagues human decision making today, even in what should be the most scientific of places: the doctor’s office.

Physicians frequently recommend a treatment plan based on the last patient they saw, rather than considering the overall probability that it will work. If a medication has a 10% complication rate, it means that 1 in 10 people will experience an adverse event. Yet, if a doctor’s most recent patient had a negative reaction, the physician is less likely to prescribe that medication to the next patient, even when it is the best option, statistically.
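To see the cost of that bias, consider a back-of-the-envelope simulation. Only the 10% complication rate comes from the example above; the benefit rates for the preferred and backup drugs are made-up numbers for illustration:

```python
import random

random.seed(0)

COMPLICATION_RATE = 0.10  # the medication's 10% complication rate from the text
BEST_DRUG_BENEFIT = 0.70  # assumed: chance the best drug helps a patient
BACKUP_BENEFIT = 0.50     # assumed: chance a second-choice drug helps
N_PATIENTS = 100_000

def fraction_helped(recency_biased: bool) -> float:
    """Simulate a prescribing policy and return the fraction of patients helped."""
    helped = 0
    last_had_complication = False
    for _ in range(N_PATIENTS):
        if recency_biased and last_had_complication:
            # After one bad outcome, the biased doctor switches to the weaker
            # drug, even though the base rates haven't changed.
            helped += random.random() < BACKUP_BENEFIT
            last_had_complication = False
        else:
            helped += random.random() < BEST_DRUG_BENEFIT
            last_had_complication = random.random() < COMPLICATION_RATE
    return helped / N_PATIENTS

print(f"follows the base rates: {fraction_helped(False):.3f}")
print(f"recency-biased:         {fraction_helped(True):.3f}")
```

Run over enough patients, the recency-biased policy helps measurably fewer people, even though nothing about the medication changed after one bad outcome.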

Confirmation bias

Have you ever had a “gut feeling” and stuck with it, even when confronted with evidence it was wrong? That’s confirmation bias. It skews our perceptions and interpretations, leading us to embrace information that aligns with our initial beliefs—and causing us to discount all indications to the contrary.

This tendency is heightened in a medical system where physicians face intense time pressures. Studies indicate that doctors, on average, interrupt patients within the first 11 seconds of being asked “What brings you here today?” With scant information to go on, doctors quickly form a hypothesis, using additional questions, diagnostic testing and medical-record information to support their first impression.

Doctors are well trained, and their assumptions prove correct more often than not. Nevertheless, hasty decisions can be dangerous. Each year in the United States, an estimated 371,000 patients die from misdiagnoses.

Patients aren’t immune to confirmation bias, either. People with a serious medical problem commonly seek a benign explanation and find evidence to justify it. When this happens, heart attacks are dismissed as indigestion, leading to delays in diagnosis and treatment.

Framing effect

In 1981, Tversky and Kahneman asked subjects to help the nation prepare for a hypothetical viral outbreak. They explained that if the disease was left untreated, it would kill 600 people. Participants in one group were told that an available treatment, although risky, would save 200 lives. The other group was told that, despite the treatment, 400 people would die. Although both descriptions lead to the same outcome—200 people surviving and 400 dying—the first group favored the treatment, whereas the second group largely opposed it.

The study illustrates how differently people can react to identical scenarios based on how the information is framed. Researchers have discovered that the human mind experiences loss far more powerfully than equivalent gains. So, patients will consent to a chemotherapy regimen that has a 20% chance of cure but decline the same treatment when told it has an 80% likelihood of failure.

Self-serving bias

The best parts about being a doctor are saving and improving lives. But there are other perks, as well.

Pharmaceutical and medical-device companies aggressively reward physicians who prescribe and recommend their products. Whether it’s a sponsored dinner at a Michelin-starred restaurant or a pizza delivered to the office staff, the intention of the reward is always the same: to sway the decisions of doctors.

And yet, physicians swear that no meal or gift will influence their prescribing habits. And they believe it because of “self-serving bias.”

In the end, it’s patients who pay the price. Rather than receiving a generic prescription for a fraction of the cost, patients end up paying more for a brand-name drug because their doctor—at a subconscious level—doesn’t want to lose out on the perks.

Thanks to the “Sunshine Act,” patients can check sites like ProPublica’s Dollars for Docs to find out whether their healthcare professional is receiving drug- or device-company money (and how much).

Reducing subconscious bias

These cognitive biases may not be the reason U.S. life expectancy has stagnated for the past 20 years, but they stand in the way of positive change. And they contribute to the medical errors that harm patients.

A study published this month in JAMA Internal Medicine found that 1 in 4 hospital patients who either died or were transferred to the ICU had been affected by a diagnostic mistake. Knowing this, you might think cognitive biases would be a leading subject at annual medical conferences and a topic of grave concern among healthcare professionals. You’d be wrong. Inside the culture of medicine, these failures are commonly ignored.

The recent story of an economics professor offers one possible solution. Upon experiencing abdominal pain, he went to a highly respected university hospital. After laboratory testing and observation, his attending doctor concluded the problem wasn’t serious—a gallstone at worst. He told the patient to go home and return for an outpatient workup.

The professor wasn’t convinced. Fearing that the medical problem was severe, the professor logged onto ChatGPT (a generative AI technology) and entered his symptoms. The application concluded that there was a 40% chance of a ruptured appendix. The doctor reluctantly ordered an MRI, which confirmed ChatGPT’s diagnosis.

Future generations of generative AI, pretrained with data from people’s electronic health records and fed with information about cognitive biases, will be able to spot these types of errors when they occur.

Deviation from standard practice will result in alerts, bringing cognitive errors to consciousness, thus reducing the likelihood of misdiagnosis and medical error. Rather than resisting this kind of objective second opinion, I hope clinicians will embrace it. The opportunity to prevent harm would constitute a major advance in medical care.

How generative AI will change the doctor-patient relationship

https://www.linkedin.com/pulse/how-generative-ai-change-doctor-patient-relationship-pearl-m-d-/?trackingId=sNn87WorSt%2BPg3F0SxKUIw%3D%3D

After decades of “doctor knows best,” the traditional physician-patient relationship is on the verge of a monumental shift. Generative AI tools like OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Bing are poised to give people significantly more power and control—not just over their personal lives and professional tasks, but over their own medical health, as well.

As these tools become exponentially smarter, safer and more reliable (an estimated 32 times more powerful in the next five years), everyday Americans will gain access to unparalleled medical expertise—doled out in easily understandable terms, at any time, from any place.

Already, Google’s Med-PaLM 2 has scored an expert-level 86.5% on U.S. medical licensing exam questions, while other AI tools have matched the skill and accuracy of average doctors in diagnosing complex medical diseases.

Soon, AI tools will be able to give patients detailed information about their specific medical problems by integrating with health monitors and electronic medical records (such projects are already underway at Oracle/Cerner and Epic). In time, people will be able to self-diagnose and manage their own diseases as accurately and competently as today’s clinicians.

This newfound expertise will shake the very foundation of clinical practice.

Although public health experts have long touted the concept of clinicians and patients working together through shared decision-making, this rarely happens in practice. Generative AI will alter that reality.

Building on part one of this article, which explained why generative AI constitutes a quantum leap ahead of all the tech that came before it, part two provides a blueprint for strengthening the doctor-patient alliance in the era of generative AI.

Patients Today: Sick And Confused

To understand how generative AI will impact the practice of medicine, it’s best to look closer at the current doctor-patient dynamic.

The relationship has undergone significant evolution. A century ago, patients and doctors held close, enduring relationships built on trust and a deep understanding of the patient’s individual needs. These bonds carried a strong sense of personal connection: doctors had the time to listen to their patients’ concerns and provided not only medical treatment but also emotional support.

Today, the doctor-patient relationship remains vitally important, but it has undergone several meaningful changes. While medical advancements have greatly expanded the possibilities for diagnosis and treatment, the relationship itself has grown less trusting and more transactional. The average visit lasts just 15 minutes, barely enough time to address the patient’s current medical concerns, and the computer holding the electronic health record sits, quite literally, between doctor and patient. Squeezed by time constraints, administrative burdens and a focus on efficiency, patients feel rushed and find their medical care increasingly impersonal.

But throughout these changes, one thing has remained constant. The doctor-patient relationship, which dates back more than five millennia, has always existed on an uneven playing field, with patients forced to rely almost entirely on doctors to understand their diseases and what to do about them.

Though patients can and do access the internet for a list of possible diagnoses and treatment options, that’s not the same as possessing medical expertise. In fact, sorting through dozens of online sources—often with conflicting, inaccurate, outdated and self-serving information—proves more confusing than clarifying. Nowhere can web-surfers find personalized and credible advice based on their age, medical history, genetic makeup, current medications and laboratory results.

What’s needed now is a modern doctor-patient relationship, one that is strong enough to meet the demands of medicine today and restore the vital, personal and emotional connections of the past.

Patients Tomorrow: Self-Diagnosing And Confident

In the future, generative AI will alter the doctor-patient dynamic by leveling the playing field.

Already, consumer AI tools can equip users with not just knowledge, but expertise. They allow the average person to create artistic masterpieces, produce hit songs and write code with unimagined sophistication. The next generations of these tools will offer similar abilities to patients, even those without a background in science or medicine.

Like a digitized second opinion, generative AI will shrink the knowledge gap between doctors and patients in ways that search engines can’t. By accessing millions of medical texts, peer-reviewed journals and scientific articles, ChatGPT will deliver accurate and unbiased medical expertise in layman’s language. And unlike internet sources, generative AI tools don’t have built-in financial incentives or advertising models that might skew responses.

To help patients and doctors navigate the upcoming era of generative AI, here’s a model for the future of medical practice based on proven approaches in education:  

Introducing The ‘Flipped Healthcare’ Model

The “flipped classroom” can be traced back nearly four decades, but it was popularized in the United States in the early 2000s by the Khan Academy in Northern California.

Students begin the learning process by watching videos and engaging with interactive tools online rather than sitting through traditional lectures. This pre-class preparation (or “homework in advance”) allows people to learn at their own pace. Moreover, it enhances classroom discussions, letting teachers and students dive much deeper into topics than they ever could before. Indeed, students spend time in class applying knowledge and collaborating to solve problems—not merely listening and taking notes.  

The introduction of generative AI opens the door to a similar approach in healthcare. Here’s how that might work in practice:

  1. Pre-Consultation Learning: Before visiting a doctor, patients would use generative AI tools to understand their symptoms or medical conditions. This foundational knowledge would accelerate the diagnostic process and enhance patient understanding. Even in the absence of advanced diagnostic testing (X-rays or bloodwork), this pre-consultation phase allows the patient to understand the questions their clinicians will ask and the steps they will take.
  2. In-Depth Human Interactions: With the patient’s knowledge base already established, consultations will dive deep into proactive health strategies and/or long-term chronic-disease management solutions, rather than having to start at square one. This approach maximizes the time patients and clinicians spend together. It also addresses the reality that at least 50% of patients leave the doctor’s office unsure of what they’ve been told.
  3. Home Monitoring: For the 60% of American patients living with chronic diseases, generative AI combined with wearable monitors will provide real-time feedback, thereby optimizing clinical outcomes. These patients, instead of going in for periodic visits (every three to six months), will obtain daily medical analysis and insights. And in cases where generative AI spots trouble (e.g., health data deviates from the doctor’s expectations; see the sketch after this list), the provider will be able to update medications immediately. And when the patient is doing well, physicians can cancel follow-up visits, eliminating wasted time for all.
  4. Hospital At Home: Inpatient (hospital) care accounts for 30% of all healthcare costs. By continuously monitoring patients with medical problems like mild pneumonia and controllable bacterial infections, generative AI (combined with home monitoring devices and telemedicine access) would allow individuals to be treated in the comfort of their home, safely and more affordably than today.
  5. Lifestyle Medicine: Generative AI would support preventive health measures and lifestyle changes, reducing the overall demand for in-person clinical care. Studies confirm that focusing on diet, exercise and recommended screenings can reduce the deadliest complications of chronic disease (heart attack, stroke, cancer) by 30% or more. Decreasing the need for intensive procedures is the best way to make healthcare affordable and address the projected shortage of doctors and nurses in the future.
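As a sketch of how the monitoring alert in step 3 might work, here is a minimal, hypothetical example. The patient IDs, metrics and expected ranges are invented, and a real system would draw the clinician’s expectations from the EHR rather than a hard-coded table:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    patient_id: str
    metric: str    # e.g., "systolic_bp" or "fasting_glucose"
    value: float

# Hypothetical per-patient ranges a clinician might set in advance.
EXPECTED_RANGES = {
    ("patient-42", "systolic_bp"): (100.0, 135.0),
    ("patient-42", "fasting_glucose"): (70.0, 130.0),
}

def check_reading(reading: Reading) -> Optional[str]:
    """Flag a wearable reading that falls outside the clinician's expected range."""
    bounds = EXPECTED_RANGES.get((reading.patient_id, reading.metric))
    if bounds is None:
        return None  # no expectation on file; nothing to compare against
    low, high = bounds
    if not (low <= reading.value <= high):
        return (f"ALERT {reading.patient_id}: {reading.metric}={reading.value} "
                f"outside expected range [{low}, {high}]")
    return None

print(check_reading(Reading("patient-42", "systolic_bp", 162.0)))
```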

The Future: Collaborative Care For Superior Outcomes

The U.S. healthcare model often leaves patients feeling frustrated and overwhelmed. Meanwhile, time constraints placed on doctors lead to rushed consultations and misdiagnoses, which cause an estimated 800,000 deaths and disabilities annually.

The “flipped” approach, inspired by the Khan Academy, leverages the patient expertise that generative AI will create. Following this model will free up clinician time to make the most of every visit. Implementing this blueprint will require improvements in AI technology and an evolution of medical culture, but it offers the opportunity to make the doctor-patient relationship more collaborative and create empowered patients who will improve their health.

Talk with educators at the Khan Academy, and they will tell you how their innovative model results in better-educated students. They’ll also tell you how much more satisfied teachers and students are compared to those working in the traditional educational system. The same can be true for American medicine.

The AI-empowered patient is coming. Are doctors ready?

https://www.linkedin.com/pulse/ai-empowered-patient-coming-doctors-ready-robert-pearl-m-d-/

Artificial intelligence (AI) has long been heralded as an emerging force in medicine. Since the early 2000s, promises of a technological transformation in healthcare have echoed through the halls of hospitals and at medical meetings.

But despite 20-plus years of hype, AI’s impact on medical practice and America’s health remains negligible (with minor exceptions in areas like radiological imaging and predictive analytics).

As such, it’s understandable that physicians and healthcare administrators are skeptical about the benefits that generative AI tools like ChatGPT will provide.

They shouldn’t be. This next generation of AI is unlike any technology that has come before. 

The launch of ChatGPT in late 2022 marked the dawn of a new era. This “large language model,” developed by OpenAI, first gained widespread attention by helping users write better emails and term papers. Within months, a host of generative AI products sprang up from Google, Microsoft, Amazon and others. These tools are quickly becoming more than mere writing assistants.

In time, they will radically change healthcare, empower patients and redefine the doctor-patient relationship. To make sense of this bold vision for the future, this two-part article explores:

  1. The massive differences between generative AI and prior artificial intelligences
  2. How, for the first time in history, a technological innovation will democratize not just knowledge, but also clinical expertise, making medical prowess no longer the sole domain of healthcare professionals.

To understand why this time is different, it’s helpful to compare the limited power of the two earliest generations of AI against the near-limitless potential of the latest version.

Generation 1: Rules-Based Systems And The Dawn Of AI In Healthcare

The latter half of the 20th century ushered in the first generation of artificial intelligence, known as rule-based AI.

Programmed by computer engineers, this type of AI relies on a series of human-generated instructions (rules), enabling the technology to solve basic problems.

In many ways, the rule-based approach resembles a traditional medical-school pedagogy where medical students are taught hundreds of “algorithms” that help them translate a patient’s symptoms into a diagnosis.

These decision-making algorithms resemble a tree, beginning with a trunk (the patient’s chief complaint) and branching out from there. For example, if a patient complains of a severe cough, the doctor first assesses whether fever is present. If yes, the doctor moves to one set of questions and, if not, to a different set. Assuming the patient has been febrile (with fever), the next question is whether the patient’s sputum is normal or discolored. And once again, this leads to the next subdivision. Ultimately each end branch contains only a single diagnosis, which can range from bacterial, fungal or viral pneumonia to cancer, heart failure or a dozen other pulmonary diseases.
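A minimal sketch of the cough branch described above shows how little machinery rule-based AI requires. The branch points and diagnosis labels here are illustrative only, not clinical guidance:

```python
# A toy rule-based branching tree for the severe-cough example above.
# Production expert systems encoded thousands of such rules.

def diagnose_severe_cough(has_fever: bool, sputum_discolored: bool) -> str:
    if has_fever:
        if sputum_discolored:
            return "suspect bacterial pneumonia"
        return "suspect viral or fungal pneumonia"
    # The afebrile branch would subdivide further in a real system
    # (cancer, heart failure, other pulmonary diseases).
    return "afebrile cough: pursue non-infectious workup"

print(diagnose_severe_cough(has_fever=True, sputum_discolored=True))
```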

This first generation of AI could rapidly process data, sorting quickly through the entire branching tree. And in circumstances where the algorithm could accurately account for all possible outcomes, rule-based AI proved more efficient than doctors.

But patient problems are rarely so easy to analyze and categorize. Often, it’s difficult to separate one set of diseases from another at each branch point. As a result, this earliest form of AI wasn’t as accurate as doctors who combined medical science with their own intuition and experience. And because of its limitations, rule-based AI was rarely used in clinical practice.

Generation 2: Narrow AI And The Rise Of Specialized Systems

As the 21st century dawned, the second era of AI began. The introduction of neural networks, mimicking the human brain’s structure, paved the way for deep learning.

Narrow AI functioned very differently from its predecessor. Rather than following pre-defined rules supplied by researchers, second-generation systems feasted on massive data sets, using them to discern patterns that the human mind, alone, could not.

In one example, researchers gave a narrow AI system thousands of mammograms, half showing malignant cancer and half benign. The model quickly identified dozens of differences in the shape, density and shade of the radiological images, assigning each a weight that reflected its contribution to the probability of malignancy. Importantly, this kind of AI wasn’t relying on heuristics (a few rules of thumb) the way humans do, but on subtle variations between the malignant and normal exams that neither the radiologists nor the software designers knew existed.
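The pattern-learning step can be sketched in a few lines. The example below substitutes a simple logistic regression on synthetic “image features” for the deep neural networks actually used; the feature count, the distributions and the 0.3 shift are all invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for image-derived features (shape, density, shade, ...);
# malignant cases are drawn from a slightly shifted distribution.
n, n_features = 2000, 20
X = np.vstack([
    rng.normal(0.0, 1.0, size=(n // 2, n_features)),  # benign
    rng.normal(0.3, 1.0, size=(n // 2, n_features)),  # malignant
])
y = np.array([0] * (n // 2) + [1] * (n // 2))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
# Each learned coefficient weights one feature's contribution to the
# predicted probability of malignancy.
print("most influential features:", np.argsort(np.abs(model.coef_[0]))[-3:])
```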

In contrast to rule-based AI, these narrow AI tools proved superior to the doctor’s intuition in terms of diagnostic accuracy. Still, narrow AI showed serious limitations. For one, each application is task specific: a system trained to read mammograms can’t interpret brain scans or chest X-rays.

But the biggest limitation of narrow AI is that the system is only as good as the data it’s trained on. A glaring example of that weakness emerged when United Healthcare relied on narrow AI to identify its sickest patients and give them additional healthcare services.

In filtering through the data, researchers later discovered the AI had made a fatal assumption: patients who received less medical care were categorized as healthier than patients who received more. The AI failed to recognize that less treatment is not always the result of better health; it can also be the result of implicit human bias.

Indeed, when researchers went back and reviewed the outcomes, they found Black patients were being significantly undertreated and were, therefore, underrepresented in the group selected for additional medical services.

Media headlines proclaimed, “Healthcare algorithm has racial bias,” but it wasn’t the algorithm that had discriminated against Black patients. It was the result of physicians providing Black patients with insufficient and inequitable treatment. In other words, the problem was the humans, not narrow AI.
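The proxy-label failure is easy to reproduce in a toy model. In the sketch below, every number (the group split, the gamma-distributed need, the assumed 40% undertreatment penalty) is invented for illustration; this is not the actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# true_need is what the program should target; observed cost is the proxy
# label the system trained on. Group B receives less care for the same need
# (an assumed 40% undertreatment penalty), so its costs run lower.
n = 10_000
group_b = rng.random(n) < 0.5
true_need = rng.gamma(shape=2.0, scale=1.0, size=n)
access_penalty = np.where(group_b, 0.6, 1.0)
observed_cost = true_need * access_penalty

# Enroll the top 10% of patients, first by the cost proxy, then by true need.
for label, score in [("cost proxy", observed_cost), ("true need", true_need)]:
    selected = score >= np.quantile(score, 0.90)
    print(f"selecting by {label}: share of group B enrolled = "
          f"{group_b[selected].mean():.2f}")
```

Selecting on the cost proxy enrolls far fewer group B patients than selecting on true need, reproducing in miniature the disparity the researchers found.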

Generation 3: The Future Is Generative

Throughout history, humankind has produced a few innovations (printing press, internet, iPhone) that transformed society by democratizing knowledge—making information easier to access for everyone, not just the wealthy elite.

Now, generative AI is poised to go one step further, giving every individual access to not only knowledge but, more importantly, expertise as well.

Already, the latest AI tools allow users to create a stunning work of art in the style of Rembrandt without ever having taken a painting class. With large language models, people can record a hit song, even if they’ve never played a musical instrument. Individuals can write computer code, producing sophisticated websites and apps, despite never having enrolled in an IT course.

Future generations of generative AI will do the same in medicine, allowing people who never attended medical school to diagnose diseases and create a treatment plan as well as any clinician.

Already, one generative AI tool (Google’s Med-PaLM 2) passed the physician licensing exam with an expert level score. Another generative AI toolset responded to patient questions with advice that bested doctors in both accuracy and empathy. These tools can now write medical notes that are indistinguishable from the entries that physicians create and match residents’ ability to make complex diagnoses on difficult cases.

Granted, current versions require physician oversight and are nowhere close to replacing doctors. But at their present rate of exponential growth, these applications are expected to become at least 30 times more powerful in the next five years. As a result, they will soon empower patients in ways that were unimaginable even a year ago.

Unlike their predecessors, these models are pre-trained on datasets that encompass the near-totality of publicly available information—pulling from medical textbooks, journal articles, open-source platforms and the internet. In the not-too-distant future, these tools will be securely connected to electronic health records in hospitals, as well as to patient monitoring devices in the home. As generative AI feeds on this wealth of data, its clinical acumen will skyrocket.

Within the next five to 10 years, medical expertise will no longer be the sole domain of trained clinicians. Future generations of ChatGPT and its peers will put medical expertise in the hands of all Americans, radically altering the relationship between doctors and patients.

Whether physicians embrace this development or resist is uncertain. What is clear is the opportunity for improvement in American medicine. Today, an estimated 400,000 people die annually from misdiagnoses, 250,000 from medical errors, and 1.7 million from mostly preventable chronic diseases and their complications.

In the next article, I’ll offer a blueprint for Americans as they grapple with redefining the doctor-patient relationship in the context of generative AI. To reverse the healthcare failures of today, the future of medicine will have to belong to the empowered patient and the tech-savvy physician. The combination will prove vastly superior to either alone.

Generative AI and its Future in Health Delivery

Context

Although artificial intelligence has been around for 50 years and has experienced several starts and stops, the last 5 to 10 years have seen a considerable uptick in adoption, especially in healthcare. It is now embedded in machine-learning applications that enable faster and more precise imaging studies, in clinical decision-support tools within electronic medical record systems, and more. In recent months, its potential to play a bigger role, possibly even supplanting physician judgment, has received added attention.

In November 2022, the announcement of OpenAI’s ChatGPT platform drew widespread attention, with speculation that it might displace clinicians in diagnosing and planning treatment for patients. On March 22, 2023, tech moguls Elon Musk, Steve Wozniak and Andrew Yang called for a 6-month moratorium on generative AI, stating: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” (1) To date, more than 13,000 people have signed on to their appeal. Per Lumeris CTO Jean-Claude Saghbini: “Putting aside our own opinions as to whether or not a moratorium should be implemented, our recent experience of the last three years in the inability to have effective cross-governmental alignment on policy to fight the COVID pandemic suggests that global alignment on AI policy will be impossible.”

There’s widespread belief that generative AI and GPT-4 are game changers in healthcare.

How, what, when and how much ($$$) are the big questions. The near-term issues associated with implementation (data security, workforce usefulness, regulation, investment costs) are expected to be resolved eventually. Thus, it is highly likely that health systems, medical groups, health insurers, and retail and digital health solution providers will operate in a widely expanded AI-enabled world in the next 3-5 years.

Questions

What role will AI and ChatGPT play in hospitals/health systems and other provider settings? Will development of AI systems more powerful than GPT-4 be suspended in response to the appeal? How is your organization preparing for the next wave of AI?

Key Takeaways from Discussion:

  • Generative AI will not take the place of clinician judgment anytime soon. The processes of diagnosing and treating patients, especially complex conditions, will not be displaced. However, in primary and preventive health, where standardization is more attainable, it will have a profound impact, perhaps sooner than in other areas.
  • GPT-4 and its peers will have a profound impact on the delivery of healthcare and hospital operations, but there are many unknowns and risks associated with their use beyond routine tasks that can be standardized through pattern recognition.
  • Continued development of platform solutions using GPT-4 and others in healthcare and other industries will accelerate. The moratorium will not happen; there is too much at stake for investors and users.
  • Non-profit hospitals and health systems are struggling financially as a result of supply and labor cost increases, declining reimbursement from payers and negative returns on investing activities (non-operating income). Caution is key, so AI-related investing will be conservative in the near term. An exception would be AI solutions that mitigate workforce shortages or reduce administrative costs for documentation.