Medical malpractice in the age of AI: Who will bear the blame?

https://www.linkedin.com/pulse/medical-malpractice-age-ai-who-bear-blame-robert-pearl-m-d–g2dec/

More than two-thirds of U.S. physicians have changed their minds about generative AI and now view it as beneficial to healthcare. But as AI grows more powerful and prevalent in medicine, apprehensions remain high among medical professionals.

For the last 18 months, I’ve examined the potential uses and misuses of generative AI in medicine, research that culminated in the new book ChatGPT, MD: How AI-Empowered Patients & Doctors Can Take Back Control of American Medicine. Over that time, I’ve seen clinicians’ concerns evolve from worries about AI’s reliability and, consequently, patient safety to a new set of fears: Who will be held liable when something goes wrong?

From safety to suits: A new AI fear emerges

Technology experts have grown increasingly certain that next-gen AI technologies will prove vastly safer and more reliable for patients, especially under expert human oversight. As evidence, recall that Google’s first medical AI model, Med-PaLM, achieved a mere “passing score” (>60%) on the U.S. medical licensing exam in late 2022. Five months later, its successor, Med-PaLM 2, scored at an “expert” doctor level (85%).

Since then, numerous studies have shown that generative AI increasingly outperforms medical professionals in various tasks. These include diagnosis, treatment decisions, data analysis and even expressing empathy.

Despite these technological advancements, errors in medicine can and will occur, regardless of whether the expertise comes from human clinicians or advanced AI.

Fault lines: Navigating AI’s legal terrain

Legal experts anticipate that as AI tools become more integrated into healthcare, determining liability will come down to whether errors result from AI decisions, human oversight or a combination of both.

For instance, if doctors use a generative AI tool in their offices for diagnosing or treating a patient and something goes wrong, the physician would likely be held liable, especially if it’s deemed that clinical judgment should have overridden the AI’s recommendations.

But the scenarios get more complex when generative AI is used without direct physician oversight. As an example, who is liable when patients rely on generative AI’s medical advice without ever consulting a doctor? Or what if a clinician encourages a patient to use an at-home AI tool for help interpreting wearable device data, and the AI’s advice leads to a serious health issue?

In a working paper, legal scholars from the universities of Michigan, Penn State and Harvard explored these challenges, noting: “Demonstrating the cause of an injury is already often hard in the medical context, where outcomes are frequently probabilistic rather than deterministic. Adding in AI models that are often nonintuitive and sometimes inscrutable will likely make causation even more challenging to demonstrate.”

AI on trial: A legal prognosis from Stanford Law

To get a better handle on the legal risks posed to clinicians when using AI, I spoke with Michelle Mello, professor of law and health policy at Stanford University and lead author of “Understanding Liability Risk from Using Health Care Artificial Intelligence Tools.”

That paper, published earlier this year in the New England Journal of Medicine, is based on hundreds of software-related tort cases and offers insights into the murky waters of AI liability, including how the courts might handle AI-related malpractice cases.

However, Mello pointed out that direct case law on any type of AI model remains “very sparse.” And when it comes to liability implications of using generative AI, specifically, there’s no public record of such cases being litigated.

“At the end of the day, it has almost always been the case that the physician is on the hook when things go wrong in patient care,” she noted. But she added, “As long as physicians are using this to inform a decision with other information and not acting like a robot, deciding purely based on the output, I suspect they’ll have a fairly strong defense against most of the claims that might relate to their use of GPTs.”

She emphasized that while AI tools can improve patient care by enhancing diagnostics and treatment options, providers must be vigilant about the liability these tools could introduce. To minimize risk, she recommends four steps.

  1. Understand the limits of AI tools: AI should not be seen as a replacement for human judgment. Instead, it should be used as a supportive tool to enhance clinical decisions.
  2. Negotiate terms of use: Mello urges healthcare professionals to negotiate terms of service with AI developers like Nvidia, OpenAI, Google and others. This includes pushing back on today’s “incredibly broad” and “irresponsible” disclaimers that deny any liability for medical harm.
  3. Apply risk assessment tools: Mello’s team developed a framework that helps providers assess the liability risks associated with AI. It considers factors like the likelihood of errors, the potential severity of harm and whether human oversight can effectively mitigate these risks (a simplified illustration of this style of scoring follows this list).
  4. Stay informed and prepared: “Over time, as AI use penetrates more deeply into clinical practice, customs will start to change,” Mello noted. Clinicians need to stay informed as the legal landscape shifts.
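
To make the third step concrete, here is a deliberately simplified sketch of how a likelihood-severity-oversight assessment might be scored in code. It illustrates the generic risk-matrix idea only; it is not Mello’s published framework, and the scales, factor names and weighting are assumptions chosen for demonstration.

```python
# Illustrative only: a generic likelihood x severity risk matrix, discounted
# by human oversight. This is NOT Mello's published framework; the scales,
# factor names and weighting are assumptions chosen for demonstration.

from dataclasses import dataclass


@dataclass
class AIToolRisk:
    error_likelihood: int           # 1 (rare) to 5 (frequent)
    harm_severity: int              # 1 (negligible) to 5 (catastrophic)
    oversight_effectiveness: float  # 0.0 (no human review) to 1.0 (every output reviewed)

    def residual_risk(self) -> float:
        """Likelihood x severity, reduced by how reliably human
        oversight can catch an error before it harms a patient."""
        return self.error_likelihood * self.harm_severity * (1.0 - self.oversight_effectiveness)


# Example: a diagnostic aid with moderate error rates and serious potential
# harm, but whose recommendations a clinician reviews before acting.
tool = AIToolRisk(error_likelihood=3, harm_severity=4, oversight_effectiveness=0.8)
print(f"residual risk: {tool.residual_risk():.1f} on a 0-25 scale")  # -> 2.4
```

The design point is simply that the factors interact: a tool with frequent errors and severe potential harm can still carry low residual risk if a clinician reliably reviews every output before acting on it.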

The high cost of hesitation: AI and patient safety

While concerns about the use of generative AI in healthcare are understandable, it’s critical to weigh these fears against the existing flaws in medical practice.

Each year, misdiagnoses lead to 371,000 American deaths, while another 424,000 patients suffer permanent disabilities. Meanwhile, more than 250,000 deaths occur due to avoidable medical errors in the United States. Half a million people die annually from poorly managed chronic diseases, leading to preventable heart attacks, strokes, cancers, kidney failures and amputations.

Our nation’s healthcare professionals don’t have the time in their daily practice to address the totality of patient needs. That’s because the demand for medical services is higher than ever at a time when health insurers—with their restrictive policies and bureaucratic requirements—make it harder than ever to provide excellent care. Generative AI can help.

But it is imperative for policymakers, legal experts and healthcare professionals to collaborate on a framework that promotes the safe and effective use of this technology. As part of their work, they’ll need to address concerns over liability. Ultimately, they must recognize that the risks of not using generative AI to improve care will far outweigh the dangers posed by the technology itself. Only then can our nation reduce the enormous human toll resulting from our current medical failures.

Supreme Court overturns Roe v. Wade, eliminating the constitutional right to an abortion

https://mailchi.mp/3390763e65bb/the-weekly-gist-june-24-2022?e=d1e747d2d8

The 6-3 decision in Dobbs v. Jackson Women’s Health Organization, a challenge to a Mississippi law banning most abortions after 15 weeks, overturns the nearly 50-year precedent providing a constitutional right to abortion. The opinion, little changed from the draft leaked last month, returns most decision-making on abortion to the states. At least 13 states have so-called ‘trigger laws’ in place that will almost immediately make abortion illegal, and another 13 are likely to pass similar laws.

The Gist: In over half of states, existing or new laws will likely prevent pregnant people from accessing critical, evidence-based reproductive healthcare services, including medically safe abortion, miscarriage care, and pregnancy termination for severe fetal anomalies or endangerment of the childbearing parent’s life.

Patients in Texas, which passed one of the strictest abortion laws last year, have already faced challenges obtaining prescriptions for miscarriage and abortion care medications. Many state laws that criminalize providing the procedure put physicians and other medical providers in legal jeopardy.

And as legal experts point out, most malpractice insurance doesn’t protect physicians from damages incurred from criminal charges. 

Moreover, most of these laws were written by legislators with little or no medical expertise, leaving unclear which potentially life-threatening situations, and in what circumstances, merit pregnancy termination. That ambiguity forces physicians to delay lifesaving obstetric care. (Read this NEJM piece to understand what this looks like for doctors and patients in Texas today.) Regardless, today’s decision will lead to increased mortality for pregnant people and those unable to seek safe abortion care.

New York physician charged with manslaughter in patient death


A New York physician has been charged with manslaughter in the second degree and is facing other felonies related to the overdose death of a patient, New York Attorney General Letitia James announced Feb. 19. 

Sudipt Deshmukh, MD, allegedly prescribed a lethal mix of opioids and other controlled substances that resulted in the overdose death of a patient. The physician allegedly knew the patient struggled with addiction.

An indictment, unsealed Feb. 18, alleges that between 2006 and 2016, Dr. Deshmukh ignored his professional responsibilities by prescribing combinations of opioid painkillers and other controlled substances, including hydrocodone, methadone and morphine, without regard to the risk of death associated with the combinations of those drugs.  

Dr. Deshmukh is facing several felony charges, including healthcare fraud, for allegedly causing Medicare to pay for medically unnecessary prescriptions. 

The indictment comes after the attorney general’s office filed a felony complaint against Dr. Deshmukh in August. In 2019, the New York State Office of Professional Medical Conduct found that he committed several counts of misconduct. 

Medical ethics in pandemic times

https://www.axios.com/medical-ethics-clinical-trials-pandemic-eb77f819-76f1-45b0-af8a-cf181bc1607b.html


The COVID-19 pandemic is rife with scientific and medical uncertainty, including debates about the ethics of using experimental treatments.

The big picture: As the global pandemic continues, the tension between providing the best available care for patients and performing trials to determine whether that care is effective risks complicating the medical response.

The big question: Is it unethical to withhold a possible treatment from someone who instead receives a placebo, or to continue to administer that treatment without having collected data on whether it works?

Driving the news: President Trump received an experimental monoclonal antibody cocktail via expanded access or “compassionate use,” which allows someone to access a treatment outside of a clinical trial before it is approved, provided their doctor, the drug company and the FDA agree.

  • Experts say his subsequent claims that the treatment is a cure risk reducing enrollment in clinical trials, flooding companies with requests for access to a limited number of doses and creating false hope for patients.
  • And the president’s treatment raised questions about fairness — would other COVID-19 patients have similar access?
  • “It’s important that we not say the president got access to a beneficial experimental intervention, because we don’t know if it is beneficial or if there are adverse events associated with it,” says Alex John London, director of the Center for Ethics and Policy at Carnegie Mellon University.

He and other ethicists say the president’s treatment highlights a broader question about the ethical obligation doctors have to the science needed to determine whether such treatments are effective.

Between the lines: Offering patients experimental COVID-19 drugs via emergency use authorizations, expanded access programs and compassionate use can slow needed clinical trials.

  • Researchers have struggled to enroll people in clinical trials in which they may receive a placebo if patients can access a drug directly.
  • One example: “There’s been some hiccups with the expanded access use for convalescent plasma, because it was something that precluded people from enrolling in a randomized control trial, so it took longer, and we still don’t quite know how well convalescent plasma works,” says Amesh Adalja, an infectious disease physician and senior scholar at the Johns Hopkins Center for Health Security.

More than 100,000 COVID-19 patients at almost 2,800 U.S. hospitals received convalescent plasma from people who survived the virus and developed antibodies to it.

  • “It’s easy for people to say you enrolled 100,000 people, there should have been a trial. But a small number of those 2,800 hospitals would have been capable of doing those trials,” says the Mayo Clinic’s Michael Joyner, who leads the program.
  • There are now smaller trials taking place to answer questions about the effectiveness of plasma in treating the disease in different stages.
  • But if this happens again, Joyner says programs at academic medical centers should be peeled off earlier to form clinical trials run in parallel.

The gold standard for determining whether a treatment works is the randomized controlled trial, in which people are randomly assigned to receive a treatment or to be in a control group (a minimal sketch of that assignment step follows the bullets below).

  • In the uncertainty and urgency of a pandemic, some physicians argue randomizing people to receive a placebo goes against physicians’ ethics and that it is better to do something to help patients than do nothing.
  • “That’s a false dichotomy because the question is, what should we do?” says London.
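
The random-assignment step at the heart of a randomized controlled trial is simple enough to sketch in a few lines. The code below is illustrative only: it assumes a two-arm trial with 1:1 allocation, and the patient labels and function name are placeholders, not any trial’s actual protocol.

```python
# Illustrative sketch of the mechanism behind a randomized controlled trial:
# 1:1 random assignment to a treatment arm or a control (placebo) arm.
# Real trials add blinding, stratification and block randomization; none of
# that is modeled here, and the patient labels are placeholders.

import random


def randomize(participants, seed=None):
    """Shuffle the enrollment list and split it evenly between two arms."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return {"treatment": shuffled[:midpoint], "control": shuffled[midpoint:]}


arms = randomize([f"patient_{i:02d}" for i in range(10)], seed=42)
print("treatment arm:", arms["treatment"])
print("control arm:  ", arms["control"])
```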

From a doctor’s perspective, it’s important to weigh the collective value of the early drug data and the individual needs of the patient, Adalja says.

  • “I do think you have to be extra careful when you’re thinking about drugs that you don’t have strong randomized control trial data for, or the data is incomplete or inconclusive,” he adds.
  • “What people have to ask themselves is what constitutes evidence or proof and where do you want to make the bets in a pandemic?” says Joyner.
  • “There is a moral, legal and public health obligation to do those trials before people use those products,” says Alison Bateman-House, a professor of medical ethics at NYU’s Grossman School of Medicine who co-chairs an international working group on pre-approval access to treatments.
  • She says she understands the emotional pull on doctors to help patients whose health is quickly deteriorating, “but it is not evidence-based medicine.”

“There is no ethical obligation to give anyone an unproven substance.”

Alison Bateman-House, NYU Grossman School of Medicine

In a forthcoming paper, London argues that when medical professionals don’t have the knowledge they need to treat patients, it is their responsibility “to band together and run studies to get evidence to discharge [their] very ancient medical obligation.”

  • Medical ethics should be updated to include a responsibility to learn in the face of uncertainty, says London, who was part of a committee that called for research to be incorporated into the response to the Ebola outbreak in West Africa in 2014.
  • The U.K.’s large randomized RECOVERY trial is based in part on the Ebola experience, says London. “Because of it, we know dexamethasone is effective and hydroxychloroquine is not.”

What to watch: How the FDA’s handling of treatments during the pandemic influences other drugs and diseases once the pandemic ends.

The bottom line: “Medicine doesn’t have a good handle on uncertainty, and that is a problem,” says London.