AI in medicine: 3 easy questions to separate hype from reality

https://www.linkedin.com/pulse/ai-medicine-3-easy-questions-separate-hype-from-robert-pearl-m-d–ctznc/

Artificial intelligence has long been heralded as a transformative force in medicine. Yet, until recently, its potential has remained largely unfulfilled.

Consider the story of MYCIN, a “rule-based” AI system developed in the 1970s at Stanford University to help diagnose infections and recommend antibiotics. Though MYCIN showed early promise, it relied on rigid, predetermined rules and lacked the flexibility to handle unexpected or complex cases that arise in real-world medicine. Ultimately, the technology of the time couldn’t match the nuanced judgment of skilled clinicians, and MYCIN never achieved widespread clinical use.

Fast forward to 2011, when IBM’s Watson gained global fame by besting renowned Jeopardy! champions Ken Jennings and Brad Rutter. Soon after, IBM applied Watson’s vast computing power to healthcare, envisioning it as a game changer in oncology. Tasked with synthesizing data from medical literature and patient records at Memorial Sloan Kettering, Watson aimed to recommend tailored cancer treatments.

However, the AI struggled to provide reliable, relevant recommendations—not because of any computational shortcoming but due to inconsistent, often incomplete, data sources. These included imprecise electronic health record entries and research articles that leaned too heavily toward favorable conclusions, failing to hold up in real-world clinical settings. IBM shut down the project in 2020.

Today, healthcare and tech leaders question whether the latest wave of AI tools—including much-heralded generative artificial intelligence models—will deliver on their promise in medicine or become footnotes in history like MYCIN and Watson.

Anthropic CEO Dario Amodei is among the AI optimists. Last month, in a sprawling 15,000-word essay, he predicted that AI would soon reshape humanity’s future. He claimed that by 2026, AI tools (presumably including Anthropic’s Claude) will become “smarter than a Nobel Prize winner.”

Specific to human health, Amodei touted AI’s ability to eliminate infectious diseases, prevent genetic disorders and double life expectancy to 150 years—all within the next decade.

While I admire parts of Amodei’s vision, my technological and medical background makes me question some of his most ambitious predictions.

When people ask me how to separate AI hype from reality in medicine, I suggest starting with three critical questions:

Question 1: Will the AI solution speed up a process or task that humans could eventually complete on their own?

Sometimes, scientists have the knowledge and expertise to solve complex medical problems but are limited by time and cost. In these situations, AI tools can deliver remarkable breakthroughs.

Consider AlphaFold2, a system developed by Google DeepMind to predict how proteins fold into their three-dimensional structures. For decades, researchers struggled to map these large, intricate molecules; deciphering the exact shape of a single protein could take years and cost millions of dollars. Yet, understanding these structures is invaluable, as they reveal how proteins function, interact and contribute to diseases.

With deep learning and massive datasets, AlphaFold2 accomplished in days what would have taken labs decades, predicting hundreds of proteins’ structures. Within four years, it mapped all known proteins—a feat that won DeepMind researchers a Nobel Prize in Chemistry and is now accelerating drug discovery and medical research.
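
To make that concrete, here is a minimal sketch of how a researcher might pull one of those precomputed structures from the public AlphaFold Protein Structure Database. It is written in Python; the endpoint URL, the pdbUrl field name and the example UniProt accession are assumptions drawn from the database’s public documentation at the time of writing, so treat it as an illustrative sketch rather than a definitive recipe.

import json
import urllib.request

def fetch_predicted_structure(accession: str) -> str:
    # Ask the AlphaFold DB prediction endpoint for a UniProt accession.
    # (Endpoint and field names are assumed from the public docs; verify before relying on them.)
    api_url = f"https://alphafold.ebi.ac.uk/api/prediction/{accession}"
    with urllib.request.urlopen(api_url) as resp:
        entries = json.load(resp)  # the endpoint returns a list of prediction records
    pdb_url = entries[0]["pdbUrl"]  # link to the predicted coordinates in PDB format
    with urllib.request.urlopen(pdb_url) as resp:
        return resp.read().decode("utf-8")

# Example: P69905 is the UniProt accession for human hemoglobin subunit alpha.
structure = fetch_predicted_structure("P69905")
print(structure.splitlines()[0])  # first PDB record, as a quick sanity check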

Another example is a collaborative project between the University of Pittsburgh and Carnegie Mellon, where AI analyzed electronic health records to identify adverse drug interactions. Traditionally, this process took months of manual review to uncover just a few risks. With AI, researchers were able to examine thousands of medications in days, drastically improving speed and accuracy.

These achievements show that when science has a clear path but lacks the speed, tools and scale for execution, AI can bridge the gap. In fact, if today’s generative AI technology existed in the 1990s, ChatGPT estimates it could have sequenced the entire human genome in less than a year—a project that originally took 13 years and $2.7 billion.

Applying this criterion to Amodei’s assertion that AI will soon eliminate most infectious diseases, I believe this goal is realistic. Today’s AI technology already analyzes vast amounts of data on drug efficacy and side effects, discovering new uses for existing medications. AI is also proving effective in guiding the development of new drugs and may help address the growing issue of antibiotic resistance. I agree with Amodei that AI will be able to accomplish in a few years what otherwise would have taken scientists decades, offering fresh hope in the fight against human pathogens.

Question 2: Does the complexity of human genetics make the problem unsolvable, no matter how smart the technology?

Imagine searching for a needle in a giant haystack. When a single answer is hidden within mountains of data, AI can find it much faster than humans alone. But if that “needle” is metallic dust, scattered across multiple haystacks, the challenge becomes insurmountable, even for AI.

This analogy captures why certain medical problems remain beyond AI’s reach. In his essay, Amodei predicts that generative AI will eliminate most genetic disorders, cure cancer and prevent Alzheimer’s within a decade.

While AI will undoubtedly deepen our understanding of the human genome, many of the diseases Amodei highlights as curable are “multifactorial,” meaning they result from the combined impact of dozens of genes, plus environmental and lifestyle factors. To better understand why this complexity limits AI’s reach, let’s first examine simpler, single-gene disorders, where the potential for AI-driven treatment is more promising.

For certain genetic disorders that result from a single-gene abnormality, like BRCA-linked cancers or sickle cell disease, AI can play a valuable role by helping researchers identify the responsible mutations and, potentially, correct them with CRISPR, an advanced gene-editing tool, to reduce disease risk.

Yet even with single-gene conditions, treatment is complex. CRISPR-based therapies for sickle cell, for example, require harvesting stem cells, editing them in a lab and reinfusing them after risky conditioning treatments that pose significant health threats to patients.

Knowing this, it’s evident that the complications would only multiply when editing multifactorial congenital diseases like cleft lip and palate—or complex diseases that manifest later in life, including cardiovascular disease and cancer.

Put simply, editing dozens of genes simultaneously would introduce severe health risks that would most likely outweigh the benefits. While generative AI’s capabilities are accelerating at an exponential rate, gene-editing technologies like CRISPR face strict limits imposed by human biology. Our bodies have intricate, interdependent functions, so correcting multiple genetic issues in tandem would disrupt essential biological processes in unpredictable, probably fatal ways.

No matter how advanced an AI tool may become in identifying genetic patterns, inherent biological constraints mean that multifactorial diseases will remain unsolvable. In this respect, Amodei’s prediction about curing genetic diseases will prove only partially correct.

Question 3: Will the AI’s success depend on people changing their behaviors?

One of the greatest challenges for AI applications in medicine isn’t technological but psychological: it’s about navigating human behavior and our tendency toward illogical or biased decisions. While we might assume that people will do everything they can to prolong their lives, human emotions and habits tell a different story.

Consider the management of chronic diseases like hypertension and diabetes. In this battle, technology can be a strong ally. Advanced home monitoring and wearable devices currently track blood pressure, glucose and oxygen levels with impressive accuracy. Soon, AI systems will analyze these readings, recommend diet and exercise adjustments and alert patients and clinicians when medication changes are needed.

But even the most sophisticated AI tools can’t force patients to reliably follow medical advice—or ensure that doctors will respond to every alert.

Humans are flawed, forgetful and fallible. Patients skip doses, ignore dietary recommendations and abandon exercise goals. On the clinician side, busy schedules, burnout and competing priorities often lead to missed opportunities for timely interventions. These behavioral factors add layers of unpredictability and unresponsiveness that even the most accurate AI systems cannot overcome.

And in addition to behavioral challenges, there are biological limits on the human lifespan. As we grow older, the protective caps on our chromosomes, called telomeres, wear down, causing cells to stop functioning. Our cells’ energy sources, called mitochondria, gradually fail, weakening our bodies until vital organs cease to function. Short of replacing every cell and tissue in our bodies, our organs will eventually give out. And even if generative AI could tell us exactly what we needed to do to prevent these failings, it is unlikely people would consistently follow the recommendations.

For these reasons, Amodei’s boldest prediction—that longevity will double to 150 years within a decade—won’t happen. AI offers remarkable tools and intelligence. It will expand our knowledge far beyond anything we can imagine today. But ultimately, it cannot override the natural and complex limitations of human life: aging parts and illogical behaviors.

In the end, you should embrace AI promises when they build on scientific research. But when they violate biological or psychological principles, don’t believe the hype.

Misinformation on coronavirus is proving highly contagious

https://apnews.com/86f61f3ffb6173c29bc7db201c10f141?utm_source=Sailthru&utm_medium=email&utm_campaign=Newsletter%20Weekly%20Roundup:%20Healthcare%20Dive:%20Daily%20Dive%2008-01-2020&utm_term=Healthcare%20Dive%20Weekender

As the world races to find a vaccine and a treatment for COVID-19, there is seemingly no antidote in sight for the burgeoning outbreak of coronavirus conspiracy theories, hoaxes, anti-mask myths and sham cures.

The phenomenon, unfolding largely on social media, escalated this week when President Donald Trump retweeted a false video about an anti-malaria drug being a cure for the virus and it was revealed that Russian intelligence is spreading disinformation about the crisis through English-language websites.

Experts worry the torrent of bad information is dangerously undermining efforts to slow the virus, whose death toll in the U.S. hit 150,000 Wednesday, by far the highest in the world, according to the tally kept by Johns Hopkins University. Over a half-million people have died in the rest of the world.

For most people, the virus causes only mild or moderate symptoms, such as fever and cough. For some older adults and people with existing health problems, it can cause more severe illness, including pneumonia.

Hard-hit Florida reported 216 deaths, breaking the single-day record it set a day earlier. Texas confirmed 313 additional deaths, pushing its total to 6,190, while South Carolina’s death toll passed 1,500 this week, more than doubling over the past month. In Georgia, hospitalizations have more than doubled since July 1.

“It is a real challenge in terms of trying to get the message to the public about what they can really do to protect themselves and what the facts are behind the problem,” said Michael Osterholm, head of the University of Minnesota’s Center for Infectious Disease Research and Policy.

He said the fear is that “people are putting themselves in harm’s way because they don’t believe the virus is something they have to deal with.”

Rather than fade away in the face of new evidence, the claims have flourished, fed by mixed messages from officials, transmitted by social media, amplified by leaders like Trump and mutating when confronted with contradictory facts.

“You don’t need masks. There is a cure,” Dr. Stella Immanuel promised in a video that promoted hydroxychloroquine. “You don’t need people to be locked down.”

The truth: Federal regulators last month revoked their authorization of the drug as an emergency treatment amid growing evidence it doesn’t work and can have deadly side effects. Even if it were effective, it wouldn’t negate the need for masks and other measures to contain the outbreak.

None of that stopped Trump, who has repeatedly praised the drug, from retweeting the video. Twitter and Facebook began removing the video Monday for violating policies on COVID-19 misinformation, but it had already been seen more than 20 million times.

Many of the claims in Immanuel’s video are widely disputed by medical experts. She has made even more bizarre pronouncements in the past, saying that cysts, fibroids and some other conditions can be caused by having sex with demons, that McDonald’s and Pokemon promote witchcraft, that alien DNA is used in medical treatments, and that half-human “reptilians” work in the government.

Other baseless theories and hoaxes have alleged that the virus isn’t real or that it’s a bioweapon created by the U.S. or its adversaries. One hoax from the outbreak’s early months claimed new 5G towers were spreading the virus through microwaves. Another popular story held that Microsoft founder Bill Gates plans to use COVID-19 vaccines to implant microchips in all 7 billion people on the planet.

Then there are the political theories — that doctors, journalists and federal officials are conspiring to lie about the threat of the virus to hurt Trump politically.

Social media has amplified the claims and helped believers find each other. The flood of misinformation has posed a challenge for Facebook, Twitter and other platforms, which have found themselves accused of censorship for taking down virus misinformation.

Facebook CEO Mark Zuckerberg was questioned about Immanuel’s video during an often-contentious congressional hearing Wednesday.

“We did take it down because it violates our policies,” Zuckerberg said.

U.S. Rep. David Cicilline, a Rhode Island Democrat leading the hearing, responded by noting that 20 million people saw the video before Facebook acted.

“Doesn’t that suggest that your platform is so big, that even with the right policies in place, you can’t contain deadly content?” Cicilline asked Zuckerberg.

It wasn’t the first video containing misinformation about the virus, and experts say it’s not likely to be the last.

A professionally made 26-minute video that alleges the government’s top infectious-disease expert, Dr. Anthony Fauci, manufactured the virus and shipped it to China was watched more than 8 million times before the platforms took action. The video, titled “Plandemic,” also warned that masks could make you sick — the false claim Facebook cited when it took the video down from its site.

Judy Mikovits, the discredited researcher behind “Plandemic,” had been set to appear on the show “America This Week” on the Sinclair Broadcast Group. But the company, which operates TV stations in 81 U.S. markets, canned the segment, saying it was “not appropriate” to air.

This week, U.S. government officials speaking on condition of anonymity cited what they said was a clear link between Russian intelligence and websites with stories designed to spread disinformation on the coronavirus in the West. Russian officials rejected the accusations.

Of all the bizarre and myriad claims about the virus, those regarding masks are proving to be among the most stubborn.

New York City resident Carlos Lopez said he wears a mask when required to do so but doesn’t believe it is necessary.

“They’re politicizing it as a tool,” he said. “I think it’s more to try to get Trump to lose. It’s more a scare tactic.”

He is in the minority. A recent AP-NORC poll found that 3 in 4 Americans — Democrats and Republicans alike — support a national mask mandate.

Still, mask skeptics are a vocal minority and have come together to create social media pages where many false claims about mask safety are shared. Facebook has removed some of the pages — such as the group Unmasking America!, which had nearly 10,000 members — but others remain.

Early in the pandemic, medical authorities themselves were the source of much confusion regarding masks. In February, officials like the U.S. surgeon general urged Americans not to stockpile masks because they were needed by medical personnel and might not be effective in everyday situations.

Public health officials changed their tune when it became apparent that the virus could spread among people showing no symptoms.

Yet Trump remained reluctant to use a mask, mocked his rival Joe Biden for wearing one and suggested people might be covering their faces just to hurt him politically. He did an abrupt about-face this month, claiming that he had always supported masks — then later retweeted Immanuel’s video against masks.

The mixed signals hurt, Fauci acknowledged in an interview with NPR this month.

“The message early on became confusing,” he said.

Many of the claims around masks allege harmful effects, such as blocked oxygen flow or even a greater chance of infection. The claims have been widely debunked by doctors.

Dr. Maitiu O Tuathail of Ireland grew so concerned about mask misinformation he posted an online video of himself comfortably wearing a mask while measuring his oxygen levels. The video has been viewed more than 20 million times.

“While face masks don’t lower your oxygen levels, COVID definitely does,” he warned.

Yet trusted medical authorities are often being dismissed by those who say requiring people to wear masks is a step toward authoritarianism.

“Unless you make a stand, you will be wearing a mask for the rest of your life,” tweeted Simon Dolan, a British businessman who has sued the government over its COVID-19 restrictions.

Trump’s reluctant, ambivalent and late embrace of masks hasn’t convinced some of his strongest supporters, who have concocted ever more elaborate theories to explain his change of heart. Some say he was actually speaking in code and doesn’t really support masks.

O Tuathail witnessed just how unshakable COVID-19 misinformation can be when, after broadcasting his video, he received emails from people who said he cheated or didn’t wear the mask long enough to feel the negative effects.

That’s not surprising, according to University of Central Florida psychology professor Chrysalis Wright, who studies misinformation. She said conspiracy theory believers often engage in mental gymnastics to make their beliefs conform with reality.

“People only want to hear what they already think they know,” she said.

Cartoon – Today’s Retirement Reality
