In a matter of months, ChatGPT has radically altered our nation’s views on artificial intelligence—uprooting old assumptions about AI’s limitations and kicking the door wide open for exciting new possibilities.
One aspect of our lives sure to be touched by this rapid acceleration in technology is U.S. healthcare. But the extent to which tech will improve our nation’s health depends on whether regulators embrace the future or cling stubbornly to the past.
Why our minds live in the past
In the 1760s, Scottish inventor James Watt revolutionized the steam engine, marking an extraordinary leap in engineering. But Watt knew that if he wanted to sell his innovation, he needed to convince potential buyers of its unprecedented power. With a stroke of marketing genius, he began telling people that his steam engine could replace 10 cart-pulling horses. People at the time immediately understood that a machine with 10 “horsepower” must be a worthy investment. Watt’s sales took off. And his long-since-antiquated measurement of power remains with us today.
Even now, people struggle to grasp the breakthrough potential of revolutionary innovations. When faced with a new and powerful technology, people feel more comfortable with what they know. Rather than embracing an entirely different mindset, they remain stuck in the past, making it difficult to harness the full potential of future opportunities.
Too often, that’s exactly how U.S. government agencies go about regulating advances in healthcare. In medicine, the consequences of applying 20th-century assumptions to 21st-century innovations prove fatal.
Here are three ways regulators do damage by failing to keep up with the times:
1. Devaluing ‘virtual visits’
Established in 1973 to combat drug abuse, the Drug Enforcement Administration (DEA) now faces an opioid epidemic that claims more than 100,000 lives a year.
One solution to this deadly problem, according to public health advocates, combines modern information technology with an effective form of addiction treatment.
Thanks to the Covid-19 Public Health Emergency (PHE) declaration, telehealth use skyrocketed during the pandemic. Out of necessity, regulators relaxed previous telemedicine restrictions, allowing more patients to access medical services remotely while enabling doctors to prescribe controlled substances, including buprenorphine, via video visits.
For people battling drug addiction, buprenorphine is a “Goldilocks” medication with just enough efficacy to prevent withdrawal yet not enough to result in severe respiratory depression, overdose or death. Research from the National Institutes of Health (NIH) found that buprenorphine improves retention in drug-treatment programs. It has helped thousands of people reclaim their lives.
But because this opioid produces slight euphoria, drug officials worry it could be abused and that telemedicine prescribing will make it easier for bad actors to push buprenorphine onto the black market. Now, with the PHE declaration set to expire, the DEA has laid out plans to limit telehealth prescribing of buprenorphine.
The proposed regulations would let doctors prescribe a 30-day course of the drug via telehealth, but would mandate an in-person visit with a doctor for any renewals. The agency believes this will “prevent the online overprescribing of controlled medications that can cause harm.”
The DEA’s assumption that an in-person visit is safer and less corruptible than a virtual visit is outdated and contradicted by clinical research. A recent NIH study, for example, found that overdose deaths involving buprenorphine did not proportionally increase during the pandemic. Likewise, a Harvard study found that telemedicine is as effective as in-person care for opioid use disorder.
Of course, regulators need to monitor the prescribing frequency of controlled substances and conduct audits to weed out fraud. Furthermore, they should demand that prescribing physicians receive proper training and document their patient-education efforts concerning medical risks.
But these requirements should apply to all clinicians, regardless of whether the patient is physically present. After all, abuses can happen as easily and readily in person as online.
The DEA needs to move its mindset into the 21st century because our nation’s outdated approach to addiction treatment isn’t working. More than 100,000 deaths a year prove it.
2. Restricting an unrestrainable new technology
Technologists predict that generative AI, like ChatGPT, will transform American life, drastically altering our economy and workforce. I’m confident it also will transform medicine, giving patients greater (a) access to medical information and (b) control over their own health.
So far, the rate of progress in generative AI has been staggering. Just months ago, the original version of ChatGPT passed the U.S. medical licensing exam, but barely. Weeks ago, Google’s Med-PaLM 2 achieved an impressive 85% on the same exam, placing it in the realm of expert doctors.
With great technological capability comes great fear, especially from U.S. regulators. At the Health Datapalooza conference in February, Food and Drug Administration (FDA) Commissioner Robert M. Califf emphasized his concern when he pointed out that ChatGPT and similar technologies can either aid or exacerbate the challenge of helping patients make informed health decisions.
Worried comments also came from the Federal Trade Commission, prompted in part by an open letter signed by tech leaders like Elon Musk and Steve Wozniak, who posited that the new technology “poses profound risks to society and humanity.” In response, FTC chair Lina Khan pledged to pay close attention to the growing AI industry.
Attempts to regulate generative AI will almost certainly happen and likely soon. But agencies will struggle to accomplish it.
To date, U.S. regulators have evaluated hundreds of AI applications as medical devices or “digital therapeutics.” In 2022, for example, Apple received premarket clearance from the FDA for a new smartwatch feature that lets users know if their heart rhythm shows signs of atrial fibrillation (AFib). For each AI product that undergoes FDA scrutiny, the agency tests the embedded algorithms for effectiveness and safety, similar to a medication.
ChatGPT is different. It’s not a medical device or digital therapy programmed to address a specific or measurable medical problem. And it doesn’t contain a simple algorithm that regulators can evaluate for efficacy and safety. The reality is that any GPT-4 user today can type in a query and receive detailed medical advice in seconds. ChatGPT is a broad facilitator of information, not a narrowly focused, clinical tool. Therefore, it defies the types of analysis regulators traditionally apply.
In that way, ChatGPT is similar to the telephone. Regulators can evaluate the safety of smartphones, measuring how much electromagnetic radiation they give off or whether the devices themselves pose a fire hazard. But they can’t regulate the safety of how people use them. Friends can and often do give each other terrible advice by phone.
Therefore, aside from blocking ChatGPT outright, there’s no way to stop individuals from asking it for a diagnosis, medication recommendation or help with deciding on alternative medical treatments. And while the technology has been temporarily banned in Italy, that’s unlikely to happen in the United States.
If we want to ensure the safety of ChatGPT, improve health and save lives, government agencies should focus on educating Americans on this technology rather than trying to restrict its usage.
3. Preventing doctors from helping more people
Doctors can apply for a medical license in any state, but the process is time-consuming and laborious. As a result, most physicians are licensed only where they live. That deprives patients in the other 49 states of access to their medical expertise.
The reason for this approach dates back more than 230 years. When the Bill of Rights was ratified in 1791, the practice of medicine varied greatly by geography. So, states were granted the right to license physicians through their state boards.
In 1910, the Flexner report highlighted widespread failures of medical education and recommended a standard curriculum for all doctors. This process of standardization culminated in 1992 when all U.S. physicians were required to take and pass a set of national medical exams. And yet, 30 years later, fully trained and board-certified doctors still have to apply for a medical license in every state where they wish to practice medicine. Without a second license, a doctor in Chicago can’t provide care to a patient across a state border in Indiana, even if separated by mere miles.
The PHE declaration did allow doctors to provide virtual care to patients in other states. However, with that policy expiring in May, physicians will again face overly restrictive regulations held over from centuries past.
Given the advances in medicine, the availability of technology and growing shortage of skilled clinicians, these regulations are illogical and problematic. Heart attacks, strokes and cancer know no geographic boundaries. With air travel, people can contract medical illnesses far from home. Regulators could safely implement a common national licensing process—assuming states would recognize it and grant a medical license to any doctor without a history of professional impropriety.
But that’s unlikely to happen. The reason is financial. Licensing fees support state medical boards. And state-based restrictions limit competition from out of state, allowing local providers to drive up prices.
To address healthcare’s quality, access and affordability challenges, we need to achieve economies of scale. That would be best done by allowing all doctors in the U.S. to join one care-delivery pool, rather than retaining 50 separate ones.
Doing so would allow for a national mental-health service, giving people in underserved areas access to trained therapists and helping reduce the 46,000 suicides that take place in America each year.
Regulators need to catch up
Medicine is a complex profession in which errors kill people. That’s why we need healthcare regulations. Doctors and nurses need to be well trained, and life-threatening medications can’t be allowed to fall into the hands of people who would misuse them.
But when outdated thinking leads to deaths from drug overdoses, prevents patients from improving their own health and limits access to the nation’s best medical expertise, regulators need to recognize the harm they’re doing.
Healthcare is changing as technology races ahead. Regulators need to catch up.