Time to Say Goodbye to Some Insurers’ Waivers for Covid Treatment Fees

Just as other industries are rolling back some consumer-friendly changes made early in the pandemic — think empty middle seats on airplanes — so, too, are health insurers.

Many voluntarily waived all deductibles, copayments and other costs for insured patients who fell ill with covid-19 and needed hospital care, doctor visits, medications or other treatment.

Setting aside those fees was a good move from a public relations standpoint. The industry got credit for helping customers during tough times. And it had political and financial benefits for insurers, too.

But nothing lasts forever.

Starting at the end of last year — and continuing into the spring — a growing number of insurers are quietly ending those fee waivers for covid treatment on some or all policies.

“When it comes to treatment, more and more consumers will find that the normal course of deductibles, copayments and coinsurance will apply,” said Sabrina Corlette, research professor and co-director of the Center on Health Insurance Reforms at Georgetown University.

Even so, “the good news is that vaccinations and most covid tests should still be free,” added Corlette.

That’s because federal law requires insurers to waive costs for covid testing and vaccination.

Guidance issued early in President Joe Biden’s term reinforced that Trump administration rule about waiving cost sharing for testing and said it applies even in situations in which an asymptomatic person wants a test before, say, visiting a relative.

But treatment is different.

Insurers voluntarily waived those costs, so they can decide when to reinstate them.

Indeed, the initial step not to charge treatment fees may have preempted any effort by the federal government to mandate it, said Cynthia Cox, a vice president at KFF and director for its program on the Affordable Care Act.

In a study released in November, researchers found about 88% of people covered by insurance plans — those bought by individuals and some group plans offered by employers — had policies that waived such payments at some point during the pandemic, said Cox, a co-author. But many of those waivers were expected to expire by the end of the year or early this year.

Some did.

Anthem, for example, stopped them at the end of January. UnitedHealth, another of the nation’s largest insurers, began rolling back waivers in the fall, finishing up by the end of March. Deductible-free inpatient treatment for covid through Aetna expired Feb. 28.

A few insurers continue to forgo patient cost sharing in some types of policies. Humana, for example, has left the cost-sharing waiver in place for Medicare Advantage members, but dropped it Jan. 1 for those in job-based group plans.

Not all are making the changes.

For example, Premera Blue Cross in Washington and Sharp Health Plan in California have extended treatment cost waivers through June. Kaiser Permanente said it is keeping its program in place for members diagnosed with covid and has not set an end date. Meanwhile, UPMC in Pittsburgh planned to continue to waive all copayments and deductibles for in-network treatment through April 20.

What It All Means

Waivers may result in little savings for people with mild cases of covid that are treated at home. But the savings for patients who fall seriously ill and wind up in the hospital could be substantial.

Emergency room visits and hospitalization are expensive, and many insured patients must pay a portion of those costs through annual deductibles before full coverage kicks in.

Deductibles have been on the rise for years. Single-coverage deductibles for people who work for large employers average $1,418, while those for employees of small firms average $2,295, according to a survey of employers by KFF. (KHN is an editorially independent program of KFF.)

Annual deductibles for Affordable Care Act plans are generally higher, depending on the plan type.

Both kinds of coverage also include copayments, which are flat-dollar amounts, and often coinsurance, which is a percentage of the cost of office visits, hospital stays and prescription drugs.

Ending the waivers for treatment “is a big deal if you get sick,” said Robert Laszewski, an insurance industry consultant in Maryland. “And then you find out you have to pay $5,000 out-of-pocket that your cousin didn’t two months ago.”

Costs and Benefits

Still, those patient fees represent only a slice of the overall cost of caring for a hospitalized patient with covid.

While the waivers helped patients’ cash flow, insurers saw other kinds of benefits.

For one thing, insurers recognized early on that patients — facing stay-at-home orders and other restrictions — were avoiding medical care in droves, driving down what insurers had to fork out for care.

“I think they were realizing they would be reporting extraordinarily good profits because they could see utilization dropping like a rock,” said Laszewski. “Doctors, hospitals, restaurants and everyone else were in big trouble. So, it was good politics to waive copays and deductibles.”

Besides generating goodwill, insurers may benefit in another way.

Under the ACA, insurers are required to spend at least 80% of their premium revenue on direct health care, rather than on marketing and administration. (Large group plans must spend 85%.)

By waiving those fees, insurers’ own spending went up a bit, potentially helping offset some share of what are expected to be hefty rebates this summer. That’s because insurers whose spending on direct medical care falls short of the ACA’s threshold must issue rebates by Aug. 1 to the individuals or employers who purchased the plans.
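That 80% threshold, the ACA’s “medical loss ratio,” makes the rebate mechanics easy to sketch. The toy calculation below is a simplified, single-year illustration (the law’s actual formula averages three years of financial data), and all dollar figures are invented:

```python
# Simplified, single-year illustration of the ACA medical loss ratio (MLR)
# rebate rule. The real calculation averages three years of data; the
# numbers here are invented for illustration only.
def aca_rebate(premium_revenue, medical_spending, mlr_floor=0.80):
    """Rebate owed when medical spending falls below the MLR floor."""
    required_spending = mlr_floor * premium_revenue
    return max(0.0, required_spending - medical_spending)

# A hypothetical insurer collects $100M in premiums but spends only $74M
# on direct medical care, falling $6M short of the 80% ($80M) floor.
print(aca_rebate(100e6, 74e6))  # 6000000.0
```

In this frame, every dollar an insurer spent covering waived copays and deductibles counted toward the 80% floor, shrinking the rebate it would otherwise owe.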

A record $2.5 billion was rebated for policies in effect in 2019, with the average rebate per person coming in at about $219.

Knowing their spending was falling during the pandemic helped fuel decisions to waive patient copayments for treatment, since insurers knew “they would have to give this money back in one form or another because of the rebates,” Cox said.

It’s a mixed bag for consumers.

“If they completely offset the rebates through waiving cost sharing, then it strictly benefits only those with covid who needed significant treatment,” noted Cox. “But, if they issue rebates, there’s more broad distribution.”

Even with that, insurers can expect to send a lot back in rebates this fall.

In a report out this week, KFF estimated that insurers may owe $2.1 billion in rebates for last year’s policies, the second-highest amount issued under the ACA. Under the law, rebate amounts are based on three years of financial data and profits. Final numbers aren’t expected until later in the year.

The rebates “are likely driven in part by suppressed health care utilization during the COVID-19 pandemic,” the report says.

Still, economist Joe Antos at the American Enterprise Institute says waiving the copays and deductibles may boost goodwill in the public eye more than rebates. “It’s a community benefit they could get some credit for,” said Antos, whereas many policyholders who get a small rebate check may just cash it and “it doesn’t have an impact on how they think about anything.”

Europe to set a vaccine passport standard


Europe seems poised to set the global standard for vaccine passports, now that European Commission President Ursula von der Leyen has signaled that vaccinated Americans will be allowed to travel to the continent this summer.

Why it matters: Opening up travel to vaccinated Americans will bring new urgency to creating some kind of trusted means for people to prove they’ve been vaccinated, Axios’ Felix Salmon reports.

The big picture: There will probably never be a single credential that most people use to prove they’ve been vaccinated, for every purpose.

  • But the EU’s system will help set a standard for a proof of vaccination that’s both easily accessible and difficult to forge.
  • The U.S. is being closely consulted on the European passport, so any future American system will likely use similar protocols.

Details: Informal mechanisms like simply asking someone whether they’ve had a shot can suffice in many situations. A system for international travel will likely be far more stringent. And there’s a wide middle, too.

  • Other activities that don’t need the same rigorous standards as international travel could rely on the CDC’s vaccination cards; options like a printed QR code, similar to what’s been proposed by PathCheck; or a digital QR code, like the ones created by CommonPass or the Vaccine Credential Initiative.

The bottom line: The world of vaccine passports is almost certainly going to end up as a mishmash of different credentials for different activities, rather than a single credential used by everybody for everything.

Cartoon – Preventable Diseases

Sack cartoon: Vaccinations | Star Tribune

Cartoon – Trailing with the Sheeples

Herd Immunity | Cartoon | mtexpress.com

Cartoon – No Vaccine for Stupidity

Anti-Vaxxers vs. Reality | KQED

‘Distancing isn’t helping you’: Indoor COVID-19 exposure risk same at 6, 60 feet, MIT researcher says


People who maintain 60 feet of distance from others indoors are no more protected than if they socially distanced by 6 feet, according to a peer-reviewed study published April 27 in the Proceedings of the National Academy of Sciences of the United States of America.

Cambridge-based Massachusetts Institute of Technology professors Martin Bazant and John Bush, PhD, developed a model to calculate indoor exposure risk to COVID-19 by factoring in the amount of time spent inside, air filtration and circulation, immunization, variant strains, mask use, and respiratory activity such as breathing, eating or talking.  

“We argue there really isn’t much of a benefit to the six-foot rule, especially when people are wearing masks,” Mr. Bazant told CNBC. “It really has no physical basis because the air a person is breathing while wearing a mask tends to rise and comes down elsewhere in the room so you’re more exposed to the average background than you are to a person at a distance.”

The study draws an analogy to secondhand smoke: even people wearing masks can be exposed to smoke that makes its way around an enclosed area and lingers. The same logic applies to airborne droplets of the virus, according to the study. However, the study did note that mask use by both infected and susceptible people reduces “respiratory plumes” and thus increases the amount of time people may safely spend together indoors.

When crafting guidelines, the CDC and World Health Organization have overlooked the amount of time spent indoors, Mr. Bazant claims.  

“What our analysis continues to show is that many spaces that have been shut down in fact don’t need to be,” Mr. Bazant said. “Oftentimes, the space is large enough, the ventilation is good enough, the amount of time people spend together is such that those spaces can be safely operated even at full capacity, and the scientific support for reduced capacity in those spaces is really not very good.”  

Opening windows or installing new fans to keep air moving may be just as effective or more effective than purchasing a new filtration system, Mr. Bazant said.
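The intuition behind the MIT analysis can be sketched with the classic Wells-Riley well-mixed-room model. To be clear, this is not the authors’ actual model, and every parameter value below is an invented placeholder; the point is simply that exposure time, room size and ventilation appear in the risk estimate, while distance between occupants does not:

```python
import math

# Wells-Riley style well-mixed-room estimate, in the same spirit as the
# MIT analysis but NOT the authors' exact model. All parameter values
# below are illustrative assumptions, not figures from the study.
def infection_risk(quanta_per_hour, room_volume_m3, ach,
                   breathing_m3_per_hour, hours):
    """Probability of infection for one susceptible occupant.

    quanta_per_hour: infectious 'quanta' emitted by one infected occupant
    ach: air changes per hour (ventilation rate)
    """
    # Steady-state quanta concentration in a well-mixed room
    concentration = quanta_per_hour / (ach * room_volume_m3)
    dose = concentration * breathing_m3_per_hour * hours
    return 1 - math.exp(-dose)

# Distance between occupants never enters the calculation:
low_vent = infection_risk(25, 200, 1, 0.5, 2)   # poorly ventilated room
high_vent = infection_risk(25, 200, 6, 0.5, 2)  # well-ventilated room
```

Raising the air-change rate proportionally lowers the steady-state concentration, which is why the study emphasizes ventilation and time spent indoors over distancing.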

The CDC currently recommends staying at least 6 feet away from other people and wearing a mask to slow the spread of COVID-19, citing the fact that the virus spreads mainly among people who are in close contact for a prolonged period.  

“The distancing isn’t helping you that much and it’s also giving you a false sense of security, because you’re as safe at six feet as you are at 60 feet if you’re indoors. Everyone in that space is at roughly the same risk, actually,” Mr. Bazant said. 

After three rounds of peer review, Mr. Bazant says he hopes the study will influence social distancing policies.

Turning to primary care for vaccine distribution

https://mailchi.mp/da8db2c9bc41/the-weekly-gist-april-23-2021?e=d1e747d2d8


Now that we’ve entered a new phase of the vaccine rollout, with supply beginning to outstrip demand and all adults eligible to get vaccinated, we’re hearing from a number of health systems that their strategy is shifting from a centralized, scheduled approach to a more distributed, access-driven model. They’re recognizing that, in order to get the vaccine to harder-to-reach populations, and to convince hesitant individuals to get vaccinated, they’ll need to lean more heavily on walk-in clinics, community settings, and yes—primary care physicians.

For some time, the primary care community has been complaining they’ve been overlooked in the national vaccination strategy, with health systems, pharmacy chains, and mass vaccination sites getting the lion’s share of doses. But now that we’re moving beyond the “if you build it, they will come” phase, and into the “please come get a shot” phase, we’ll need to lean much more heavily on primary care doctors, and the trusted relationships they have with their patients.

As one chief clinical officer told us this week, that means not just solving the logistical challenges of distributing vaccines to physician offices (which would be greatly aided by single-dose vials of vaccine, among other things), but planning for patient outreach. Simply advertising vaccine availability won’t suffice—now the playbook will have to include reaching out to patients to encourage them to sign up.

There will be workflow challenges as well, particularly while we await those single-dose shots—primary care clinics will likely need to schedule blocks of appointments, setting aside specific times of day or days of the week for vaccinations. The more distributed the vaccine rollout, the more operationally complex it will become. Health systems won’t be able to “get out of the vaccine business”, as one health system executive told us, because many have spent the past decade or more buying up primary care practices and rolling out urgent care locations. Now those assets must be enlisted in the service of vaccination rollout.

Health systems will have to orchestrate a “pull” strategy for vaccines, rather than the vaccination “push” they’ve been conducting for the past several months. To put it in military terms, the vaccination “air war” is over—now it’s time for what’s likely to be a protracted and difficult “ground campaign”.
 

Entering a new phase of the vaccine rollout

https://mailchi.mp/da8db2c9bc41/the-weekly-gist-april-23-2021?e=d1e747d2d8


With more than 222M doses of COVID vaccine administered, and 27.5 percent of the population now fully vaccinated, we are nearing a point at which vaccine supply will exceed demand, signaling a new phase of the rollout.

This week, for the first time since February, the daily rate of vaccinations slowed substantially, down about 11 percent from last week on a seven-day rolling average. Several states and counties are dialing back requests for new vaccine shipments, and the New York Times reported that some local health departments are beginning to shutter mass vaccination sites as appointment slots go unfilled.

On Friday, the White House’s COVID response coordinator, Jeff Zients, said that the Biden administration now expects “daily vaccination rates will fluctuate and moderate,” after several weeks of accelerating pace. In every state, everyone over the age of 16 is now eligible to be vaccinated, but experts expect that demand from the “vaccine-eager” population will run out over the next two weeks, necessitating a more aggressive campaign to distribute vaccines in hard-to-reach populations, and to convince vaccine skeptics to get the shot.

Vaccine hesitancy, like so many other issues related to the COVID pandemic, has now become starkly politicized—one recent survey found that 43 percent of Republicans “likely will never get” the vaccine, as opposed to only 5 percent of Democrats. Another 12 percent of those surveyed, regardless of party identification, say they plan to “see how it goes” before getting the vaccine, a subset that will surely be unnerved by continued doubts about the safety of the Johnson & Johnson (J&J) vaccine.

An expert advisory panel on Friday recommended that use of the J&J shot be resumed, but advised that a warning be included about potential risk of rare blood clots in women under 50. The first three months of the COVID vaccination campaign have been a staggering success—but getting from 27 percent fully vaccinated to the 80 percent needed for “herd immunity” will likely be a much tougher slog.

U.S. lifts pause on Johnson & Johnson’s coronavirus vaccine

The CDC and FDA on Friday lifted the recommended pause on use of Johnson & Johnson’s coronavirus vaccine, saying the benefits of the shot outweigh the risk of a rare blood clot disorder.

Why it matters: The move clears the way for states to immediately resume administering the one-shot vaccine.

  • The Johnson & Johnson shot had been seen as an important tool to fill gaps in the U.S. vaccination effort. But between the pause in its use and repeated manufacturing problems, its role in that effort is shrinking.

Driving the news: J&J shots have been paused for about two weeks, in response to reports that they may have caused serious blood clots in a small number of patients.

  • Only six people had experienced those blood clots at the time of the pause. The CDC said Friday that there have been nine additional cases.
  • Regulators said the number is small enough to safely resume the use of J&J’s vaccine.

What they’re saying: “Safety is our top priority. This pause was an example of our extensive safety monitoring working as they were designed to work — identifying even these small number of cases,” said acting FDA Commissioner Janet Woodcock.

  • “We’ve lifted the pause based on the FDA and CDC’s review of all available data and in consultation with medical experts and based on recommendations from the CDC’s Advisory Committee on Immunization Practices,” she said.
  • “We are confident that this vaccine continues to meet our standards for safety, effectiveness and quality.”

What’s next: Regulators said health care providers administering the shot and vaccine recipients should review revised fact sheets about the J&J vaccine, which include information about the rare blood clot disorder.

  • That heightened attention is important because the standard treatment for blood clots can make this particular type of clot worse.

Yes, but: J&J was already a relatively small part of the overall domestic vaccination effort, in part because the company missed some of its early manufacturing targets.

  • Multiple problems have since emerged at a Baltimore facility that makes a key ingredient for the vaccine, which could sideline production for weeks.

In scramble to respond to Covid-19, hospitals turned to models with high risk of bias


Of 26 health systems surveyed by MedCity News, nearly half used automated tools to respond to the Covid-19 pandemic, but none of them were regulated. Even as some hospitals continued using these algorithms, experts cautioned against their use in high-stakes decisions.

A year ago, Michigan Medicine faced a dire situation. In March of 2020, the health system predicted it would have three times as many patients as its 1,000-bed capacity — and that was the best-case scenario. Hospital leadership prepared for this grim prediction by opening a field hospital in a nearby indoor track facility, where patients could go if they were stable, but still needed hospital care. But they faced another predicament: How would they decide who to send there?

Two weeks before the field hospital was set to open, Michigan Medicine decided to use a risk model developed by Epic Systems to flag patients at risk of deterioration. Patients were given a score of 0 to 100, intended to help care teams determine if they might need an ICU bed in the near future. Although the model wasn’t developed specifically for Covid-19 patients, it was the best option available at the time, said Dr. Karandeep Singh, an assistant professor of learning health sciences at the University of Michigan and chair of Michigan Medicine’s clinical intelligence committee. But there was no peer-reviewed research to show how well it actually worked.

Researchers tested it on over 300 Covid-19 patients between March and May. They were looking for scores that would indicate when patients would need to go to the ICU, and if there was a point where patients almost certainly wouldn’t need intensive care.

“We did find a threshold where if you remained below that threshold, 90% of patients wouldn’t need to go to the ICU,” Singh said. “Is that enough to make a decision on? We didn’t think so.”

But if the number of patients were to far exceed the health system’s capacity, it would be helpful to have some way to assist with those decisions.

“It was something that we definitely thought about implementing if that day were to come,” he said in a February interview.

Thankfully, that day never came.

The survey
Michigan Medicine is one of 80 hospitals contacted by MedCity News between January and April in a survey of decision-support systems implemented during the pandemic. Of the 26 respondents, 12 used machine learning tools or automated decision systems as part of their pandemic response. Larger hospitals and academic medical centers used them more frequently.

Faced with scarcities in testing, masks, hospital beds and vaccines, several of the hospitals turned to models as they prepared for difficult decisions. The deterioration index created by Epic was one of the most widely implemented — more than 100 hospitals are currently using it — but in many cases, hospitals also formulated their own algorithms.

They built models to predict which patients were most likely to test positive when shortages of swabs and reagents backlogged tests early in the pandemic. Others developed risk-scoring tools to help determine who should be contacted first for monoclonal antibody treatment, or which Covid patients should be enrolled in at-home monitoring programs.

MedCity News also interviewed hospitals on their processes for evaluating software tools to ensure they are accurate and unbiased. Currently, the FDA does not require some clinical decision-support systems to be cleared as medical devices, leaving the developers of these tools and the hospitals that implement them responsible for vetting them.

Among the hospitals that published efficacy data, some of the models were only evaluated through retrospective studies. This can pose a challenge in figuring out how clinicians actually use them in practice, and how well they work in real time. And while some of the hospitals tested whether the models were accurate across different groups of patients — such as people of a certain race, gender or location — this practice wasn’t universal.

As more companies spin up these models, researchers cautioned that they need to be designed and implemented carefully, to ensure they don’t yield biased results.

An ongoing review of more than 200 Covid-19 risk-prediction models found that the majority had a high risk of bias, meaning the data they were trained on might not represent the real world.

“It’s that very careful and non-trivial process of defining exactly what we want the algorithm to be doing,” said Ziad Obermeyer, an associate professor of health policy and management at UC Berkeley who studies machine learning in healthcare. “I think an optimistic view is that the pandemic functions as a wakeup call for us to be a lot more careful in all of the ways we’ve talked about with how we build algorithms, how we evaluate them, and what we want them to do.”

Algorithms can’t be a proxy for tough decisions
Concerns about bias are not new to healthcare. In a paper published two years ago
, Obermeyer found a tool used by several hospitals to prioritize high-risk patients for additional care resources was biased against Black patients. By equating patients’ health needs with the cost of care, the developers built an algorithm that yielded discriminatory results.

More recently, a rule-based system developed by Stanford Medicine to determine who would get the Covid-19 vaccine first ended up prioritizing administrators and doctors who were seeing patients remotely, leaving out most of its 1,300 residents who had been working on the front lines. After an uproar, the university attributed the errors to a “complex algorithm,” though there was no machine learning involved.

Both examples highlight the importance of thinking through what exactly a model is designed to do — and not using them as a proxy to avoid the hard questions.

“The Stanford thing was another example of, we wanted the algorithm to do A, but we told it to do B. I think many health systems are doing something similar,” Obermeyer said. “You want to give the vaccine first to people who need it the most — how do we measure that?”

The urgency that the pandemic created was a complicating factor. With little information and few proven systems to work with in the beginning, health systems began throwing ideas at the wall to see what would stick. One expert questioned whether people might be abdicating some responsibility to these tools.

“Hard decisions are being made at hospitals all the time, especially in this space, but I’m worried about algorithms being the idea of where the responsibility gets shifted,” said Varoon Mathur, a technology fellow at NYU’s AI Now Institute, in a Zoom interview. “Tough decisions are going to be made, I don’t think there are any doubts about that. But what are those tough decisions? We don’t actually name what constraints we’re hitting up against.”

The wild, wild west
There currently is no gold standard for how hospitals should implement machine learning tools, and little regulatory oversight for models designed to support physicians’ decisions, resulting in an environment that Mathur described as the “wild, wild west.”

How these systems were used varied significantly from hospital to hospital.

Early in the pandemic, Cleveland Clinic used a model to predict which patients were most likely to test positive for the virus as tests were limited. Researchers developed it using health record data from more than 11,000 patients in Ohio and Florida, including 818 who tested positive for Covid-19. Later, they created a similar risk calculator to determine which patients were most likely to be hospitalized for Covid-19, which was used to prioritize which patients would be contacted daily as part of an at-home monitoring program.

Initially, anyone who tested positive for Covid-19 could enroll in this program, but as cases began to tick up, “you could see how quickly the nurses and care managers who were running this program were overwhelmed,” said Dr. Lara Jehi, Chief Research Information Officer at Cleveland Clinic. “When you had thousands of patients who tested positive, how could you contact all of them?”

While the tool included dozens of factors, such as a patient’s age, sex, BMI, zip code, and whether they smoked or got their flu shot, it’s also worth noting that demographic information significantly changed the results. For example, a patient’s race “far outweighs” any medical comorbidity when used by the tool to estimate hospitalization risk, according to a paper published in PLOS ONE. Cleveland Clinic recently made the model available to other health systems.

Others, like Stanford Health Care and 731-bed Santa Clara County Medical Center, started using Epic’s clinical deterioration index before developing their own Covid-specific risk models. At one point, Stanford developed its own risk-scoring tool, which was built using past data from other patients who had similar respiratory diseases, such as the flu, pneumonia, or acute respiratory distress syndrome. It was designed to predict which patients would need ventilation within two days, and someone’s risk of dying from the disease at the time of admission.

Stanford tested the model to see how it worked on retrospective data from 159 patients who were hospitalized with Covid-19, and cross-validated it with Salt Lake City-based Intermountain Healthcare, a process that took several months. Although this gave some additional assurance — Salt Lake City and Palo Alto have very different populations, smoking rates and demographics — it still wasn’t representative of some patient groups across the U.S.

“Ideally, what we would want to do is run the model specifically on different populations, like on African Americans or Hispanics and see how it performs to ensure it’s performing the same for different groups,” Tina Hernandez-Boussard, an associate professor of medicine, biomedical data science and surgery at Stanford, said in a February interview. “That’s something we’re actively seeking. Our numbers are still a little low to do that right now.”
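The subgroup check Hernandez-Boussard describes can be sketched in a few lines: score the same model’s predictions separately within each demographic group and compare. The records, group labels, scores and threshold below are entirely synthetic illustrations, not Stanford’s data or code:

```python
from collections import defaultdict

# Illustrative sketch of a subgroup-performance audit: evaluate the same
# risk model separately within each demographic group. All records below
# are synthetic placeholders invented for this example.
def accuracy_by_group(records, threshold=0.5):
    """records: iterable of (group, risk_score, actual_outcome) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, score, outcome in records:
        predicted = score >= threshold
        hits[group] += int(predicted == outcome)
        totals[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

records = [
    ("A", 0.9, True), ("A", 0.2, False), ("A", 0.7, True), ("A", 0.4, False),
    ("B", 0.8, True), ("B", 0.6, False), ("B", 0.3, False), ("B", 0.55, True),
]
print(accuracy_by_group(records))  # {'A': 1.0, 'B': 0.75}
```

A large gap between groups would flag the model for further review before deployment; in practice, as the quote notes, each subgroup also needs enough patients for the comparison to be meaningful.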

Stanford planned to implement the model earlier this year, but ultimately tabled it as Covid-19 cases fell.

‘The target is moving so rapidly’
Although large medical centers were more likely to have implemented automated systems, there were a few notable holdouts. For example, UC San Francisco Health, Duke Health and Dignity Health all said they opted not to use risk-prediction models or other machine learning tools in their pandemic responses.

“It’s pretty wild out there and I’ll be honest with you —  the dynamics are changing so rapidly,” said Dr. Erich Huang, chief officer for data quality at Duke Health and director of Duke Forge. “You might have a model that makes sense for the conditions of last month but do they make sense for the conditions of next month?”

That’s especially true as new variants spread across the U.S., and more adults are vaccinated, changing the nature and pace of the disease. But other, less obvious factors might also affect the data. For instance, Huang pointed to big differences in social mobility across the state of North Carolina, and whether people complied with local restrictions. Differing social and demographic factors across communities, such as where people work and whether they have health insurance, can also affect how a model performs.

“There are so many different axes of variability, I’d feel hard pressed to be comfortable using machine learning or AI at this point in time,” he said. “We need to be careful and understand the stakes of what we’re doing, especially in healthcare.”

Leadership at one of the largest public hospitals in the U.S., 600-bed LAC+USC Medical Center in Los Angeles, also steered away from using predictive models, even as it faced an alarming surge in cases over the winter months.

At most, the hospital used alerts to remind physicians to wear protective equipment when a patient has tested positive for Covid-19.

“My impression is that the industry is not anywhere near ready to deploy fully automated stuff just because of the risks involved,” said Dr. Phillip Gruber, LAC+USC’s chief medical information officer. “Our institution and a lot of institutions in our region are still focused on core competencies. We have to be good stewards of taxpayer dollars.”

When the data itself is biased
Developers have to contend with the fact that any model developed in healthcare will be biased, because the data itself is biased; how people access and interact with health systems in the U.S. is fundamentally unequal.

How that information is recorded in electronic health record systems (EHR) can also be a source of bias, NYU’s Mathur said. People don’t always self-report their race or ethnicity in a way that fits neatly within the parameters of an EHR. Not everyone trusts health systems, and many people struggle to even access care in the first place.

“Demographic variables are not going to be sharply nuanced. Even if they are… in my opinion, they’re not clean enough or good enough to be nuanced into a model,” Mathur said.

The information hospitals have had to work with during the pandemic is particularly messy. Differences in testing access and missing demographic data affect how resources are distributed and shape other responses to the pandemic.

“It’s very striking because everything we know about the pandemic is viewed through the lens of number of cases or number of deaths,” UC Berkeley’s Obermeyer said. “But all of that depends on access to testing.”

At the hospital level, internal data wouldn’t be enough to truly follow whether an algorithm to predict adverse events from Covid-19 was actually working. Developers would have to look at Social Security data on mortality, or check whether the patient went to another hospital, to track down what happened.

“What about the people a physician sends home — if they die and don’t come back?” he said.

Researchers at Mount Sinai Health System tested a machine learning tool to predict critical events in Covid-19 patients — such as dialysis, intubation or ICU admission — to ensure it worked across different patient demographics. But they still ran into their own limitations, even though the New York-based hospital system serves a diverse group of patients.

They tested how the model performed across Mount Sinai’s different hospitals. In some cases, when the model wasn’t very robust, it yielded different results, said Benjamin Glicksberg, an assistant professor of genetics and genomic sciences at Mount Sinai and a member of its Hasso Plattner Institute for Digital Health.

They also tested how it worked in different subgroups of patients to ensure it didn’t perform disproportionately better for patients from one demographic.
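The subgroup check described above can be illustrated with a short sketch: compute the same performance metric separately for each demographic group and compare. Everything here — the data, the group labels, the threshold — is hypothetical, not Mount Sinai’s actual method or data.

```python
from collections import defaultdict

def recall_by_group(records, threshold=0.5):
    """Per-group recall (sensitivity) for a risk model's scores.

    records: list of (group, true_label, risk_score) tuples.
    A large recall gap between groups is a red flag that the model
    performs disproportionately better for one demographic.
    """
    hits = defaultdict(int)       # true positives caught, per group
    positives = defaultdict(int)  # actual positives, per group
    for group, label, score in records:
        if label == 1:
            positives[group] += 1
            if score >= threshold:
                hits[group] += 1
    return {g: hits[g] / positives[g] for g in positives}

# Hypothetical toy data: (demographic group, had critical event, model score)
data = [
    ("A", 1, 0.9), ("A", 1, 0.8), ("A", 0, 0.2), ("A", 1, 0.4),
    ("B", 1, 0.3), ("B", 1, 0.7), ("B", 0, 0.6), ("B", 1, 0.2),
]
print(recall_by_group(data))  # group A: 2 of 3 events flagged; group B: 1 of 3
```

In practice a hospital would use a richer metric (such as AUC) and test for statistical significance, but the structure of the check — slice, score, compare — is the same.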

“If there’s a bias in the data going in, there’s almost certainly going to be a bias in the data coming out of it,” he said in a Zoom interview. “Unfortunately, I think it’s going to be a matter of having more information that can approximate these external factors that may drive these discrepancies. A lot of that is social determinants of health, which are not captured well in the EHR. That’s going to be critical for how we assess model fairness.”

Even after checking that a model yields fair and accurate results, the work isn’t done. Hospitals must keep validating models over time to ensure they’re still working as intended — especially in a situation as fast-moving as a pandemic.
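That kind of ongoing validation can be as simple as periodically recomputing a performance metric on newly labeled patients and comparing it against the baseline established at deployment. A minimal sketch, with all names, numbers and the tolerance purely illustrative:

```python
def check_model_drift(baseline_auc, recent_aucs, tolerance=0.05):
    """Flag a deployed model for re-review when recent performance
    drops more than `tolerance` below the validation baseline.

    baseline_auc: AUC measured during initial validation.
    recent_aucs: per-period AUCs computed on newly labeled patients.
    Returns the indices of the periods that breached the tolerance.
    """
    return [i for i, auc in enumerate(recent_aucs)
            if baseline_auc - auc > tolerance]

# Hypothetical monitoring run: performance slips as the case mix shifts
baseline = 0.82
weekly = [0.81, 0.80, 0.74, 0.71]
print(check_model_drift(baseline, weekly))  # weeks 2 and 3 breach tolerance
```

A breach wouldn’t automatically retire the model; it would trigger the kind of human review the clinicians quoted here describe.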

A bigger role for regulators
All of this is stirring up a broader discussion about how much of a role regulators should have in how decision-support systems are implemented.

Currently, the FDA does not require most software that provides diagnosis or treatment recommendations to clinicians to undergo review as a medical device. Even tools the agency has cleared lack critical information on how they perform across different patient demographics.

None of the models developed by the hospitals MedCity News surveyed had been cleared by the FDA, and most of the external tools they implemented hadn’t gone through any regulatory review either.

In January, the FDA shared an action plan for regulating AI as a medical device. Although most of the concrete plans were around how to regulate algorithms that adapt over time, the agency also indicated it was thinking about best practices, transparency, and methods to evaluate algorithms for bias and robustness.

More recently, the Federal Trade Commission warned that it could crack down on AI bias, citing a paper warning that AI could worsen existing healthcare disparities if bias is not addressed.

“My experience suggests that most models are put into practice with very little evidence of their effects on outcomes because they are presumed to work, or at least to be more efficient than other decision-making processes,” Kellie Owens, a researcher for Data & Society, a nonprofit that studies the social implications of technology, wrote in an email. “I think we still need to develop better ways to conduct algorithmic risk assessments in medicine. I’d like to see the FDA take a much larger role in regulating AI and machine learning models before their implementation.”

Developers should also ask themselves whether the communities they’re serving have a say in how the system is built, or whether it is needed in the first place. The majority of hospitals surveyed did not tell patients whether a model was used in their care or involve patients in the development process.

In some cases, the best option might be the simplest one: don’t build.

In the meantime, hospitals are left to sift through existing published data, preprints and vendor promises to decide on the best option. To date, Michigan Medicine’s paper is still the only one that has been published on Epic’s Deterioration Index.

Care teams at Michigan Medicine used Epic’s score to help rapid response teams decide which patients to check in on. But the health system was also looking at other options.

“The short game was that we had to go with the score we had,” Singh said. “The longer game was, Epic’s deterioration index is proprietary. That raises questions about what is in it.”