Kansas City, MO-based electronic health record (EHR) company Cerner is now a business unit within Oracle, the Austin, TX-based software behemoth. Oracle, which already sells some software to insurers, hospitals, and public health departments, called Cerner the company’s “anchor asset”, and hopes to use it to expand its healthcare presence. Oracle co-founder and board chair Larry Ellison unveiled lofty plans to create a national health records database, but he didn’t detail how the company would get access to health records from non-Cerner systems, as interoperability standards haven’t been fully implemented.
The Gist: In addition to the challenge of entering the complex EHR and healthcare data market, Oracle now faces the challenge of rebuilding Cerner’s growth, and its clients’ confidence. Cerner lags Epic in terms of hospital market share: Epic holds about a third of the US hospital market, compared to Cerner’s 24 percent. Epic gained an additional 74 hospitals last year, compared to Cerner’s five.
Anecdotally, we know of several long-term Cerner health system clients who are either in the middle of, or planning for, a transition to Epic, which is seen by health system leaders as the superior EMR option. (In the words of one executive, “No CEO ever got fired for choosing Epic.”) If a stronger Cerner product emerges as a result of the acquisition, it could help to stem this tide.
This week, retail giant Walmart announced a partnership with Epic, the country’s most widely-used electronic health record (EHR) system, as the technology platform to support its health and wellness businesses. Epic will first be installed in four Walmart Health Center clinics slated to open in Florida early next year.
The company currently operates 20 health centers in Georgia, Arkansas and the Chicago area, offering an expanded range of services including comprehensive primary care, behavioral health, dental, hearing and vision care, as well as labs and other diagnostics. Skeptics have noted that Walmart has fallen behind in its ambitious plans to broadly roll out the expanded clinics, the first of which opened in an Atlanta exurb in 2019.
The partnership with Epic, which is used by more than 2,000 hospitals nationwide, signals that Walmart is serious about expanding its role as a healthcare provider—and sees opportunity in being able to share information and connect with health systems and doctors’ offices.
However, the vision of a “unified health record across care settings, geographies and multiple sources of health data” outlined by Walmart’s EVP of health and wellness may be more difficult to achieve than expected, if the experience of health systems, which have been stymied by upgrades and version mismatches in their quest for a unified EHR, is any indication.
Welcome, Walmart, to the wonderful world of EHRs—if you thought healthcare was complicated, just wait until you begin your first Epic install!
Rochester, Minn.-based Mayo Clinic was named the best smart hospital in the world in 2021 by Newsweek.
For the list, the magazine partnered with consumer research company Statista to find the 250 hospitals that best equip themselves for success with technology. Newsweek said the hospitals on the list are the ones to watch as they “lead in their use of [artificial intelligence], robotic surgery, digital imaging, telemedicine, smart buildings, information technology infrastructure and EHRs.”
The ranking, published June 9, is based on a survey that included recommendations from national and international sources in five categories: digital surgery, digital imaging, AI, telehealth and EHRs.
Of 26 health systems surveyed by MedCity News, nearly half used automated tools to respond to the Covid-19 pandemic, but none of those tools had gone through regulatory review. Even as some hospitals continued using these algorithms, experts cautioned against their use in high-stakes decisions.
A year ago, Michigan Medicine faced a dire situation. In March of 2020, the health system predicted it would have three times as many patients as its 1,000-bed capacity — and that was the best-case scenario. Hospital leadership prepared for this grim prediction by opening a field hospital in a nearby indoor track facility, where patients could go if they were stable, but still needed hospital care. But they faced another predicament: How would they decide who to send there?
Two weeks before the field hospital was set to open, Michigan Medicine decided to use a risk model developed by Epic Systems to flag patients at risk of deterioration. Patients were given a score of 0 to 100, intended to help care teams determine if they might need an ICU bed in the near future. Although the model wasn’t developed specifically for Covid-19 patients, it was the best option available at the time, said Dr. Karandeep Singh, an assistant professor of learning health sciences at the University of Michigan and chair of Michigan Medicine’s clinical intelligence committee. But there was no peer-reviewed research to show how well it actually worked.
Researchers tested it on over 300 Covid-19 patients between March and May. They were looking for scores that would indicate when patients would need to go to the ICU, and if there was a point where patients almost certainly wouldn’t need intensive care.
“We did find a threshold where if you remained below that threshold, 90% of patients wouldn’t need to go to the ICU,” Singh said. “Is that enough to make a decision on? We didn’t think so.”
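The analysis Singh describes amounts to searching for the highest score cutoff such that at least 90% of patients below it never needed intensive care. The sketch below is purely illustrative, with an invented toy cohort; the actual Epic index internals and Michigan Medicine’s evaluation are not public in this form.

```python
# Toy illustration of finding a "safe" score threshold: the highest cutoff
# such that at least 90% of patients scoring below it never needed the ICU.
# Scores run 0-100, mirroring the deterioration index scale.

def find_safe_threshold(patients, target=0.90):
    """patients: list of (score, needed_icu) tuples."""
    best = None
    for threshold in range(1, 101):
        below = [needed for score, needed in patients if score < threshold]
        if not below:
            continue
        frac_safe = sum(1 for needed in below if not needed) / len(below)
        if frac_safe >= target:
            best = threshold  # keep raising the cutoff while the guarantee holds
    return best

# Invented toy cohort: low scorers mostly avoided the ICU, high scorers did not.
cohort = [(10, False), (15, False), (20, False), (25, False),
          (30, False), (35, True), (60, True), (75, True), (90, True)]
print(find_safe_threshold(cohort))  # → 35
```

Note that a threshold like this only identifies patients who are probably safe; as Singh says, Michigan Medicine judged that insufficient grounds for routing decisions on its own.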
But if the number of patients were to far exceed the health system’s capacity, it would be helpful to have some way to assist with those decisions.
“It was something that we definitely thought about implementing if that day were to come,” he said in a February interview.
Thankfully, that day never came.
The survey

Michigan Medicine is one of 80 hospitals contacted by MedCity News between January and April in a survey of decision-support systems implemented during the pandemic. Of the 26 respondents, 12 used machine learning tools or automated decision systems as part of their pandemic response. Larger hospitals and academic medical centers used them more frequently.
Faced with scarcities in testing, masks, hospital beds and vaccines, several of the hospitals turned to models as they prepared for difficult decisions. The deterioration index created by Epic was one of the most widely implemented — more than 100 hospitals are currently using it — but in many cases, hospitals also formulated their own algorithms.
They built models to predict which patients were most likely to test positive when shortages of swabs and reagents backlogged tests early in the pandemic. Others developed risk-scoring tools to help determine who should be contacted first for monoclonal antibody treatment, or which Covid patients should be enrolled in at-home monitoring programs.
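Many of these triage tools were, in essence, weighted checklists. As a hypothetical illustration only (the factors and weights below are invented, not any hospital’s actual criteria), a simple outreach-prioritization score might look like:

```python
# Hypothetical weighted-checklist risk score for prioritizing outreach.
# The factors and weights are invented for illustration only.
WEIGHTS = {"age_over_65": 3, "immunocompromised": 4, "diabetes": 2, "obesity": 1}

def risk_score(patient):
    """Sum the weights of the risk factors flagged for this patient."""
    return sum(w for factor, w in WEIGHTS.items() if patient.get(factor))

patients = [
    {"name": "p1", "age_over_65": True, "diabetes": True},           # score 5
    {"name": "p2", "obesity": True},                                 # score 1
    {"name": "p3", "immunocompromised": True, "age_over_65": True},  # score 7
]
# Contact the highest-risk patients first.
ranked = sorted(patients, key=risk_score, reverse=True)
print([p["name"] for p in ranked])  # → ['p3', 'p1', 'p2']
```

The simplicity is the point: a rule-based score like this is easy to audit, but as the Stanford vaccine episode below shows, even rule-based systems can encode the wrong priorities.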
MedCity News also interviewed hospitals on their processes for evaluating software tools to ensure they are accurate and unbiased. Currently, the FDA does not require some clinical decision-support systems to be cleared as medical devices, leaving the developers of these tools and the hospitals that implement them responsible for vetting them.
Among the hospitals that published efficacy data, some of the models were only evaluated through retrospective studies. This can pose a challenge in figuring out how clinicians actually use them in practice, and how well they work in real time. And while some of the hospitals tested whether the models were accurate across different groups of patients — such as people of a certain race, gender or location — this practice wasn’t universal.
As more companies spin up these models, researchers cautioned that they need to be designed and implemented carefully, to ensure they don’t yield biased results.
An ongoing review of more than 200 Covid-19 risk-prediction models found that the majority had a high risk of bias, meaning the data they were trained on might not represent the real world.
“It’s that very careful and non-trivial process of defining exactly what we want the algorithm to be doing,” said Ziad Obermeyer, an associate professor of health policy and management at UC Berkeley who studies machine learning in healthcare. “I think an optimistic view is that the pandemic functions as a wakeup call for us to be a lot more careful in all of the ways we’ve talked about with how we build algorithms, how we evaluate them, and what we want them to do.”
Algorithms can’t be a proxy for tough decisions

Concerns about bias are not new to healthcare. In a paper published two years ago, Obermeyer found a tool used by several hospitals to prioritize high-risk patients for additional care resources was biased against Black patients. By equating patients’ health needs with the cost of care, the developers built an algorithm that yielded discriminatory results.
More recently, a rule-based system developed by Stanford Medicine to determine who would get the Covid-19 vaccine first ended up prioritizing administrators and doctors who were seeing patients remotely, leaving out most of its 1,300 residents who had been working on the front lines. After an uproar, the university attributed the errors to a “complex algorithm,” though there was no machine learning involved.
Both examples highlight the importance of thinking through what exactly a model is designed to do — and not using them as a proxy to avoid the hard questions.
“The Stanford thing was another example of, we wanted the algorithm to do A, but we told it to do B. I think many health systems are doing something similar,” Obermeyer said. “You want to give the vaccine first to people who need it the most — how do we measure that?”
The urgency that the pandemic created was a complicating factor. With little information and few proven systems to work with in the beginning, health systems began throwing ideas at the wall to see what worked. One expert questioned whether people might be abdicating some responsibility to these tools.
“Hard decisions are being made at hospitals all the time, especially in this space, but I’m worried about algorithms being the idea of where the responsibility gets shifted,” said Varoon Mathur, a technology fellow at NYU’s AI Now Institute, in a Zoom interview. “Tough decisions are going to be made, I don’t think there are any doubts about that. But what are those tough decisions? We don’t actually name what constraints we’re hitting up against.”
The wild, wild west

There currently is no gold standard for how hospitals should implement machine learning tools, and little regulatory oversight for models designed to support physicians’ decisions, resulting in an environment that Mathur described as the “wild, wild west.”
How these systems were used varied significantly from hospital to hospital.
Early in the pandemic, Cleveland Clinic used a model to predict which patients were most likely to test positive for the virus as tests were limited. Researchers developed it using health record data from more than 11,000 patients in Ohio and Florida, including 818 who tested positive for Covid-19. Later, they created a similar risk calculator to determine which patients were most likely to be hospitalized for Covid-19, which was used to prioritize which patients would be contacted daily as part of an at-home monitoring program.
Initially, anyone who tested positive for Covid-19 could enroll in this program, but as cases began to tick up, “you could see how quickly the nurses and care managers who were running this program were overwhelmed,” said Dr. Lara Jehi, Chief Research Information Officer at Cleveland Clinic. “When you had thousands of patients who tested positive, how could you contact all of them?”
While the tool included dozens of factors, such as a patient’s age, sex, BMI, zip code, and whether they smoked or got their flu shot, demographic information significantly changed the results. For example, a patient’s race “far outweighs” any medical comorbidity when used by the tool to estimate hospitalization risk, according to a paper published in PLOS One. Cleveland Clinic recently made the model available to other health systems.
Others, like Stanford Health Care and 731-bed Santa Clara County Medical Center, started using Epic’s clinical deterioration index before developing their own Covid-specific risk models. At one point, Stanford developed its own risk-scoring tool, which was built using past data from other patients who had similar respiratory diseases, such as the flu, pneumonia, or acute respiratory distress syndrome. It was designed to predict which patients would need ventilation within two days, and someone’s risk of dying from the disease at the time of admission.
Stanford tested the model to see how it worked on retrospective data from 159 patients who were hospitalized with Covid-19, and cross-validated it with Salt Lake City-based Intermountain Healthcare, a process that took several months. Although this gave some additional assurance — Salt Lake City and Palo Alto have very different populations, smoking rates and demographics — it still wasn’t representative of some patient groups across the U.S.
“Ideally, what we would want to do is run the model specifically on different populations, like on African Americans or Hispanics and see how it performs to ensure it’s performing the same for different groups,” Tina Hernandez-Boussard, an associate professor of medicine, biomedical data science and surgery at Stanford, said in a February interview. “That’s something we’re actively seeking. Our numbers are still a little low to do that right now.”
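The subgroup check Hernandez-Boussard describes amounts to computing the same performance metric separately for each group and comparing. A minimal sketch, with invented group labels and toy predictions:

```python
# Minimal subgroup-performance check: compute accuracy per demographic group
# and compare. Group labels and predictions here are invented toy data.
from collections import defaultdict

def accuracy_by_group(records):
    """records: list of (group, predicted, actual) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += (predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

records = [("A", 1, 1), ("A", 0, 0), ("A", 1, 0), ("A", 0, 0),
           ("B", 1, 1), ("B", 1, 0), ("B", 0, 1), ("B", 0, 0)]
print(accuracy_by_group(records))  # → {'A': 0.75, 'B': 0.5}
```

A gap of that size between groups would be a signal to hold a model back for closer review — and, as Hernandez-Boussard notes, the check requires enough patients in each subgroup to be meaningful at all.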
Stanford planned to implement the model earlier this year, but ultimately tabled it as Covid-19 cases fell.
‘The target is moving so rapidly’

Although large medical centers were more likely to have implemented automated systems, there were a few notable holdouts. For example, UC San Francisco Health, Duke Health and Dignity Health all said they opted not to use risk-prediction models or other machine learning tools in their pandemic responses.
“It’s pretty wild out there and I’ll be honest with you — the dynamics are changing so rapidly,” said Dr. Erich Huang, chief officer for data quality at Duke Health and director of Duke Forge. “You might have a model that makes sense for the conditions of last month but do they make sense for the conditions of next month?”
That’s especially true as new variants spread across the U.S., and more adults are vaccinated, changing the nature and pace of the disease. But other, less obvious factors might also affect the data. For instance, Huang pointed to big differences in social mobility across the state of North Carolina, and whether people complied with local restrictions. Differing social and demographic factors across communities, such as where people work and whether they have health insurance, can also affect how a model performs.
“There are so many different axes of variability, I’d feel hard pressed to be comfortable using machine learning or AI at this point in time,” he said. “We need to be careful and understand the stakes of what we’re doing, especially in healthcare.”
Leadership at one of the largest public hospitals in the U.S., 600-bed LAC+USC Medical Center in Los Angeles, also steered away from using predictive models, even as it faced an alarming surge in cases over the winter months.
At most, the hospital used alerts to remind physicians to wear protective equipment when a patient has tested positive for Covid-19.
“My impression is that the industry is not anywhere near ready to deploy fully automated stuff just because of the risks involved,” said Dr. Phillip Gruber, LAC+USC’s chief medical information officer. “Our institution and a lot of institutions in our region are still focused on core competencies. We have to be good stewards of taxpayer dollars.”
When the data itself is biased

Developers have to contend with the fact that any model developed in healthcare will be biased, because the data itself is biased; how people access and interact with health systems in the U.S. is fundamentally unequal.
How that information is recorded in electronic health record (EHR) systems can also be a source of bias, NYU’s Mathur said. People don’t always self-report their race or ethnicity in a way that fits neatly within the parameters of an EHR. Not everyone trusts health systems, and many people struggle to even access care in the first place.
“Demographic variables are not going to be sharply nuanced. Even if they are… in my opinion, they’re not clean enough or good enough to be nuanced into a model,” Mathur said.
The information hospitals have had to work with during the pandemic is particularly messy. Differences in testing access and missing demographic data also affect how resources are distributed and other responses to the pandemic.
“It’s very striking because everything we know about the pandemic is viewed through the lens of number of cases or number of deaths,” UC Berkeley’s Obermeyer said. “But all of that depends on access to testing.”
At the hospital level, internal data wouldn’t be enough to truly follow whether an algorithm to predict adverse events from Covid-19 was actually working. Developers would have to look at Social Security data on mortality, or whether the patient went to another hospital, to track down what happened.
“What about the people a physician sends home — if they die and don’t come back?” he said.
Researchers at Mount Sinai Health System tested a machine learning tool to predict critical events in Covid-19 patients — such as dialysis, intubation or ICU admission — to ensure it worked across different patient demographics. But they still ran into their own limitations, even though the New York-based hospital system serves a diverse group of patients.
They tested how the model performed across Mount Sinai’s different hospitals. In some cases, when the model wasn’t very robust, it yielded different results, said Benjamin Glicksberg, an assistant professor of genetics and genomic sciences at Mount Sinai and a member of its Hasso Plattner Institute for Digital Health.
They also tested how it worked in different subgroups of patients to ensure it didn’t perform disproportionately better for patients from one demographic.
“If there’s a bias in the data going in, there’s almost certainly going to be a bias in the data coming out of it,” he said in a Zoom interview. “Unfortunately, I think it’s going to be a matter of having more information that can approximate these external factors that may drive these discrepancies. A lot of that is social determinants of health, which are not captured well in the EHR. That’s going to be critical for how we assess model fairness.”
Even after checking whether a model yields fair and accurate results, the work isn’t done. Hospitals must validate models continuously to ensure they’re still working as intended — especially in a situation as fast-moving as a pandemic.
A bigger role for regulators

All of this is stirring up a broader discussion about how much of a role regulators should have in how decision-support systems are implemented.
Of the hospitals surveyed by MedCity News, none of the models they developed had been cleared by the FDA, and most of the external tools they implemented also hadn’t gone through any regulatory review.
“My experience suggests that most models are put into practice with very little evidence of their effects on outcomes because they are presumed to work, or at least to be more efficient than other decision-making processes,” Kellie Owens, a researcher for Data & Society, a nonprofit that studies the social implications of technology, wrote in an email. “I think we still need to develop better ways to conduct algorithmic risk assessments in medicine. I’d like to see the FDA take a much larger role in regulating AI and machine learning models before their implementation.”
Developers should also ask themselves if the communities they’re serving have a say in how the system is built, or whether it is needed in the first place. The majority of hospitals surveyed did not share with patients if a model was used in their care or involve patients in the development process.
In some cases, the best option might be the simplest one: don’t build.
In the meantime, hospitals are left to sift through existing published data, preprints and vendor promises to decide on the best option. To date, Michigan Medicine’s paper is still the only one that has been published on Epic’s Deterioration Index.
Care teams there used Epic’s score as a support tool for its rapid response teams to check in on patients. But the health system was also looking at other options.
“The short game was that we had to go with the score we had,” Singh said. “The longer game was, Epic’s deterioration index is proprietary. That raises questions about what is in it.”
In early 2013, Hoag Memorial Hospital Presbyterian in Orange County, California, joined with St. Joseph Health, a local Catholic hospital chain, amid enthusiastic promises that their affiliation would broaden access to care and improve the health of residents across the community.
Eight years later, Hoag says this vision of achieving “population health” is dead, and it wants out. It is embroiled in a legal battle for independence from Providence, a Catholic health system with 51 hospitals across seven states, which absorbed St. Joseph in 2016, bringing Hoag along with it.
In a lawsuit filed in Orange County Superior Court last May, Hoag argues that remaining a “captive affiliate” of the nation’s 10th-largest health system, headquartered nearly 1,200 miles away in Washington state, constrains its ability to meet the needs of the local population.
Hoag doctors say that Providence’s drive to standardize treatment decisions across its chain — largely through a shared Epic electronic records system — often conflicts with their own judgment of best medical practices. And they recoil against restrictions on reproductive care they say Providence illegally imposes on them through its adherence to the Catholic health directives established by the United States Conference of Catholic Bishops.
“Their large widespread system is very different than the laser focus Hoag has on taking care of its community,” said Hoag CEO Robert Braithwaite. “When Hoag needed speed and agility, we got inadequate responses or policies that were just wrong for us. We found ourselves frustrated with a big health system that had a generic approach to health care.”
Providence insists it wants to stay with Hoag, a financial powerhouse — even as the two sides engage in secret settlement talks that could end the marriage.
“We believe we are better together,” said Erik Wexler, president of Providence South, which includes the group’s operations in California, Texas and New Mexico. “The best way to do that is to collaborate.” He cited joint investments in Hoag Orthopedic Institute and in Be Well OC, a kind of mental health collaborative, as fruits of the affiliation.
“If we are separate,” Wexler added, “there is a chance we may begin to cannibalize each other and drive the cost of care up.”
Research over the past several years, however, has shown that it is the consolidation of hospitals into fewer and larger groups, with greater bargaining clout, that tends to raise medical prices — often with little improvement in the quality of care.
“Mergers are a self-centered pursuit of stability by hospitals and hospital systems that hope to get so big that they can survive the anarchy of U.S. health care,” said Alan Sager, a professor at Boston University’s School of Public Health.
Wexler argued that price increases linked to consolidation are less of a worry in Orange County, geographically small but densely populated with 3.2 million residents and 28 acute care hospitals. Given the proximity of so many hospitals, Wexler said, counterproductive duplication of medical services is more of a concern.
Unlike many local community hospitals that seek larger partners to survive, Hoag, one of Orange County’s premier medical institutions, is financially robust and perfectly able to stand on its own. It has the advantage of operating in one of Orange County’s most affluent areas, with two acute care hospitals and an orthopedic specialty hospital in Newport Beach and Irvine. It is the beneficiary of numerous wealthy donors, including bond market billionaire Bill Gross and thriller novelist Dean Koontz.
In 2020, Hoag’s net assets, essentially its net worth, stood at about $3.3 billion — nearly 20% of the total for all Providence-affiliated facilities, even though Hoag has only three of the group’s 51 hospitals. Hoag generated operating income of $38 million last year, while Providence posted a $306 million operating loss.
But Providence is hardly a financial weakling. It is sitting on a mountain of unrestricted cash and investments worth $15.3 billion as of Dec. 31. And despite its hefty reserves, it received $1.1 billion in coronavirus relief grants last year under the federal CARES Act, and millions more from the Federal Emergency Management Agency.
Providence does not own Hoag, since no money changed hands and their assets were not commingled. But Providence is able to keep Hoag from walking away because it has a majority on the governing body that was set up to oversee the original affiliation with St. Joseph.
Hoag executives also express frustration at what they describe as efforts by Providence to interfere with their financial, labor and supply decisions.
Providence, in turn, worries that “if Hoag disaffiliates with Providence, it has the potential to impact our credit rating,” Wexler said.
Despite its insistence on the value of the affiliation, Providence officials are said to be willing to end the affiliation in exchange for payment of an undisclosed amount that Hoag considers unwarranted. Wexler and Hoag executives declined to comment on their discussions. A trial start date has not been set, but on April 26 the court will hear a motion from Hoag to expedite it.
While its financial fortitude distinguishes it from many other community hospitals tied to larger partners, Hoag’s experience with Providence is hardly uncommon amid widespread consolidation in the hospital industry and the growing influence of Catholic health care in the U.S.
“The bigger your parent organization becomes, the smaller your voice is within the system, and that’s part of what Hoag has been complaining about,” said Lois Uttley, director of the women’s health program at Community Catalyst, a Boston-based patient advocacy group that monitors hospital mergers.
“Compounding the problem is the fact that the system in this case is Catholic-run, because then, in addition to having an out-of-town system headquarters calling the shots, you also have to contend with governance from Catholic bishops,” Uttley said. “So you have two bosses, in a sense.”
Hoag is not the only hospital seeking to flee this dynamic. Last year, for example, Virginia Mason Memorial hospital in Yakima, Washington, said it would separate from its parent, Seattle-based Virginia Mason Health System, to avoid a pending merger with CHI Franciscan, part of the Catholic hospital giant CommonSpirit Health.
Mergers and acquisitions have led to the increasing dominance of mega hospital chains in U.S. health care over the past several years. From 2013 to 2018, the revenue of the 10 largest health systems grew 82%, compared with 45% for all other hospital groups, according to a recent study by Deloitte, the consulting and auditing firm.
Researchers expect the trend to accelerate as large health systems swallow smaller facilities economically weakened by the pandemic, and a growing trend toward outpatient care reduces demand for hospital beds.
Four of the 10 largest U.S. hospital systems are Catholic, including Chicago-based CommonSpirit Health, St. Louis-based Ascension, Livonia, Michigan-based Trinity Health and Providence. A study by Community Catalyst found that 1 in 6 acute care hospital beds are in Catholic facilities, and that 52 hospitals operating under Catholic restrictions were the sole acute care facilities in their regions last year, up from 30 in 2013.
“We need to make this a national conversation,” said Dr. Jeffrey Illeck, a Hoag OB-GYN.
He was among a group of Hoag OB-GYNs who signed a letter to then-California Attorney General Xavier Becerra in October, alleging that Providence frequently declined to authorize contraceptive treatments, such as intrauterine devices and tubal ligations — in breach of the conditions imposed by Becerra’s predecessor, Kamala Harris, when she approved the original affiliation with St. Joseph in 2013.
Wexler said he is confident the attorney general’s probe will provide “clarity that Providence has done nothing wrong.”
A particularly bitter disagreement between the two sides concerns a rupture last year within St. Joseph Heritage Healthcare, a physician group belonging to Providence that included both St. Joseph and Hoag doctors. In November, the group notified thousands of patients that their Hoag specialists were no longer part of the network and that they needed to choose new doctors.
Wexler said that was the inevitable result of a decision by the Hoag physicians to negotiate separate HMO contracts, an assertion Braithwaite contested. The move disrupted patient care just as the winter Covid surge was gaining momentum, he said.
Perhaps the biggest frustration for most Hoag administrators and physicians is Providence’s desire to standardize care across all 51 hospitals through their shared Epic electronic records system.
Hoag doctors say Providence controls the contents of the Epic system and that the care protocols in it, often driven by cost considerations, frequently collide with their own clinical decisions. Any changes must be debated among all the hospitals in the system and adopted by consensus — a laborious undertaking.
Dr. Richard Haskell, a cardiologist at Hoag, recalled a dispute over intravenous Tylenol, which Hoag’s orthopedists prefer because they say it works well and furthers a concerted effort to reduce opioid addiction. Providence took IV Tylenol off its list of accepted drugs, and the Hoag orthopedists “were very upset,” Haskell said.
They eventually got it back on that list, but with the condition that they could order it only one dose at a time. That meant nurses had to call the doctor every four hours for a new order. “Doctors probably felt, ‘Screw it, I don’t want to get woken up every four hours,’ so they probably just gave them narcotics,” Haskell said.
He said that before agreeing to adopt Providence’s Epic system, Hoag had received written assurances it could make changes that included its preferred treatment choices for various conditions. But it quickly became clear that was not going to happen, he said.
“We couldn’t make any changes at all, so we were stuck with their system,” Haskell said. “I don’t want to be in a system bogged down by bureaucracy that requires 51 hospitals to vote on it.”
Wexler said Hoag understood exactly what it had signed up for. “They knew full well that there would be a collaborative approach across all of Providence, including Hoag, to make decisions on what standardizations would happen across the entire system,” he said. “It is not easy if one hospital wants to create its own specific pathway.”
Despite Hoag’s concerns about lesser standards of care, Braithwaite could not cite an example of an adverse outcome that had resulted from it. And Hoag’s strong reputation seems untarnished, as reflected in the high rankings and awards it continues to garner — and tout on its website.
Still, the affiliation’s days seem numbered. Hoag is no longer on the Providence website or in its marketing materials, and in many cases — such as the St. Joseph Heritage schism — the two groups are already going their separate ways.
“They are certainly acting like we are competitors, and I assume that means they know the disaffiliation is imminent,” Braithwaite said.
Wexler, while reiterating that Providence wants to maintain the current arrangement, was nonetheless able to imagine a different outcome: “What we would do post-affiliation,” he said, “is to continue to look for opportunities to collaborate.”
As vaccine eligibility guidelines have expanded to include adults over 65, we’ve heard from several friends and acquaintances looking for the inside scoop on getting a place in line. They’ve heard that their local health system is taking appointments, but only for established patients—do we know someone at the local system who could help them (or their mother, or their aunt with Stage IV cancer) get the shot?
One acquaintance was livid that his local hospital was prioritizing established patients: “They’re just rewarding people who have already paid them money. Is that fair?” It’s likely that system was making decisions based not on prior business relationships, but rather on logistics. If patients are already “in the system”, they can be contacted and scheduled through the patient portal, fill out information online, and have their doses tracked in the EMR.
As health systems have been thrust into leading frontline vaccine distribution, some have recognized an unprecedented opportunity to earn loyalty by connecting current and potential patients with the vaccine.
Outreach must provide clear information about vaccine access and how eligibility decisions are made (consider the difference in message between “we’re offering vaccines to current patients only” and “because established patients can be quickly scheduled and monitored, we are beginning with this group, and plan to expand quickly”).
Systems’ ultimate goal should be getting vaccines to as many people as possible, as fast as possible, given supply and resource constraints.
As the industry braces for the next phase of COVID-19, experts at Kaiser Permanente are sharing several key capabilities that will be critical to prepare for another potential surge.
In an article for NEJM Catalyst, leaders at the healthcare giant highlight eight focus areas health systems must consider as the country reopens and offer a look at how Kaiser Permanente tackled those challenges.
A critical starting point, they write, is a robust testing program that feeds into essential contact tracing and monitoring of any spikes in cases. As of May 18, Kaiser Permanente had performed more than 233,706 diagnostic tests and was also tracking the spread telephonically through its call centers, as well as through secure emails between patients and doctors.
The Oakland, California-based system is also mulling greater use of patient symptom surveying and harnessing data within electronic health records to further enhance the testing effort, according to the article.
Stephen Parodi, M.D., executive vice president at The Permanente Federation and Kaiser Permanente’s national infectious disease leader, told Fierce Healthcare that the goal of the paper is to spotlight how crucial it is to consider all fronts in preventing the spread of COVID-19.
“I think one of the biggest takeaways here is that we need a complete and comprehensive approach to suppress the virus,” Parodi, one of the report’s lead authors, said.
Bechara Choucair, M.D., senior vice president and chief health officer at Kaiser Permanente, is also one of the paper’s lead authors.
The other capabilities included in the report are:
Enhanced contact tracing and isolation efforts
Robust community health efforts
Home health care options
Ability to maintain surge capacity
Targeted and safe strategies to reopen
Ongoing research on the virus
Effective communication with patients
Parodi said two of the biggest challenges Kaiser Permanente faced in working through this checklist of capabilities were a lack of supplies and the need to work alongside other organizations.
He said that didn’t only mean strengthening and reinforcing existing relationships with community groups but also reaching out to other health systems and providers to coordinate plans and work together.
It also required coordination between officials and policymakers at all levels of government, he said.
“Having the leaders at individual medical centers working with the county level folks is really key to making sure that we’re aware of each other’s work and response, then actually syncing them together,” Parodi said.
Parodi also said that Kaiser Permanente went “wholesale” into using telehealth during the initial surge of COVID-19 cases. Now the system and its physicians will work together to determine where virtual care is most appropriate and effective, since the interest in and growth of those services isn’t going away anytime soon.
He added that moving into the reopening phase poses its own set of challenges, because it’s an “unprecedented” situation to navigate.
Kaiser Permanente is aiming to center shared decision-making and patient education in the response to reopening, he said, while also providing guidance to support providers. That way, decisions are ultimately made by the doctor and patient, but they’re informed and guided decisions, he said.
“There is no set playbook for how to do it right,” Parodi said.
The biggest problem with electronic syndromic surveillance reporting isn’t that hospitals lack the capacity to send data — it’s that public health agencies lack the ability to receive it, according to a new report published in the Journal of the American Medical Informatics Association.
More than four in 10 U.S. hospitals say their local, state and federal public health agencies are unable to receive data electronically, reflecting a decade of private-sector investment in health IT infrastructure without a concomitant investment in its public-sector counterparts, researchers found.
Hospitals in regions forecast to be some of the hardest hit from COVID-19 were more likely to say public health agencies were unable to receive health data electronically, implying areas of highest need were some of the least prepared to mount a coordinated, data-driven response going into the pandemic.
Effective pandemic response requires real-time, accurate data sharing between providers and public health agencies, allowing the government to track outbreaks and allocate resources as needed.
A lack of nationwide, interoperable reporting infrastructure has been one of the major criticisms of the Trump administration’s handling of the pandemic, which has infected almost 1.7 million and killed 99,000 people in the U.S. as of Wednesday.
CMS requires hospitals be able to electronically send and receive health information, including lab results and syndromic surveillance data, to and from public health agencies like their state’s department of health. For more than a decade, providers have funneled significant resources into their IT infrastructure due to a flurry of federal incentive programs, though EHR implementation remains piecemeal across the U.S. due to cost and other barriers.
The JAMIA study, one of the first looking at the state of health data reporting, analyzed 2018 American Hospital Association data to identify hospital-reported barriers to surveillance data reporting, and Harvard Global Health Institute data on the coronavirus pandemic’s projected impact on hospital capacity at the hospital referral region (HRR) level. Researchers assumed a 40% population infection rate over 12 months.
The group found 31 high-need HRRs, those in the top quartile of projected beds needed for COVID-19 patients, with more than half of the hospitals in the region saying the relevant public health agency couldn’t electronically receive data.
That suggests areas more likely to be overwhelmed by the pandemic had some of the least interoperable data-sharing capabilities going into it, hamstringing outbreak response.
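The study’s screening logic can be sketched in a few lines. The code below is an illustrative reconstruction under stated assumptions, not the researchers’ actual code or data: the HRR names, bed-need figures, and reporting shares are hypothetical, and the 75th-percentile cutoff is a simplified stand-in for how the top quartile might be drawn.

```python
# Illustrative sketch of flagging "high-need, low-interoperability" HRRs:
# top quartile of projected COVID-19 bed need, where more than half of
# hospitals report their public health agency cannot receive data
# electronically. All figures below are hypothetical.

# (hrr_name, projected_beds_needed, share_reporting_agency_cannot_receive)
hrrs = [
    ("HRR-A", 4200, 0.62),
    ("HRR-B", 1100, 0.30),
    ("HRR-C", 3900, 0.55),
    ("HRR-D", 800, 0.70),
    ("HRR-E", 2500, 0.40),
]

# Simplified top-quartile threshold on projected bed need.
beds = sorted(need for _, need, _ in hrrs)
cutoff = beds[int(0.75 * len(beds))]

high_need_low_interop = [
    name for name, need, share in hrrs
    if need >= cutoff and share > 0.5
]
print(high_need_low_interop)  # → ['HRR-A', 'HRR-C']
```

With these toy numbers, only the two highest-need regions clear both the bed-need cutoff and the 50-percent reporting threshold, mirroring the study’s finding of 31 such HRRs nationwide.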
Researchers found the most common barrier to data-sharing nationwide, reported by 41% of hospitals, was that public health agencies didn’t have the capacity to receive data electronically.
The next most common, reported by 32% of hospitals, was interface-related issues, such as costs or implementation complexity; followed by difficulty extracting data from the EHR (14% of hospitals reporting), different data standards (also 14%), hospitals lacking the capacity to send data (8%) and hospitals being unsure what public health agencies to send the data to (3%).
Researchers also found significant state variance in hospitals saying public health agencies couldn’t receive needed data electronically, running the gamut from 83% of hospitals saying so in Hawaii and Rhode Island to 40% in New Jersey and Virginia to none in Delaware.
Geographic variation is likely due to different funding priorities in different places, as some agencies may only be able to receive specific data elements or interface with a select number of EHRs. This spotty IT implementation results in a patchwork picture of disease progression across the U.S., though the Centers for Disease Control and Prevention is working to automate the COVID-19 reporting process.
The study does have some significant limitations. It’s a relatively one-sided portrayal of the issue, as researchers did not have access to data or survey results from public health agencies. And, since AHA survey results were from two years ago, the EHR landscape could have shifted since 2018.
However, researchers called upon policymakers to build up public health agencies’ IT capabilities, especially as states begin to reopen despite an increasingly likely resurgence of the virus in the fall.
“Policymakers should prioritize investment in public health IT infrastructure along with broader health system information technology for both long-term COVID-19 monitoring as well as future pandemic preparedness,” authors A Jay Holmgren, a doctoral candidate at Harvard Business School; Nate Apathy, a doctoral candidate at Indiana University’s Richard M. Fairbanks School of Public Health; and Julia Adler-Milstein, a professor in the University of California, San Francisco Department of Medicine, wrote.
Tower Health reported higher revenue in the nine months ended March 31, but the West Reading, Pa.-based health system ended the period with an operating loss, according to recently released unaudited financial documents.
The health system reported revenue of $1.6 billion in the first three quarters of fiscal year 2020, up 13.8 percent from $1.4 billion a year earlier. Higher expenses and reductions in patient volume in the most recent quarter due to the COVID-19 pandemic hindered further revenue growth.
Tower Health said expenses climbed 19.2 percent year over year to $1.7 billion in the nine months ended March 31. In the first three quarters of fiscal 2020, the health system recorded $27.1 million in one-time expenses — $7 million related to Epic implementation costs at newly acquired hospitals, with the remainder tied to other one-time transaction costs.
The health system said it had 102 days cash on hand as of March 31, down 52 days from a year earlier. The decrease was primarily due to Epic implementation costs and integration expenses, Tower said.
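For readers unfamiliar with the “days cash on hand” metric cited above, the standard formula divides unrestricted cash and investments by average daily operating expense (excluding depreciation and amortization). The sketch below uses entirely hypothetical inputs chosen to land near Tower’s reported figure; the article does not disclose the actual components.

```python
# Hedged sketch of the standard days-cash-on-hand calculation.
# All input figures are hypothetical, not Tower Health's actual numbers.

unrestricted_cash = 460_000_000           # hypothetical cash and investments
annual_operating_expense = 1_700_000_000  # hypothetical annualized expense
depreciation = 55_000_000                 # hypothetical D&A, excluded per convention

daily_expense = (annual_operating_expense - depreciation) / 365
days_cash_on_hand = unrestricted_cash / daily_expense
print(round(days_cash_on_hand))  # → 102
```

The metric falls when cash is drawn down (as with Tower’s Epic implementation spending) or when operating expenses rise, which is why it dropped 52 days in a single year here.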
The health system said it has made significant revenue cycle improvements and expects progress to continue.
“During the COVID-19 crisis revenue cycle management is aggressively pursuing advanced payments and outstanding claim resolution from all major commercial payors as well as ensuring capture of additional reimbursement for services such as telemedicine,” Tower said.
The health system ended the first three quarters of fiscal 2020 with an operating loss of $131.9 million, compared to an operating loss of $48.5 million in the same period a year earlier.
After factoring in nonoperating losses, Tower Health recorded a net loss of $153.9 million in the first three quarters of fiscal 2020. In the same period a year earlier, it reported a net loss of $29.3 million.
To offset financial damage from COVID-19, Tower Health has furloughed about 1,000 employees and received $166 million in advance Medicare payments, which must be repaid. The health system also received $66 million in grants under the Coronavirus Aid, Relief and Economic Security Act.