Oracle’s Acquisition of Cerner: The Future of Healthcare

https://blogs.oracle.com/healthcare/post/oracle-acquisition-of-cerner-the-future-of-healthcare

Prioritizing outcomes in healthcare is long overdue and now within reach following Oracle’s acquisition of Cerner. To achieve more seamless, coordinated care, technology must play a greater role in reframing solutions for health and well-being around the world.

Combining Cerner’s clinical capabilities with Oracle’s enterprise platform, analytics, and automation expertise will change health and wellness in a way that simply hasn’t been possible before. We’ll provide secure and reliable solutions that deliver health insights and experiences to dramatically change how health is managed by patients, providers, and payors. The industry has never been riper for change.

Designing for people

Healthcare is innately personal; however, the industry often loses sight of the human side of health as delivering and understanding care has become increasingly disconnected and complex. Research reveals that doctors spend nearly twice as much time on administrative work as they do engaging with patients. If we replaced clinicians’ time spent performing administrative tasks with patient interactions, imagine how dramatically we could improve quality of care. Technology-induced administrative burden contributes to burnout, which has, in part, resulted in a workforce shortage and overshadowed the true benefits of healthcare technology. Clinicians didn’t enter medicine to spend half of their time conducting routine tasks and completing required documentation; they chose their profession to practice at the top of their license. We’re working to make this a reality, providing a toolset that supports clinical decision making and prioritizes the user experience.

For care delivery organizations, we’ll develop new cloud-enabled capabilities allowing providers to access the information they need, where and when they need it, on an interface that is easy to use. This will significantly reduce the time and effort required to find a patient’s information, even if the information is scattered across different providers or care settings. We’ll help people access and manage their own health information from wherever they are, so that they have a stronger voice in their care and can conduct more meaningful conversations with their providers. When successful, these improvements ultimately increase the value of healthcare and have the additional benefit of contributing data to population health insights.

Collaborative, interoperable care

In a complex and inefficient healthcare industry, interoperability is critical, but it hasn’t been widely adopted across organizations. From the patient perspective, data silos limit patients’ empowerment and involvement in their health and well-being. It is vitally important that medical records are portable. Regardless of where someone receives care, their records should be accessible and unified. From a clinical perspective, interoperability ensures clinicians can properly review a patient’s entire medical history within their workflow and provide appropriate, contextual treatment.

A recent survey shows a staggering 97% of healthcare executives have called for increased healthcare data interoperability, the lack of which inhibits digital transformation and innovation within organizations and throughout the broader industry. Oracle is committed to open APIs to ensure any authorized user can consume health data and insights. We know a closed system will not create connectivity and unification across the many existing players and systems. Creating more solutions without a commitment to an open ecosystem would only add to the fractured and siloed systems we see today.

Oracle will harness the power of data to create a collaborative ecosystem where people, patients, providers, and payors can securely access clinical, operational, and financial data on the cloud. These efforts will break down data silos and provide open systems that talk to—and connect with—one another to generate actionable, scalable, and global insights previously unavailable. Industry fragmentation impacts both patients and providers, but Oracle has the power to aggregate data into a single source of truth to achieve better outcomes.

Improved efficiency across the system

While enhanced clinical systems will improve experiences bedside and lead to better public health outcomes, back-office operations must also be improved to drive true efficiency, reduce costs, and make the business of healthcare more predictable. Oracle’s Fusion application suite can create this bridge between the bedside and the back-office, enhancing employee experience (better retention, less administration), streamlining the supply chain (reduced shrinkage, better inventory management), and giving the executive a better understanding of the issues impacting their business (greater predictability and cost control).

Secure healthcare data

Unfortunately, we know that retail, finance, and health data are the most targeted in security breaches. Patient privacy and the security of health data, when left unaddressed, threaten the very thing health information exchange is meant to protect: patient safety. It’s time to raise health data security to an unprecedented level of investment and focus.

Oracle is an industry leader in securely storing, processing, and analyzing large volumes of cloud-based data. We’ll continue to apply the same security-obsessed focus to healthcare as we do to all industries, allowing people, patients, providers, and payors to safely access insights that improve care and advance decision-making. Oracle has been trusted with some of the world’s most sensitive and regulated data for more than 44 years. For the financial services industry specifically, Oracle already serves customers in more than 140 countries and manages risk for 24 of the world’s 28 systemically important financial institutions (SIFIs).

Meeting the moment

While we already knew this industry was ready for change, the pandemic amplified and accelerated the world’s readiness to see that change. We aim to meet this moment leveraging the technology and expertise that have revolutionized other industries, as well as applying new innovations to transform these systems of record into systems of intelligence.

Combining our existing healthcare industry solutions—from clinical trials to health insurance payor solutions to public health analysis systems—with our acquisition of Cerner, we believe Oracle is uniquely positioned to offer new solutions to a broken healthcare system. We plan to support the entire lifecycle of healthcare, going beyond traditional health IT to integrate our infrastructure, platform, and applications capabilities for a more fully connected operational, administrative, and clinical system.

We are fully committed to the partnerships that will be instrumental to this journey. The technology and the world are ready for transformation. This is just the beginning.

Permanent expansions of government data collection will support policy innovation

We rarely see the impact of policies reflected in data in real time. The COVID-19 pandemic changed that. Today, a range of government, private, and academic sources catalogue household-level health and economic information to enable rapid policy analysis and response. To continue generating timely findings, identifying vulnerable populations, and maintaining a focus on public health, frequent national data collection needs to be improved and expanded permanently.

Knowledge accumulates over time, facilitating new advancements and advocacy. While mRNA biotechnology was not usable decades ago, years of public research helped unlock highly effective COVID-19 vaccines. The same can be true for advancing effective socioeconomic policies. More national, standardized data like the Census Bureau’s Household Pulse Survey will accelerate progress. At the same time, there are significant issues with national data sources. For instance, COVID-19 data reported by the CDC faced notable quality issues and inconsistencies between states.

Policymakers can’t address problems that they don’t know exist. Researchers can’t identify problems and solutions without adequate data. We can better study how policies impact population health and inform legislative action with greater federal funding dedicated to wide-ranging, systematized population surveys.

Broader data collection enables more findings and policy development

Evidence-based research is at the core of effective policy action. Surveillance data indicates what problems families face, who is most affected, and which interventions can best promote health and economic well-being. These collections can inform policy responses by reporting information on the demographics disproportionately affected by socioeconomic disruptions. Race and ethnicity, age, gender, sexual orientation, household composition, and work occupation all provide valuable details on who has been left behind by past and present legislative choices.

Since March 2020, COVID-19 cases and deaths, changes in employment, and food and housing security have been tracked periodically with detailed demographic information through recurring national surveys. Both cumulative statistical compilations and representative surveillance polling have been instrumental to analyses. Our team has recorded over 200 state-level policies in the COVID-19 US State Policy (CUSP) database to support research and journalistic investigations. We have learned a number of policy lessons, from the health protections of eviction moratoria to the food security benefits of social insurance expansions. None of these insights would have been possible without documented evidence.

Without this comprehensive tracking, it would be difficult to determine the number of evictions occurring despite active moratoria, the factors contributing to elevated risk of COVID-19, and the value of pandemic unemployment insurance programs across states. The wider range of direct and indirect health outcomes measured has bolstered our understanding of the suffering experienced by different demographic groups. These issues are receiving legislative attention, in no small part due to the broad statistical collection and subsequent analytical research on these topics.

Insufficient data results in inadequate understanding of policy issues

The more high-quality data there is, the better. Using the state-level policies catalogued in CUSP, our team and other research groups quantified the impact of larger unemployment insurance benefits, higher minimum wages, mask mandates, and eviction freezes. These analyses have been used by state and federal officials. None of this would have been possible without increased data collection.

However, our policy investigations are constrained by the availability and quality of data on state and federal government websites, which may improve with stimulus funds allocated to modernize public health data infrastructure. Some of the most consequential decision-making right now relates to vaccine distribution and administration, but it is difficult to disaggregate state-level statistics. Many states lack demographic information on vaccine recipients, as well as on those who have contracted or died from COVID-19. Even though racial disparities are present in COVID-19 cases, hospitalizations, and deaths nationally, we can’t always determine the extent of these inequities locally. These present issues are a microcosm of pre-existing problems.

Data shortcomings present for years in areas like occupational safety are finally being spotlighted due to the pandemic. Minimal national and state workplace health data translated to insufficient COVID-19 surveillance in workplace settings. Studies showing that essential workers face elevated risk of COVID-19 are often limited in scope to individual states or cities, largely due to the lack of usable and accessible data. More investment is needed, during and beyond the pandemic, to better document workplace health. Otherwise there will continue to be serious blind spots in the ability to evaluate policy decisions, enforce better workplace standards, and hold leaders accountable for their choices.

These are problems with a simple solution: collect more information. Now is not the time to eliminate valuable community surveys and aggregate compilations, but to expand on them. More comprehensive data will shine a spotlight on current and future legislative choices and improve the understanding of policies in new ways. It is our hope that these expanded data collection efforts are built upon and become the new norm.

Disclosure: Funding received from Robert Wood Johnson Foundation was used to develop the COVID-19 US State Policy Database.

Deepening the role of Big Tech in analyzing clinical data

How Big Tech Is Changing the Way Hospitals Are Run | Technology Networks

HCA Healthcare, the nation’s largest for-profit hospital chain, which operates 185 hospitals and more than 2,000 care sites across 20 states, announced a landmark deal with search giant Google this week, aimed at extracting and analyzing data from more than 32M annual patient encounters.

The multiyear partnership will involve data scientists from both companies working together to develop care algorithms and clinical alerts to improve outcomes and efficiency. Data from HCA’s electronic health records will be integrated with Google’s cloud computing service, and the companies have pledged to adhere to strict limitations to protect individual patient privacy—a key concern raised by regulators after Google announced a similar partnership with another national health system, Ascension, at the end of 2019.

Despite those assurances, some experts pointed to this week’s announcement as further evidence that existing privacy protections are insufficient in the face of the deepening relationships between tech companies, like Google and Microsoft, and healthcare providers, who manage the sensitive health information of millions of patients.
 
We’d agree—we’re overdue for a major rethink of how patient privacy is handled. The healthcare industry spent much of the last decade “wiring” the health system, converting from paper records to electronic ones, and building vast storehouses of clinical data along the way. We’ve now reached a new phase, and the primary task ahead is to harness all of that data to actually improve care. That will require extensive data sharing, such as a recently announced initiative among several major health systems, and will also entail tapping the expertise of “big data” companies from beyond healthcare—the very same companies whose business practices have sometimes raised privacy concerns in the broader social context. But health information is different—more personal and more sensitive—than data about shopping preferences and viewing habits, requiring more rigorous regulation. 

As more big data deals are inked in healthcare, the question of patient privacy will become increasingly pressing.

In scramble to respond to Covid-19, hospitals turned to models with high risk of bias

In scramble to respond to Covid-19, hospitals turned to models with high risk of bias - MedCity News

Of 26 health systems surveyed by MedCity News, nearly half used automated tools to respond to the Covid-19 pandemic, but none of them were regulated. Even as some hospitals continued using these algorithms, experts cautioned against their use in high-stakes decisions.

A year ago, Michigan Medicine faced a dire situation. In March of 2020, the health system predicted it would have three times as many patients as its 1,000-bed capacity — and that was the best-case scenario. Hospital leadership prepared for this grim prediction by opening a field hospital in a nearby indoor track facility, where patients could go if they were stable, but still needed hospital care. But they faced another predicament: How would they decide who to send there?

Two weeks before the field hospital was set to open, Michigan Medicine decided to use a risk model developed by Epic Systems to flag patients at risk of deterioration. Patients were given a score of 0 to 100, intended to help care teams determine if they might need an ICU bed in the near future. Although the model wasn’t developed specifically for Covid-19 patients, it was the best option available at the time, said Dr. Karandeep Singh, an assistant professor of learning health sciences at the University of Michigan and chair of Michigan Medicine’s clinical intelligence committee. But there was no peer-reviewed research to show how well it actually worked.

Researchers tested it on over 300 Covid-19 patients between March and May. They were looking for scores that would indicate when patients would need to go to the ICU, and if there was a point where patients almost certainly wouldn’t need intensive care.

“We did find a threshold where if you remained below that threshold, 90% of patients wouldn’t need to go to the ICU,” Singh said. “Is that enough to make a decision on? We didn’t think so.”
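To make that threshold exercise concrete, here is a minimal, hypothetical sketch of the kind of analysis Singh describes; the function names and scoring records are invented for illustration and are not Michigan Medicine’s code or data.

# Hypothetical sketch of the threshold analysis described above; the example
# records below are invented for illustration.

def share_avoiding_icu_below(patients, threshold):
    # Among patients scoring below `threshold`, what share never needed the ICU?
    below = [needed_icu for score, needed_icu in patients if score < threshold]
    if not below:
        return None
    return sum(1 for needed_icu in below if not needed_icu) / len(below)

def rule_out_threshold(patients, target=0.90):
    # Largest cutoff such that at least `target` of patients below it avoided the ICU.
    for threshold in range(100, 0, -1):
        share = share_avoiding_icu_below(patients, threshold)
        if share is not None and share >= target:
            return threshold, share
    return None

# Each record: (deterioration score 0-100, did the patient end up needing the ICU?)
history = [(12, False), (25, False), (38, False), (47, True), (63, True), (81, True)]
print(rule_out_threshold(history))  # -> (47, 1.0) for this toy data

The open question Singh raises is not how to compute such a cutoff, but whether a cutoff validated on a few hundred patients is solid enough to steer real triage decisions.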

But if the number of patients were to far exceed the health system’s capacity, it would be helpful to have some way to assist with those decisions.

“It was something that we definitely thought about implementing if that day were to come,” he said in a February interview.

Thankfully, that day never came.

The survey
Michigan Medicine is one of 80 hospitals contacted by MedCity News between January and April in a survey of decision-support systems implemented during the pandemic. 
Of the 26 respondents, 12 used machine learning tools or automated decision systems as part of their pandemic response. Larger hospitals and academic medical centers used them more frequently.

Faced with scarcities in testing, masks, hospital beds and vaccines, several of the hospitals turned to models as they prepared for difficult decisions. The deterioration index created by Epic was one of the most widely implemented — more than 100 hospitals are currently using it — but in many cases, hospitals also formulated their own algorithms.

They built models to predict which patients were most likely to test positive when shortages of swabs and reagents backlogged tests early in the pandemic. Others developed risk-scoring tools to help determine who should be contacted first for monoclonal antibody treatment, or which Covid patients should be enrolled in at-home monitoring programs.

MedCity News also interviewed hospitals on their processes for evaluating software tools to ensure they are accurate and unbiased. Currently, the FDA does not require some clinical decision-support systems to be cleared as medical devices, leaving the developers of these tools and the hospitals that implement them responsible for vetting them.

Among the hospitals that published efficacy data, some of the models were only evaluated through retrospective studies. This can pose a challenge in figuring out how clinicians actually use them in practice, and how well they work in real time. And while some of the hospitals tested whether the models were accurate across different groups of patients — such as people of a certain race, gender or location — this practice wasn’t universal.
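As an illustration of what a subgroup check involves, the sketch below computes a model’s discrimination separately for each demographic group; the column names and toy data are assumptions, not any surveyed hospital’s code.

# Illustrative sketch of a per-subgroup accuracy check; column names and
# toy data are invented.
import pandas as pd
from sklearn.metrics import roc_auc_score

def auc_by_subgroup(df, group_col, label_col="deteriorated", score_col="risk_score"):
    # Compute the model's AUROC separately for each value of group_col.
    results = {}
    for group, subset in df.groupby(group_col):
        if subset[label_col].nunique() < 2:
            results[group] = None  # AUROC is undefined without both outcomes present
        else:
            results[group] = roc_auc_score(subset[label_col], subset[score_col])
    return results

data = pd.DataFrame({
    "risk_score":   [0.2, 0.7, 0.4, 0.9, 0.1, 0.8],
    "deteriorated": [0,   1,   0,   1,   0,   1],
    "race":         ["A", "A", "B", "B", "A", "B"],
})
print(auc_by_subgroup(data, "race"))  # large gaps between groups would warrant scrutiny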

As more companies spin up these models, researchers cautioned that they need to be designed and implemented carefully, to ensure they don’t yield biased results.

An ongoing review of more than 200 Covid-19 risk-prediction models found that the majority had a high risk of bias, meaning the data they were trained on might not represent the real world.

“It’s that very careful and non-trivial process of defining exactly what we want the algorithm to be doing,” said Ziad Obermeyer, an associate professor of health policy and management at UC Berkeley who studies machine learning in healthcare. “I think an optimistic view is that the pandemic functions as a wakeup call for us to be a lot more careful in all of the ways we’ve talked about with how we build algorithms, how we evaluate them, and what we want them to do.”

Algorithms can’t be a proxy for tough decisions
Concerns about bias are not new to healthcare. In a paper published two years ago, Obermeyer found a tool used by several hospitals to prioritize high-risk patients for additional care resources was biased against Black patients. By equating patients’ health needs with the cost of care, the developers built an algorithm that yielded discriminatory results.

More recently, a rule-based system developed by Stanford Medicine to determine who would get the Covid-19 vaccine first ended up prioritizing administrators and doctors who were seeing patients remotely, leaving out most of its 1,300 residents who had been working on the front lines. After an uproar, the university attributed the errors to a “complex algorithm,” though there was no machine learning involved.

Both examples highlight the importance of thinking through what exactly a model is designed to do — and not using it as a proxy to avoid the hard questions.

“The Stanford thing was another example of, we wanted the algorithm to do A, but we told it to do B. I think many health systems are doing something similar,” Obermeyer said. “You want to give the vaccine first to people who need it the most — how do we measure that?”

The urgency that the pandemic created was a complicating factor. With little information and few proven systems to work with in the beginning, health systems began throwing ideas at the wall to see what would work. One expert questioned whether people might be abdicating some responsibility to these tools.

“Hard decisions are being made at hospitals all the time, especially in this space, but I’m worried about algorithms being the idea of where the responsibility gets shifted,” said Varoon Mathur, a technology fellow at NYU’s AI Now Institute, in a Zoom interview. “Tough decisions are going to be made, I don’t think there are any doubts about that. But what are those tough decisions? We don’t actually name what constraints we’re hitting up against.”

The wild, wild west
There currently is no gold standard for how hospitals should implement machine learning tools, and little regulatory oversight for models designed to support physicians’ decisions, resulting in an environment that Mathur described as the “wild, wild west.”

How these systems were used varied significantly from hospital to hospital.

Early in the pandemic, Cleveland Clinic used a model to predict which patients were most likely to test positive for the virus as tests were limited. Researchers developed it using health record data from more than 11,000 patients in Ohio and Florida, including 818 who tested positive for Covid-19. Later, they created a similar risk calculator to determine which patients were most likely to be hospitalized for Covid-19, which was used to prioritize which patients would be contacted daily as part of an at-home monitoring program.

Initially, anyone who tested positive for Covid-19 could enroll in this program, but as cases began to tick up, “you could see how quickly the nurses and care managers who were running this program were overwhelmed,” said Dr. Lara Jehi, Chief Research Information Officer at Cleveland Clinic. “When you had thousands of patients who tested positive, how could you contact all of them?”

While the tool included dozens of factors, such as a patient’s age, sex, BMI, zip code, and whether they smoked or got their flu shot, it’s also worth noting that demographic information significantly changed the results. For example, a patient’s race “far outweighs” any medical comorbidity when used by the tool to estimate hospitalization risk, according to a paper published in PLOS ONE. Cleveland Clinic recently made the model available to other health systems.

Others, like Stanford Health Care and 731-bed Santa Clara County Medical Center, started using Epic’s clinical deterioration index before developing their own Covid-specific risk models. At one point, Stanford developed its own risk-scoring tool, which was built using past data from other patients who had similar respiratory diseases, such as the flu, pneumonia, or acute respiratory distress syndrome. It was designed to predict which patients would need ventilation within two days, and someone’s risk of dying from the disease at the time of admission.

Stanford tested the model to see how it worked on retrospective data from 159 patients who were hospitalized with Covid-19, and cross-validated it with Salt Lake City-based Intermountain Healthcare, a process that took several months. Although this gave some additional assurance — Salt Lake City and Palo Alto have very different populations, smoking rates and demographics — it still wasn’t representative of some patient groups across the U.S.

“Ideally, what we would want to do is run the model specifically on different populations, like on African Americans or Hispanics and see how it performs to ensure it’s performing the same for different groups,” Tina Hernandez-Boussard, an associate professor of medicine, biomedical data science and surgery at Stanford, said in a February interview. “That’s something we’re actively seeking. Our numbers are still a little low to do that right now.”

Stanford planned to implement the model earlier this year, but ultimately tabled it as Covid-19 cases fell.

‘The target is moving so rapidly’
Although large medical centers were more likely to have implemented automated systems, there were a few notable holdouts. For example, UC San Francisco Health, Duke Health and Dignity Health all said they opted not to use risk-prediction models or other machine learning tools in their pandemic responses.

“It’s pretty wild out there and I’ll be honest with you —  the dynamics are changing so rapidly,” said Dr. Erich Huang, chief officer for data quality at Duke Health and director of Duke Forge. “You might have a model that makes sense for the conditions of last month but do they make sense for the conditions of next month?”

That’s especially true as new variants spread across the U.S., and more adults are vaccinated, changing the nature and pace of the disease. But other, less obvious factors might also affect the data. For instance, Huang pointed to big differences in social mobility across the state of North Carolina, and whether people complied with local restrictions. Differing social and demographic factors across communities, such as where people work and whether they have health insurance, can also affect how a model performs.

“There are so many different axes of variability, I’d feel hard pressed to be comfortable using machine learning or AI at this point in time,” he said. “We need to be careful and understand the stakes of what we’re doing, especially in healthcare.”

Leadership at one of the largest public hospitals in the U.S., 600-bed LAC+USC Medical Center in Los Angeles, also steered away from using predictive models, even as it faced an alarming surge in cases over the winter months.

At most, the hospital used alerts to remind physicians to wear protective equipment when a patient had tested positive for Covid-19.

“My impression is that the industry is not anywhere near ready to deploy fully automated stuff just because of the risks involved,” said Dr. Phillip Gruber, LAC+USC’s chief medical information officer. “Our institution and a lot of institutions in our region are still focused on core competencies. We have to be good stewards of taxpayer dollars.”

When the data itself is biased
Developers have to contend with the fact that any model developed in healthcare will be biased, because the data itself is biased; how people access and interact with health systems in the U.S. is fundamentally unequal.

How that information is recorded in electronic health record systems (EHR) can also be a source of bias, NYU’s Mathur said. People don’t always self-report their race or ethnicity in a way that fits neatly within the parameters of an EHR. Not everyone trusts health systems, and many people struggle to even access care in the first place.

“Demographic variables are not going to be sharply nuanced. Even if they are… in my opinion, they’re not clean enough or good enough to be nuanced into a model,” Mathur said.

The information hospitals have had to work with during the pandemic is particularly messy. Differences in testing access and missing demographic data also affect how resources are distributed and other responses to the pandemic.

“It’s very striking because everything we know about the pandemic is viewed through the lens of number of cases or number of deaths,” UC Berkeley’s Obermeyer said. “But all of that depends on access to testing.”

At the hospital level, internal data wouldn’t be enough to truly follow whether an algorithm to predict adverse events from Covid-19 was actually working. Developers would have to look at social security data on mortality, or whether the patient went to another hospital, to track down what happened.

“What about the people a physician sends home —  if they die and don’t come back?” he said.

Researchers at Mount Sinai Health System tested a machine learning tool to predict critical events in Covid-19 patients —  such as dialysis, intubation or ICU admission — to ensure it worked across different patient demographics. But they still ran into their own limitations, even though the New York-based hospital system serves a diverse group of patients.

They tested how the model performed across Mount Sinai’s different hospitals. In some cases, when the model wasn’t very robust, it yielded different results, said Benjamin Glicksberg, an assistant professor of genetics and genomic sciences at Mount Sinai and a member of its Hasso Plattner Institute for Digital Health.

They also tested how it worked in different subgroups of patients to ensure it didn’t perform disproportionately better for patients from one demographic.

“If there’s a bias in the data going in, there’s almost certainly going to be a bias in the data coming out of it,” he said in a Zoom interview. “Unfortunately, I think it’s going to be a matter of having more information that can approximate these external factors that may drive these discrepancies. A lot of that is social determinants of health, which are not captured well in the EHR. That’s going to be critical for how we assess model fairness.”

Even after checking whether a model yields fair and accurate results, the work isn’t done. Hospitals must continue to validate models to ensure they’re still working as intended — especially in a situation as fast-moving as a pandemic.
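What that ongoing validation can look like is sketched below; this is our illustration with assumed column names, not a description of any hospital’s monitoring pipeline.

# Illustrative sketch of ongoing validation: compare predicted and observed
# event rates month by month so that drift shows up quickly.
import pandas as pd

def monthly_calibration(df, date_col="admit_date", label_col="outcome", score_col="risk_score"):
    df = df.copy()
    df["month"] = pd.to_datetime(df[date_col]).dt.to_period("M")
    summary = df.groupby("month").agg(
        predicted_rate=(score_col, "mean"),
        observed_rate=(label_col, "mean"),
        n=(label_col, "size"),
    )
    summary["gap"] = summary["predicted_rate"] - summary["observed_rate"]
    return summary

# A scheduled job could run this on recent encounters and flag any month where
# the gap exceeds a pre-agreed tolerance, prompting recalibration or review.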

A bigger role for regulators
All of this is stirring up a broader discussion about how much of a role regulators should have in how decision-support systems are implemented.

Currently, the FDA does not require most software that provides diagnosis or treatment recommendations to clinicians to be regulated as a medical device. Even software tools that have been cleared by the agency lack critical information on how they perform across different patient demographics. 

Of the hospitals surveyed by MedCity News, none of the models they developed had been cleared by the FDA, and most of the external tools they implemented also hadn’t gone through any regulatory review.

In January, the FDA shared an action plan for regulating AI as a medical device. Although most of the concrete plans were around how to regulate algorithms that adapt over time, the agency also indicated it was thinking about best practices, transparency, and methods to evaluate algorithms for bias and robustness.

More recently, the Federal Trade Commission warned that it could crack down on AI bias, citing a paper that AI could worsen existing healthcare disparities if bias is not addressed.

“My experience suggests that most models are put into practice with very little evidence of their effects on outcomes because they are presumed to work, or at least to be more efficient than other decision-making processes,” Kellie Owens, a researcher for Data & Society, a nonprofit that studies the social implications of technology, wrote in an email. “I think we still need to develop better ways to conduct algorithmic risk assessments in medicine. I’d like to see the FDA take a much larger role in regulating AI and machine learning models before their implementation.”

Developers should also ask themselves if the communities they’re serving have a say in how the system is built, or whether it is needed in the first place. The majority of hospitals surveyed did not share with patients if a model was used in their care or involve patients in the development process.

In some cases, the best option might be the simplest one: don’t build.

In the meantime, hospitals are left to sift through existing published data, preprints and vendor promises to decide on the best option. To date, Michigan Medicine’s paper is still the only one that has been published on Epic’s Deterioration Index.

Care teams there used Epic’s score as a support tool for its rapid response teams to check in on patients. But the health system was also looking at other options.

“The short game was that we had to go with the score we had,” Singh said. “The longer game was, Epic’s deterioration index is proprietary. That raises questions about what is in it.”

The High Price of Lowering Health Costs for 150 Million Americans


https://one.npr.org/?sharedMediaId=968920752:968920754

The Problem

Employers — including companies, state governments and universities — purchase health care on behalf of roughly 150 million Americans. The cost of that care has continued to climb for both businesses and their workers.

For many years, employers saw wasteful care as the primary driver of their rising costs. They made benefits changes like adding wellness programs and raising deductibles to reduce unnecessary care, but costs continued to rise. Now, driven by a combination of new research and changing market forces — especially hospital consolidation — more employers see prices as their primary problem.

The Evidence

The prices employers pay hospitals have risen rapidly over the last decade. Those hospitals provide inpatient care and increasingly, as a result of consolidation, outpatient care too. Together, inpatient and outpatient care account for roughly two-thirds of employers’ total spending per employee.

By amassing and analyzing employers’ claims data in innovative ways, academics and researchers at organizations like the Health Care Cost Institute (HCCI) and RAND have helped illuminate for employers two key truths about the hospital-based health care they purchase:

1) PRICES VARY WIDELY FOR THE SAME SERVICES

Data show that providers charge private payers very different prices for the exact same services — even within the same geographic area.

For example, HCCI found the price of a C-section delivery in the San Francisco Bay Area varies between hospitals by as much as $24,107.

Research also shows that facilities with higher prices do not necessarily provide higher quality care. 

2) HOSPITALS CHARGE PRIVATE PAYERS MORE

Data show that hospitals charge employers and private insurers, on average, roughly twice what they charge Medicare for the exact same services. A recent RAND study analyzed more than 3,000 hospitals’ prices and found the most expensive facility in the country charged employers 4.1 times what Medicare pays.

Hospitals claim this price difference is necessary because public payers like Medicare do not pay enough. However, there is a wide gap between what hospitals lose on Medicare (a margin of roughly -9% for inpatient care) and the markup they charge employers relative to Medicare (200% or more).

Employer Efforts

A small but growing group of companies, public employers (like state governments and universities) and unions is using new data and tactics to tackle these high prices. (Learn more about who’s leading this work, how and why by listening to our full podcast episode in the player above.)

Note that the employers leading this charge tend to be large and self-funded, meaning they shoulder the risk for the insurance they provide employees, giving them extra flexibility and motivation to purchase health care differently. The approaches they are taking include:


Steering Employees

Some employers are implementing so-called tiered networks, where employees pay more if they want to continue seeing certain, more expensive providers. Others are trying to strongly steer employees to particular hospitals, sometimes known as centers of excellence, where employers have made special deals for particular services.

Purdue University, for example, covers travel and lodging and offers a $500 stipend to employees who get hip or knee replacements done at one Indiana hospital.

Negotiating New Deals

There is a movement among some employers to renegotiate hospital deals using Medicare rates as the baseline — since they are transparent and account for hospitals’ unique attributes like location and patient mix — as opposed to negotiating down from charges set by hospitals, which are seen by many as opaque and arbitrary. Other employers are pressuring their insurance carriers to renegotiate the contracts they have with hospitals.
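To illustrate why Medicare-baseline contracts are considered more transparent, here is a minimal sketch with hypothetical numbers; none of the figures come from the deals described below.

# Minimal sketch of the two contracting approaches, with hypothetical numbers
# (not figures from the Montana or Indiana deals described in this article).

def reference_based_price(medicare_rate, multiplier):
    # Allowed amount under a contract pegged to a multiple of Medicare.
    return medicare_rate * multiplier

def discount_off_charges(billed_charges, discount):
    # Allowed amount under a traditional discount off hospital-set charges.
    return billed_charges * (1 - discount)

# Hypothetical procedure: Medicare pays $15,000; the hospital bills $60,000.
print(reference_based_price(15_000, 2.0))   # 30,000 -- a transparent 200% of Medicare
print(discount_off_charges(60_000, 0.40))   # 36,000 -- 60% of an opaque chargemaster price

The appeal of the first approach is that the baseline is published and adjusted for a hospital’s location and patient mix, whereas the second starts from a figure the hospital sets itself.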

In 2016, the Montana state employee health plan, led by Marilyn Bartlett, got all of the state’s hospitals to agree to a payment rate based on a multiple of Medicare. The plan saved more than $30 million in just three years. Bartlett is now advising other states trying to follow her playbook.

In 2020, several large Indiana employers urged insurance carrier Anthem to renegotiate their contract with Parkview Health, a hospital system RAND researchers identified as one of the most expensive in the country. After months of tense back-and-forth, the pair reached a five-year deal expected to save Anthem customers $700 million.

Legislating, Regulating, Litigating

Some employer coalitions are advocating for more intervention by policymakers to cap health care prices or at least make them more transparent. States like Colorado and Indiana have passed price transparency legislation, and new federal rules now require more hospital price transparency on a national level. Advocates expect strong industry opposition to stiffer measures, like price caps, which recently failed in the Montana legislature. 

Other advocates are calling for more scrutiny by state and federal officials of hospital mergers and other anticompetitive practices. Some employers and unions have even resorted to suing hospitals like Sutter Health in California.

Employer Challenges

Employers face a few key barriers to purchasing health care in different and more efficient ways:

Provider Power

Hospitals tend to have much more market power than individual employers, and that power has grown in recent years, enabling them to raise prices. Even very large employers have geographically dispersed workforces, making it hard to exert much leverage over any given hospital. Some employers have tried forming purchasing coalitions to pool their buying power, but they face tricky organizational dynamics and laws that prohibit collusion.

Sophistication

Employers can attempt to lower prices by renegotiating contracts with hospitals or tailoring provider networks, but the work is complicated and rife with tradeoffs. Few employers are sophisticated enough, for example, to assess a provider’s quality or to structure hospital payments in new ways. Employers looking for insurers to help them have limited options, as that industry has also become highly consolidated.

Employee Blowback

Employers say they primarily provide benefits to recruit and retain happy and healthy employees. Many are reluctant to risk upsetting employees by cutting out expensive providers or redesigning benefits in other ways. A recent KFF survey found just 4% of employers had dropped a hospital in order to cut costs.

The Tradeoffs

Employers play a unique role in the United States health care system, and in the lives of the 150 million Americans who get insurance through work. For years, critics have questioned the wisdom of an employer-based health care system, and massive job losses created by the pandemic have reinforced those doubts for many.

Assuming employers do continue to purchase insurance on behalf of millions of Americans, though, focusing on lowering the prices they pay is one promising path to lowering total costs. However, as noted above, hospitals have expressed concern over the financial pressures they may face under these new deals. Complex benefit design strategies, like narrow or tiered networks, also run the risk of harming employees, who may make suboptimal choices or experience cost surprises. Finally, these strategies do not necessarily address other drivers of high costs including drug prices and wasteful care. 

Large health systems band together to monetize clinical data

https://mailchi.mp/41540f595c92/the-weekly-gist-february-12-2021?e=d1e747d2d8

Fourteen of the nation’s largest health systems announced this week that they have joined together to form a new, for-profit data company aimed at aggregating and mining their clinical data. Called Truveta, the company will draw on the de-identified health records of millions of patients from thousands of care sites across 40 states, allowing researchers, physicians, biopharma companies, and others to draw insights aimed at “improving the lives of those they serve.” 

Health system participants include the multi-state Catholic systems CommonSpirit Health, Trinity Health, Providence, and Bon Secours Mercy, the for-profit system Tenet Healthcare, and a number of regional systems. The new company will be led by former Microsoft executive Terry Myerson, who has been working on the project since March of last year. As large technology companies like Amazon and Google continue to build out healthcare offerings, and national insurers like UnitedHealth Group and Aetna continue to grow their analytical capabilities based on physician, hospital, and pharmacy encounters, it’s surprising that hospital systems are only now mobilizing in a concerted way to monetize the clinical data they generate.

Like Civica, an earlier health system collaboration around pharmaceutical manufacturing, Truveta’s launch signals that large national and regional systems are waking up to the value of scale they’ve amassed over time, moving beyond pricing leverage to capture other benefits from the size of their clinical operations—and exploring non-merger partnerships to create value from collaboration. There will inevitably be questions about how patient data is used by Truveta and its eventual customers, but we believe the venture holds real promise for harnessing the power of massive clinical datasets to drive improvement in how care is delivered.

Colchicine for Early COVID-19? Trial May Support Oral Therapy at Home

But some find science-by-press-release troubling.

Anti-inflammatory oral drug colchicine improved COVID-19 outcomes for patients with relatively mild cases, according to certain topline results from the COLCORONA trial announced in a brief press release.

Overall, the drug used for gout and rheumatic diseases reduced risk of death or hospitalizations by 21% versus placebo, which “approached statistical significance.”

However, there was a significant effect among the 4,159 of 4,488 patients who had their diagnosis of COVID-19 confirmed by a positive PCR test:

  • 25% fewer hospitalizations
  • 50% less need for mechanical ventilation
  • 44% fewer deaths
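Those figures are relative risk reductions. As a reminder (this framing is ours, not the trial’s), the relative risk reduction for each endpoint is

\[ \mathrm{RRR} = 1 - \frac{r_{\text{colchicine}}}{r_{\text{placebo}}} \]

where \(r\) is the event rate in each arm, so “25% fewer hospitalizations” means the hospitalization rate among PCR-confirmed patients on colchicine was roughly 75% of the rate on placebo. The absolute event rates behind these percentages were not included in the press release.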

If full data confirm the topline claims — the press release offered no other details, and did not mention plans for publication or conference presentation — colchicine would become the first oral drug proven to benefit non-hospitalized patients with COVID-19.

“Our research shows the efficacy of colchicine treatment in preventing the ‘cytokine storm’ phenomenon and reducing the complications associated with COVID-19,” principal investigator Jean-Claude Tardif, MD, of the Montreal Heart Institute, said in the press release. He predicted its use “could have a significant impact on public health and potentially prevent COVID-19 complications for millions of patients.”

Currently, the “tiny list of outpatient therapies that work” for COVID-19 includes convalescent plasma and monoclonal antibodies, which “are logistically challenging (require infusions, must be started very early after symptom onset),” tweeted Ilan Schwartz, MD, PhD, an infectious diseases researcher at the University of Alberta in Edmonton.

The COLCORONA findings were “very encouraging,” tweeted Martin Landray, MB ChB, PhD, of the Big Data Institute at the University of Oxford in England. His group’s RECOVERY trial has already randomized more than 6,500 hospitalized patients to colchicine versus usual care as one of the arms of the platform trial, though he did not offer any findings from that study.

“Different stage of disease so remains an important question,” he tweeted. “Maybe old drugs can learn new tricks!” Landray added, pointing to dexamethasone.

A small open-label, randomized trial from Greece had also shown less clinical status deterioration in hospitalized patients on colchicine.

“I think this is an exciting time. Many groups have been pursuing lots of different questions related to COVID and its complications,” commented Richard Kovacs, MD, immediate past-president of the American College of Cardiology. “We’re now beginning to see the fruit of those studies.”

The COLCORONA announcement came late Friday, following closely on the heels of the topline results from the ACTIVE-4a, REMAP-CAP, and ATTACC trials showing a significant morbidity and mortality advantage to therapeutic-dose anticoagulation in non-ICU patients in the hospital for COVID-19.

COLCORONA was conducted remotely, without in-person contact, with participants across Canada, the U.S., Europe, South America, and South Africa. It randomized participants double-blind to colchicine 0.5 mg or a matching placebo twice daily for the first 3 days and then once daily for the last 27 days.

Participants were ages 40 and older, not hospitalized at the time of enrollment, and had at least one risk factor for COVID-19 complications: age 70-plus, obesity, diabetes, uncontrolled hypertension, known asthma or chronic obstructive pulmonary disease, known heart failure, known coronary disease, fever of ≥38.4°C (101.12°F) within the last 48 hours, dyspnea at presentation, or certain blood cell abnormalities.

It had been planned as a 6,000-patient trial, but whether it was stopped for efficacy at a preplanned interim analysis or for some other reason was not spelled out in the press release. Whether the PCR-positive subgroup was preplanned also wasn’t clear. Key details such as confidence intervals, adverse effects, and subgroup results were omitted as well.

While a full manuscript is reportedly underway, “we don’t know enough to bring this into practice yet,” argued Kovacs.

The centuries-old drug has long been used for gout and arthritis and more recently for pericarditis along with showing promise in cardiovascular secondary prevention.

However, the drug isn’t as inexpensive in the U.S. as in Canada, Kovacs noted.

Some physicians also warned about the potential for misuse of the findings and attendant risks.

Dhruv Nayyar, MD, of the University of Toronto, tweeted that he has already had “patients inquiring why we are not starting colchicine for them. Science by press release puts us in a difficult position while providing care. I just want to see the data.”

Angela Rasmussen, PhD, a virologist with the Georgetown Center for Global Health Science and Security’s Viral Emergence Research Initiative in Washington, agreed, tweeting: “When HCQ [hydroxychloroquine] was promoted without solid data, there was at least one death from an overdose. We don’t need people self-medicating with colchicine.”

As was the case with hydroxychloroquine before the papers proved little efficacy in COVID-19, Kovacs told MedPage Today: “We always get concerned when these drugs are repurposed that we might see an unintended run on the drug and lessen the supply.”

Citing the well-known diarrheal side effect of colchicine, infectious diseases specialist Edsel Salvana, MD, of the University of Pittsburgh and University of the Philippines in Manila, tweeted a plea for use only in the trial-proven patient population with confirmed COVID-19 — not prophylaxis.

The dose used was on par with that used in cardiovascular prevention and other indications, so the diarrhea incidence would probably follow the roughly 10% rate seen in the COLCOT trial, Kovacs suggested.

In the clinic, too, there are some cautions. As Elin Roddy, MD, a respiratory physician at Shrewsbury and Telford Hospital NHS Trust in England, tweeted: “Lots of drug interactions with colchicine potentially — statins, macrolides, diltiazem — we have literally been running up to the ward to cross off clarithromycin if RECOVERY randomises to colchicine.”

Florida’s COVID Response Includes Missing Deadlines and Data

Blog | Florida's COVID-19 Data: What We Know, What's Wrong, and What's Missing | The COVID Tracking Project

 Since the beginning of the coronavirus pandemic, Florida has blocked, obscured, delayed, and at times hidden the COVID-19 data used in making big decisions such as reopening schools and businesses.

And with scientists warning Thanksgiving gatherings could cause an explosion of infections, the shortcomings in the state’s viral reporting have yet to be fixed.

While the state has put out an enormous amount of information, some of its actions have raised concerns among researchers that state officials are being less than transparent.

It started even before the pandemic became a daily concern for millions of residents. Nearly 175 patients tested positive for the disease in January and February, evidence the Florida Department of Health collected but never acknowledged or explained. The state fired its nationally praised chief data manager, she says in a whistleblower lawsuit, after she refused to manipulate data to support premature reopening. The state said she was fired for not following orders.

The health department used to publish coronavirus statistics twice a day before changing to once a day, consistently meeting an 11 a.m. daily deadline for releasing new information that scientists, the media and the public could use to follow the pandemic’s latest twists.

But in the past month the department has routinely and inexplicably failed to meet its own deadline by as much as six hours. On one day in October, it published no update at all.

News outlets were forced to sue the state before it would publish information identifying the number of infections and deaths at individual nursing homes.

Throughout it all, the state has kept up with the rapidly spreading virus by publishing daily updates of the numbers of cases, deaths and hospitalizations.

“Florida makes a lot of data available that is a lot of use in tracking the pandemic,” University of South Florida epidemiologist Jason Salemi said. “They’re one of the only states, if not the only state, that releases daily case line data (showing age, sex and county for each infected person).”

Dr. Terry Adirim, chairwoman of Florida Atlantic University’s Department of Integrated Biomedical Science, agreed, to a point.

“The good side is they do have daily spreadsheets,” Adirim said. “However, it’s the data that they want to put out.”

The state leaves out crucial information that could help the public better understand who the virus is hurting and where it is spreading, Adirim said.

The department, under state Surgeon General Dr. Scott Rivkees, oversees the local health agencies covering Florida’s 67 counties, such as the one in Palm Beach County headed by Dr. Alina Alonso.

Rivkees was appointed in April 2019. He reports to Gov. Ron DeSantis, a Republican who has supported President Donald Trump’s approach to fighting the coronavirus and pressured local officials to reopen schools and businesses despite a series of spikes indicating rapid spread of the disease.

At several points, the DeSantis administration muzzled local health directors, such as when it told them not to advise school boards on reopening campuses.

DOH Knew Virus Here Since January

The health department’s own coronavirus reports indicated that the pathogen had been infecting Floridians since January, yet health officials never informed the public about it and they did not publicly acknowledge it even after The Palm Beach Post first reported it in May.

In fact, the night before The Post broke the story, the department inexplicably removed from public view the state’s dataset that provided the evidence. Mixed among listings of thousands of cases was evidence that up to 171 people ages 4 to 91 had tested positive for COVID-19 in the months before officials announced in March the disease’s presence in the state.

Were the media reports on the meaning of those 171 cases in error? The state has never said.

No Testing Stats Initially

When positive tests were finally acknowledged in March, all tests had to be confirmed by federal health officials. But Florida health officials refused to even acknowledge how many people in each county had been tested.

State health officials and DeSantis claimed they had to withhold the information to protect patient privacy, but they provided no evidence that stating the number of people tested would reveal personal information.

At the same time, the director of the Hillsborough County branch of the state health department publicly revealed that information to Hillsborough County commissioners.

And during March the state published on a website that wasn’t promoted to the public the ages and genders of those who had been confirmed to be carrying the disease, along with the counties where they claimed residence.

Firing Coronavirus Data Chief

In May, with the media asking about data that revealed the earlier onset of the disease, internal emails show that a department manager ordered the state’s coronavirus data chief to yank the information off the web, even though it had been online for months.

A health department tech supervisor told data manager Rebekah Jones on May 5 to take down the dataset. Jones replied in an email that it was the “wrong call,” but complied, only to be ordered an hour later to put it back.

That day, she emailed reporters and researchers following a listserv she created, saying she had been removed from handling coronavirus data because she refused to manipulate datasets to justify DeSantis’ push to begin reopening businesses and public places.

Two weeks later, the health department fired Jones, who in March had created and maintained Florida’s one-stop coronavirus dashboard, which had been viewed by millions of people, and had been praised nationally, including by White House Coronavirus Task Force Coordinator Deborah Birx.

The dashboard allows viewers to explore the total number of coronavirus cases, deaths, tests and other information statewide and by county and across age groups and genders.

DeSantis claimed on May 21 that Jones wanted to upload bad coronavirus data to the state’s website. In a further attempt to discredit her, he brought up stalking charges made against her by an ex-lover, stemming from a blog post she wrote, which led to two misdemeanor charges.

Using her technical know-how, Jones launched a competing COVID-19 dashboard website, FloridaCOVIDAction.com in early June. After national media covered Jones’ firing and website launch, people donated more than $200,000 to her through GoFundMe to help pay her bills and maintain the website.

People view her site more than 1 million times a day, she said. The website features the same type of data the state’s dashboard displays, but also includes information not present on the state’s site such as a listing of testing sites and their contact information.

Jones also helped launch TheCOVIDMonitor.com to collect reports of infections in schools across the country.

Jones filed a whistleblower complaint against the state in July, accusing managers of retaliating against her for refusing to change the data to make the coronavirus situation look better.

“The Florida Department of Health needs a data auditor not affiliated with the governor’s office because they cannot be trusted,” Jones said Friday.

Florida Hides Death Details

When coronavirus kills someone, their county’s medical examiner’s office logs their name, age, ethnicity and other information, and sends it to the Florida Department of Law Enforcement.

During March and April, the department refused requests to release that information to the public, even though medical examiners in Florida always have made it public under state law. Many county medical examiners, acknowledging the role that public information can play in combating a pandemic, released the information without dispute.

But it took legal pressure from news outlets, including The Post, before FDLE agreed to release the records it collected from local medical examiners.

When FDLE finally published the document on May 6, it blacked out or excluded crucial information such as each victim’s name or cause of death.

But FDLE’s attempt to obscure some of that information failed when, upon closer examination, the seemingly redacted details could in fact be read by common computer software.

Outlets such as Gannett, which owns The Post, and The New York Times, extracted the data invisible to the naked eye and reported in detail what the state redacted, such as the details on how each patient died.
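The failure reflects a common redaction mistake: the underlying text remains embedded in the PDF and is merely drawn over with an opaque rectangle, so ordinary text-extraction software can recover it. As a rough sketch of that general technique (illustrative only; the outlets have not described their exact tools, and the file name here is hypothetical), a few lines of Python with the pypdf library are enough:

```python
# Sketch of recovering text hidden under drawn-on "redaction" boxes.
# Illustrative only: the general technique, not any outlet's actual workflow.
from pypdf import PdfReader  # pip install pypdf

reader = PdfReader("redacted_report.pdf")  # hypothetical file name
for page_number, page in enumerate(reader.pages, start=1):
    # extract_text() reads the page's text layer, which still contains any
    # text that was covered by an opaque rectangle rather than truly removed.
    print(f"--- page {page_number} ---")
    print(page.extract_text())
```

Properly redacted files have the text objects stripped out before publication; when extraction like this succeeds, it generally means the content was only visually obscured.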

Reluctantly Revealing Elder Care Deaths, Hospitalizations

It took a lawsuit against the state filed by the Miami Herald, joined by The Post and other news outlets, before the health department began publishing the names of long-term care facilities with the numbers of coronavirus cases and deaths.

The publication provided the only official source for family members to find out how many people had died of COVID-19 at the long-term care facility housing their loved ones.

While the state agreed to publish the information weekly, it has failed to publish several times and as of Nov. 24 had not updated the information since Nov. 6.

It took more pressure from Florida news outlets to pry from the state government the number of beds in each hospital occupied by coronavirus patients, a figure DeSantis himself has called a key indicator of the disease’s spread.

That was one issue where USF’s Salemi publicly criticized Florida.

“They were one of the last three states to release that information,” he said. “That to me is a problem because it is a key indicator.”

Confusion Over Positivity Rate

One metric DeSantis touted to justify his decision in May to begin reopening Florida’s economy was the so-called positivity rate, which is the share of tests reported each day with positive results.

But Florida’s daily figures contrasted sharply with calculations made by Johns Hopkins University, prompting a South Florida Sun-Sentinel examination that showed Florida’s methodology underestimated the positivity rate.

The state counts each person who tests positive only once, but it counts every negative test a person receives until they test positive, so negative results pile up in the denominator and push the rate down.

Johns Hopkins University, on the other hand, calculated Florida’s positivity rate by comparing the number of people testing positive with the total number of people who got tested for the first time.

By Johns Hopkins’ measure, between 10 and 11 percent of Florida’s tests in October came up positive, compared to the state’s reported rate of between 4 and 5 percent.
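To see how the two counting rules can diverge so sharply, consider a tiny, made-up set of test records (illustrative only, not state data); the sketch below applies both methods:

```python
# Toy illustration (invented records, not Florida data) of the two methodologies.
# Each record is (person_id, result); a person may be tested multiple times.
tests = [
    ("A", "negative"), ("A", "negative"), ("A", "positive"),
    ("B", "negative"), ("B", "negative"),
    ("C", "positive"),
    ("D", "negative"),
]

# State-style: each positive person counted once, but every negative test counted.
positive_people = {person for person, result in tests if result == "positive"}
negative_tests = sum(1 for _, result in tests if result == "negative")
state_rate = len(positive_people) / (len(positive_people) + negative_tests)

# Johns Hopkins-style: people testing positive over all people tested.
people_tested = {person for person, _ in tests}
hopkins_rate = len(positive_people) / len(people_tested)

print(f"State-style positivity:   {state_rate:.1%}")    # 2 / (2 + 5) = 28.6%
print(f"Hopkins-style positivity: {hopkins_rate:.1%}")  # 2 / 4 = 50.0%
```

The more often people are retested with negative results, the further the state-style figure falls below the people-based figure.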

Health experts such as those at the World Health Organization have said a state’s positivity rate should stay below 5 percent for 14 days straight before it considers the virus under control and moves forward with reopening public places and businesses. It’s also an important measure for travelers, who may be required to quarantine if they enter a state with a high positivity rate.

Withholding Detail on Race, Ethnicity

The Post reported in June that tests taken by Black and Hispanic people and in majority-minority ZIP codes were twice as likely to come back positive as tests taken by white people and in majority-white ZIP codes.

That was based on a Post analysis of internal state data the health department will not share with the public.

The state publishes bar charts showing general racial breakdowns but does not report race or ethnicity for each infected person.

If it wanted to, Florida’s health department could publish detailed data that would shed light on the infection rates among each race and ethnicity or each age group, as well as which neighborhoods are seeing high rates of contagion.

Researchers have been trying to obtain this data but “the state won’t release the data without (making us) undergo an arduous data use agreement application process with no guarantee of release of the data,” Adirim said. Researchers must read and sign a 26-page, nearly 5,700-word agreement before getting a chance at seeing the raw data.

While Florida publishes the ages, genders and counties of residence for each infected person, “there’s no identification for race or ethnicity, no ZIP code or city of the residence of the patient,” Adirim said. “No line item count of negative test data so it’s hard to do your own calculation of test positivity.”

While Florida doesn’t explain its reasoning, one concern with releasing such detailed information is the risk of identifying patients, particularly in tiny, non-diverse counties.

Confusion Over Lab Results

Florida’s daily report shows how many positive results come from each laboratory statewide. Except when it doesn’t.

The report has shown for months that 100 percent of COVID-19 tests conducted by some labs have come back positive despite those labs saying that shouldn’t be the case.

While the department reported in July that all 410 results from a Lee County lab were positive, a lab spokesman told The Post the lab had conducted roughly 30,000 tests. Other labs expressed the same confusion when informed of the state’s reporting.

The state health department said it would work with labs to fix the error. But even as recently as Tuesday, the state’s daily report showed positive result rates at or just under 100 percent for some labs whose listings comprised hundreds of tests.

Mistakenly Revealing School Infections

As DeSantis pushed in August for reopening schools and universities for students to attend in-person classes, Florida’s health department published a report showing hundreds of infections could be traced back to schools, before pulling that report from public view.

The health department claimed it published that data by mistake, the Miami Herald reported.

The report showed that COVID-19 had infected nearly 900 students and staffers.

The state resumed school infection reporting in September.

A similar publication of cases at day-care centers appeared online briefly in August, only to be taken down permanently.

Updates Delayed

After shifting in late April to updating the public just once a day at 11 a.m. instead of twice daily, the state met that deadline on most days, and pandemic followers could rely on the predictability, until the schedule began to falter in October.

On Oct. 10, the state published no data at all, not informing the public of a problem until 5 p.m.

The state blamed a private lab for the failure but retracted its statement the next day after the lab disputed the state’s explanation. No further explanation has been offered.

On Oct. 21, the report came out six hours late.

Since Nov. 3, the 11 a.m. deadline has never been met. Now, late afternoon releases have become the norm.

“They have gotten more sloppy and they have really dragged their feet,” Adirim, the FAU scientist, said.

No spokesperson for the health department has answered questions from The Post to explain the lengthy delays. Alberto Moscoso, the spokesman throughout the pandemic, departed without explanation Nov. 6.

The state’s tardiness can trip up researchers tracking the pandemic in Florida, Adirim said: if they miss a late-day update, the department may overwrite it with another the next morning, erasing critical information and undermining scientists’ analyses.

Hired Sports Blogger to Analyze Data

As if to show disregard for concerns raised by scientists, the DeSantis administration brought in a new data analyst who bragged online that he is no expert and doesn’t need to be.

Kyle Lamb, an Uber driver and sports blogger, sees his lack of experience as a plus.

“Fact is, I’m not an ‘expert’,” Lamb wrote on a website for a subscribers-only podcast he hosts about the coronavirus. “I also don’t need to be. Experts don’t have all the answers, and we’ve learned that the hard way throughout the entire duration of the global pandemic.”

Much of his coronavirus writing can be found on Twitter, where he has said masks and mandatory quarantines don’t stop the virus’ spread, and that hydroxychloroquine, a drug touted by President Donald Trump but rejected by medical researchers, treats it successfully.

While DeSantis says lockdowns aren’t effective in stopping the spread and refuses to enact a statewide mask mandate, scientists point out that quarantines and masks are extremely effective.

The U.S. Food and Drug Administration has said hydroxychloroquine is unlikely to help and poses greater risk to patients than any potential benefits.

Coronavirus researchers have called Lamb’s views “laughable,” and fellow sports bloggers have said he tends to act like he knows much about a subject in which he knows little, the Miami Herald reported.

DeSantis has yet to explain how and why Lamb was hired, nor has his office released Lamb’s application for the $40,000-a-year job. “We generally do not comment on such entry level hirings,” DeSantis spokesman Fred Piccolo said Tuesday by email.

It could be worse.

Texas health department workers have to manually enter data they read from paper faxes into the state’s coronavirus tracking system, The Texas Tribune has reported. And unlike Florida, Texas doesn’t require local health officials to report viral data to the state in a uniform way that would make it easier and faster to process and report.

It could be better.

In Wisconsin, health officials report the number of cases and deaths down to the neighborhood level. They also plainly report racial and ethnic disparities, which show the disease hits Hispanic residents hardest.

Still, Salemi worries that Florida’s lack of answers can undermine residents’ faith.

“My whole thing is the communication, the transparency,” Salemi said. “Just let us know what’s going on. That can stop people from assuming the worst. Even if you make a big error people are a lot more forgiving, whereas if the only time you’re communicating is when bad things happen … people start to wonder.”

Missouri’s COVID-19 data reports send ‘dangerous message to the community,’ say health systems

A group of health system leaders in Missouri challenged state-reported hospital bed data, saying it could lead to a misunderstanding about hospital capacity, according to a Nov. 19 report in the St. Louis Business Journal.

A consortium of health systems, including St. Louis-based BJC HealthCare, Mercy, SSM Health and St. Luke’s Hospital, released urgent reports warning that hospital and ICU beds are nearing capacity while state data reports show a much different story.

The state reports, based on data from TeleTracking and the CDC-managed National Healthcare Safety Network, show remaining inpatient hospital bed capacity at 35 percent and remaining ICU bed capacity at 29 percent as of Nov. 19. The consortium, however, reported hospitals were fuller, at 84 percent occupancy as of Nov. 18 and ICUs at 90 percent occupancy, based on staffed-bed availability. The consortium says it is using staffed-bed data while the state’s numbers are based on licensed bed counts; the state contends it does take staffing into account, according to the report.
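As a rough illustration of why the denominator matters, the sketch below uses invented bed counts (not the state’s or the consortium’s actual figures) to show how the same number of occupied beds can look comfortable against licensed beds but near capacity against staffed beds:

```python
# Invented numbers for illustration only; not actual Missouri hospital data.
licensed_beds = 1000  # beds the hospital is licensed to operate
staffed_beds = 700    # beds it can actually staff right now
occupied_beds = 590   # patients currently in beds (COVID-19 and otherwise)

remaining_vs_licensed = 1 - occupied_beds / licensed_beds  # 41% "remaining"
remaining_vs_staffed = 1 - occupied_beds / staffed_beds    # ~16% remaining

print(f"Remaining capacity vs. licensed beds: {remaining_vs_licensed:.0%}")
print(f"Remaining capacity vs. staffed beds:  {remaining_vs_staffed:.0%}")
```

Whether the two bases fully explain the reported gap is contested; as noted above, the state says its figures already account for staffing.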

Stephanie Zoller Mueller, a spokesperson for the consortium, said the discrepancy between the state’s data and consortium’s data could create a “gross misunderstanding on the part of some and can be a dangerous message to the community.”