Facing a “new normal” of higher labor costs

https://mailchi.mp/161df0ae5149/the-weekly-gist-december-10-2021?e=d1e747d2d8

Attending a recent executive retreat with one of our member health systems, we heard the CEO make a statement that really resonated with us. Referring to the current workforce crisis—pervasive shortages, pressure to increase compensation, outsized reliance on contract labor to fill critical gaps—the CEO made the assertion that this situation isn’t temporary. Rather, it’s the “new normal”, at least for the next several years.

The Great Resignation that’s swept across the American economy in the wake of COVID has not spared healthcare; every system we talk to is facing alarmingly high vacancy rates as nurses, technicians, and other staff head for the exits. The CEO made a compelling case that the system’s labor cost structure has reset at a level 20 to 30 percent higher than before the pandemic, and that executives should begin to shift attention from stop-gap measures (retention bonuses and the like) to more permanent solutions (rethinking care models, adjusting staffing ratios upward, implementing process automation).

That seemed like an important insight to us. It’s increasingly clear as we approach a third year of the pandemic: there is no “post-COVID world” in which things will go back to normal. Rather, we’ll have to learn to live in the “new normal,” revisiting basic assumptions about how, where, and by whom care is delivered.

If hospital labor costs have indeed permanently reset at a higher level, that implies the need for a radical restructuring of the health system’s fundamental economic model: razor-thin margins won’t allow business to continue as usual. Long overdue, perhaps, and a painful evolution for sure—but one that could bring the industry closer to the vision of “right care, right place, right time” promised by population health advocates for over a decade.

Bringing bots into the health system

https://mailchi.mp/95e826d2e3bc/the-weekly-gist-august-28-2020?e=d1e747d2d8

This week we hosted a member webinar on an application of artificial intelligence (AI) that’s generating a lot of buzz these days in healthcare—robotic process automation (RPA).

That bit of tech jargon translates to finding repetitive, often error-prone tasks performed by human staff, and implementing “bots” to perform them instead. The benefit? Fewer mistakes, the ability to redeploy talent to less “mindless” work (often with the unexpected benefit of improving employee engagement), and the potential to capture substantial efficiencies. That last feature makes RPA especially attractive in the current environment, in which systems are looking for any assistance in lowering operating expenses. 

Typical processes where RPA can be used to augment human staff include revenue cycle tasks like managing prior authorization, simplifying claims processing, and coordinating patient scheduling. Indeed, the health insurance industry is far ahead of the provider community in implementing these machine-driven approaches to productivity improvement.
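
To make the idea concrete, here is a minimal sketch of the kind of repetitive check an RPA “bot” might take over: flagging claims that require prior authorization but are missing an authorization number. The file name, column names, and rule are hypothetical illustrations, not MemorialCare’s or Olive.ai’s actual workflow.

```python
import csv

def flag_missing_prior_auth(claims_file: str) -> list[dict]:
    """Return claim rows that require prior authorization but have none recorded.

    Hypothetical example of a repetitive revenue-cycle check an RPA "bot"
    could perform; the file layout and column names are illustrative only.
    """
    flagged = []
    with open(claims_file, newline="") as f:
        for row in csv.DictReader(f):
            needs_auth = (row.get("procedure_requires_auth") or "").strip().lower() == "yes"
            has_auth = bool((row.get("prior_auth_number") or "").strip())
            if needs_auth and not has_auth:
                flagged.append(row)  # route back to staff before the claim is submitted
    return flagged

if __name__ == "__main__":
    for claim in flag_missing_prior_auth("claims_export.csv"):
        print(f"Claim {claim.get('claim_id', '?')} is missing a prior-authorization number")
```

In a real deployment the same logic would live inside an RPA platform’s workflow rather than a standalone script, but the pattern is the same: encode the rule once and run it consistently, freeing staff from the manual check.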

We heard early “lessons learned” from one member system, Fountain Valley, CA-based MemorialCare, which has been working with Columbus, OH-based Olive.ai, which bills itself as the only “AI as a service” platform built exclusively for healthcare.

Listening to their story, we were particularly struck by the fact that RPA is far more than “just” another IT project with an established start and finish, but rather an ongoing strategic effort. MemorialCare has been particularly thoughtful about involving senior leaders in finance, operations, and HR in identifying and implementing their RPA strategy, making sure that cross-functional leaders are “joined at the hip” to manage what could prove to be a truly revolutionary technology.

Having identified scores of potential applications for RPA, they’re taking a deliberate approach to rollout for the first dozen or so applications. One critical step: ensuring that processes are “optimized” (via lean or other process improvement approaches) before they are “automated”. MemorialCare views RPA implementation as an opportunity to catalyze the organization for change—“It’s not often that one solution can help push the entire system forward,” in the words of one senior system executive.

We’ll be keeping an eye on this burgeoning space for interesting applications, as health systems identify new ways to deploy “the bots” across the enterprise.

Did a high-profile program really slash hospital spending? Or was it a cautionary tale of ‘regression to the mean’?

In the late 19th century, English polymath Sir Francis Galton noted that tall parents often had kids shorter than they were, while short parents often ended up with taller kids. He dubbed this regression to the mean — when something measured as extreme in a first instance is likely to be measured as less extreme later on.
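
Galton’s observation has a compact statistical form. As a minimal sketch, assuming two measurements Y1 and Y2 of the same unit (a parent’s and a child’s height, or a patient’s hospital use in two successive periods) are jointly normal with common mean μ, common variance, and correlation ρ:

\[
\mathbb{E}[\,Y_2 \mid Y_1 = y\,] \;=\; \mu + \rho\,(y - \mu), \qquad |\rho| < 1 .
\]

Because |ρ| < 1, a unit measured far above (or below) the mean the first time is expected to land closer to the mean the second time, even if nothing about the unit has changed.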

That concept has important implications for health care policy today. One of them is that more health policymakers and health care researchers should use randomized evaluations, which avoid the problem of regression to the mean when estimating the effects of policies.

In the U.S. health care system, the very highest-cost patients — known as super-utilizers — have been a focus of attention. That is because this 1% of patients accounts for almost 25% of all U.S. health care spending. A spate of high-profile studies has reported dramatic reductions in health care spending from programs designed to keep super-utilizers out of the hospital through various means, such as coordinating their outpatient care and coaching them on managing their conditions and medications.

This work raises an important question: Does hospital use decline because of the programs or, due to regression to the mean, because high-use patients are likely to use care less in the future?

Several colleagues and I set out to answer that question in partnership with the Camden Coalition of Healthcare Providers. It had created a comprehensive health care delivery model that aims to meet the medical and social services needs of very high-use patients who have had at least two hospital admissions in the last six months and two or more chronic conditions, among other criteria. The coalition has been widely heralded as a promising approach for reducing costs and improving health. Dr. Atul Gawande profiled the program in the New Yorker and the coalition’s founder won a MacArthur “genius grant.”

As a data-driven, learning organization, the coalition did not want to rest on its considerable laurels. To learn what its program was doing — and innovate based on the findings — it partnered with our research team to conduct a randomized controlled trial (RCT).

We randomly assigned patients who were eligible and who consented to participate to receive either the coalition’s program or status quo care. Randomization ensured that, at the start of the program, these two groups were similar. That way, the outcomes observed in the control group would tell us what would have happened over time in the intervention group in the absence of the program.

When we looked at patients in the intervention group, the results of the Camden Coalition’s program looked very encouraging: Participants in this group visited the hospital about 40% less in the six months after the intervention. But as we report in this week’s New England Journal of Medicine, we saw the same decline in hospital use among those in the control group. These results tell us that the improvements we saw in the intervention group were the result of regression to the mean, not the coalition’s program.
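
The pattern is easy to reproduce in a toy simulation. The sketch below uses illustrative parameters only (not the Camden data): patients’ admissions fluctuate around a stable patient-level rate, only those with at least two admissions in a six-month “baseline” window are enrolled, and they are randomized to two arms with no treatment effect at all.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative-only simulation of regression to the mean in a "super-utilizer"
# program; all parameters are made up and are not the Camden Coalition's data.
n_patients = 200_000
patient_rate = rng.gamma(shape=1.0, scale=1.0, size=n_patients)  # stable per-6-month admission rate

baseline = rng.poisson(patient_rate)   # admissions in the 6 months before enrollment
followup = rng.poisson(patient_rate)   # admissions in the 6 months after; nothing changed

eligible = baseline >= 2                  # enroll only high users, echoing the program's criterion
treated = rng.random(n_patients) < 0.5    # coin-flip randomization; "treatment" does nothing here

for arm, mask in [("intervention", eligible & treated), ("control", eligible & ~treated)]:
    drop = 1 - followup[mask].mean() / baseline[mask].mean()
    print(f"{arm}: baseline {baseline[mask].mean():.2f}, "
          f"follow-up {followup[mask].mean():.2f}, drop {drop:.0%}")
```

Both arms show a large pre-post drop even though the “treatment” does nothing; the honest estimate of the program’s effect is the difference between the two arms’ follow-up means, which is roughly zero here. That is the signature of regression to the mean.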

These results offer an important lesson: We wouldn’t have accurately measured the intervention’s impact if we hadn’t done a randomized controlled trial.

Since we learn more from RCTs than just the impact of an intervention on a single outcome, finding no effect doesn’t mean the end of the road. In the Camden Coalition trial, our results suggest that existing systems poorly serve the complex needs of the coalition’s patients. The Camden group (and others) are now exploring more comprehensive models for providing that care.

Regression to the mean isn’t unique to health care, but it is a particularly salient concern for studies of health care programs that are often (and understandably) implemented in response to extreme signals like advanced disease, high expenditures, or excessive prescribing. Fortunately, when randomized controlled trials are feasible and ethical, they provide a way to determine the effect of a program free from concerns about regression to the mean and other biases.

Concern about excessive prescribing presents another example where regression to the mean may lead to spurious findings but where an RCT can provide clear results. The Centers for Medicare and Medicaid Services recently partnered with researchers to conduct randomized evaluations of interventions designed to curb overprescribing of Seroquel, an antipsychotic drug. The researchers found that sending strongly worded letters that compared high prescribers’ behavior to their peers’ reduced overprescribing by 11%.

We can be confident that the letters are what caused the reduction in prescribing — rather than just regression to the mean (today’s extreme prescribers are less likely to be as extreme tomorrow) — because the trial included a randomized control group of prescribers who received only standard CMS outreach.
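
The logic of that comparison can be written down in a few lines. The sketch below uses invented numbers, not the study’s data: with prescribers randomized to the letter or to standard outreach, the effect is estimated from the difference between the two arms after the intervention, not from each arm’s change relative to its own extreme baseline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy post-period prescription counts; the arm sizes and rates are invented,
# not the CMS study's data.
letter = rng.poisson(lam=80, size=2500)     # hypothetical peer-comparison-letter arm
standard = rng.poisson(lam=90, size=2500)   # hypothetical standard-outreach arm

effect = letter.mean() - standard.mean()
se = np.sqrt(letter.var(ddof=1) / letter.size + standard.var(ddof=1) / standard.size)
print(f"estimated effect: {effect:.1f} prescriptions per prescriber "
      f"(95% CI {effect - 1.96 * se:.1f} to {effect + 1.96 * se:.1f})")
```

Because both arms are drawn from the same pool of high prescribers, whatever regression to the mean occurs shows up equally in both and cancels out of the difference.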

That study also shows how we can build on and learn from any finding, whether it is positive, negative, or null. The CMS overprescribing study built on a prior randomized controlled trial that found the original peer-comparison letters CMS had been regularly sending did not reduce prescribing of controlled substances. As a result, the researchers and CMS drew on psychological and other research to devise a different kind of letter, sent to a different set of providers, which did reduce prescribing.

Randomized controlled trials can be used to study programs and policies across the health care industry. In my experience leading J-PAL North America’s U.S. Health Care Delivery Initiative, which funds and conducts randomized controlled trials of health care delivery interventions, RCTs have shed light on issues such as whether clinical decision support alerts reduce inappropriate medical imaging orders and whether nudges improve consumers’ choices of health insurance. And there are ongoing RCTs of many more interventions, including food as medicine, home visits by nurses, and opioid buyback programs.

J-PAL North America is part of a growing movement of health systems, payers, providers, and others that are using randomized controlled trials to test and learn, whether through evaluations of whole programs or quick process improvements. Researchers at NYU Langone Health use rapid-cycle randomized tests aimed at quickly evaluating simple process improvements to encourage best practices. That one medical center launched 10 trials in its first year alone and hopes to launch dozens more.

Finding solutions to address the complex medical and social needs of patients is a pressing issue. Yet all too often we don’t rigorously evaluate these solutions, which hurts the patients we could be helping. Randomized controlled trials are essential tools for helping us learn, adapt, and move forward on innovative solutions that make people’s lives better.