Are healthcare jobs safe from AI? More so than many might think

No occupation will be unaffected by the technology, but healthcare will be affected less than other industries, owing much to its inherent complexity

Across the country and across industries, workers are nervous that automation and artificial intelligence will eventually take over their jobs. For some, those fears may be grounded in reality.

Healthcare, however, looks like it will be largely safe from that trend, a new report from the Brookings Metropolitan Policy Program finds.

Examining the period from the 1980s to 2016, the report tracks the historical evolution of the technology and uses those findings to project forward to 2030.

The verdict? AI will replace jobs in various industries, but not so much in healthcare.


AI is projected to be an increasingly common form of automation, and the report claims the effects should be manageable in the aggregate labor market. Uncertainty remains, of course, and the effects will vary greatly — across geography, demographics and occupations.

Overall, though, only about 25 percent of U.S. jobs are at a high risk of replacement by automation. That translates to about 36 million jobs, based on 2016 data.

A higher percentage, 36 percent, are at medium risk (52 million jobs) while the largest group is the low-risk group, at 39 percent (57 million jobs).

Most of healthcare belongs in the medium-to-low categories, largely owing to the complexity of healthcare jobs. Still, the risk varies widely. Medical assistants have what the report calls “automation potential” of 54 percent, but home health aides have just an 8 percent automation potential. Registered nurses sit somewhere in between.

For healthcare support occupations, the number is closer to 49 percent; healthcare practitioners and technical jobs have 33 percent automation potential.


The report emphasizes that while some occupations will be safer from automation than others, no industry will be totally unaffected. Mundane tasks will be the most vulnerable.

Fortunately for those in the industry, there’s little in healthcare that’s mundane. AI and machine learning algorithms tend to rely on large quantities of data to be effective, and that data needs human hands to collect it and human eyes to analyze it.

And since AI in healthcare is currently used mainly to aggregate and organize data — looking for trends and patterns and making recommendations — a human component is very much needed. Several experts share that opinion, pointing out that empathy and reasoning skills are required in the field.



PwC names 6 healthcare issues to watch in 2019


PwC’s Health Research Institute believes 2019 is the year the “New Health Economy” will finally become a reality.

The past year marked record interest in the healthcare industry, especially from outside forces like venture capitalists and business giants like Amazon, Berkshire Hathaway and JP Morgan Chase. PwC believes forces like these mean healthcare will no longer be an “outlier” industry that operates in its own world outside the greater U.S. economy.

In its 13th annual report, PwC’s HRI identified the following six healthcare trends to watch in 2019:

1. With an injection of $12.5 billion from investors over the past two years, PwC expects connected health devices and digital therapies to become integrated into care delivery and the regulatory process for drug and device approvals. PwC expects several new products to come to market in this category in 2019. What does this mean for providers? They will need to find a way to integrate this data into the EHR so it can be used to maximize the patient visit.  

2. Artificial intelligence and automation will require healthcare organizations to invest in and train their workforce to succeed in a digital economy. Almost half (45 percent) of executives surveyed by PwC’s HRI said skill deficiencies among their workforce are holding their organization back, yet few employers are offering training in AI, robotics and automation or data analytics.

3. The 2017 Tax Cuts and Jobs Act will continue to create tax savings for healthcare organizations while creating new challenges. Providers are likely to feel the biggest challenges via changes to unrelated business taxable income, which could create new expenses. Academic medical centers may also feel minor negative pressure from the net investment excise tax on educational foundations.

4. The healthcare industry is ready for its own budget airline provider. It needs a disruptor that is low-cost, transparent, informed by technology and “laser-focused on the consumer” like Southwest Airlines, according to PwC. Organizations that answer this call are starting to emerge — like a profitable, Medicaid-focused, walk-in-only family medicine practice in Denver — but progress is slow and there isn’t one simple formula to follow. PwC advises healthcare organizations to look for patient segments that need a “budget airline” and determine how to meet those needs.

5. The pace of private equity investment is expected to accelerate as healthcare companies continue to divest noncore business units to investors next year. PwC also expects PE-healthcare partnerships to evolve, with some healthcare companies co-investing in their own spinoffs. PwC suggested healthcare organizations pursue PE partnerships not only for financing, but also for PE firms’ ability to provide strategic views of trends across their portfolio of investments.

6. Republican changes to the ACA will shift the law’s winners and losers. Providers are on the losing end of most of these changes, including softened insurance mandates, short-term health insurance plans, less federal support for ACA exchanges and reduced federal Medicaid spending, according to the report.




What goes into a CFO’s dashboard for artificial intelligence and machine learning

Artificial intelligence and machine learning can be leveraged to improve healthcare outcomes and costs — here’s how to monitor AI.

The use of artificial intelligence in healthcare is still nascent in some respects. Machine learning shows potential to leverage AI algorithms in a way that can improve clinical quality and even financial performance, but the data picture in healthcare is pretty complex. Crafting an effective AI dashboard can be daunting for the uninitiated.

A balance needs to be struck: Harnessing myriad and complex data sets while keeping your goals, inputs and outputs as simple and focused as possible. It’s about more than just having the right software in place. It’s about knowing what to do with it, and knowing what to feed into it in order to achieve the desired result.

In other words, you can have the best, most detailed map in the world, but it doesn’t matter if you don’t have a compass.


Jvion Chief Product Officer John Showalter, MD, said the most important thing an AI dashboard can do is drive action. That means simplifying the outputs: perhaps two of the components are AI-driven, and the rest is the information an organization would need to make a decision.

He’s also a proponent of color coding or iconography to simplify large amounts of information — basic measures that allow people to understand the information very quickly.

“And then to get to actionability, you need to integrate data into the workflow, and you should probably have single sign-on activity to reduce the login burden, so you can quickly look up the information when you need it without going through 40 steps.”

According to Eldon Richards, chief technology officer at Recondo Technology, there have been a number of breakthroughs in AI over the years, such that machine learning and deep learning are often matching, and sometimes exceeding, human capability for certain tasks.

What that means is that dashboards and related software are able to automate things that, as of a few years ago, weren’t feasible with a machine — things like reading radiology images or diagnosing certain types of cancer.

“When dealing with AI today, that mostly means machine learning. The data vendor trains the model on your needs to match the data you’re going to feed into the system in order to get a correct answer,” Richards said. “An example would be if the vendor trained the model on hospitals that are not like my hospital, and payers unlike who I deal with. They could produce very inaccurate numbers. It won’t work for me.”

A health system would also want to pay close attention to the ways in which AI can fail. The technology can still be a bit fuzzy at times.

“Sometimes it’s not going to be 100 percent accurate,” said Richards. “Humans wouldn’t be either, but it’s the way they fail. AI can fail in ways that are more problematic — for example, if I’m trying to predict cancer, and the algorithm says the patient is clean when they’re not, or it might be cancer when it’s not. In terms of the dashboard, you want to categorize those types of values on data up front, and track those very closely.”
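The failure modes Richards describes map onto a standard confusion matrix: a missed cancer is a false negative, a false alarm is a false positive. A minimal sketch of categorizing and counting those outcomes up front — all names here are illustrative, not from any specific product — might look like:

```python
def confusion_counts(predictions, actuals):
    """Count outcome types for a binary classifier (e.g. cancer / no cancer)."""
    counts = {"true_pos": 0, "true_neg": 0, "false_pos": 0, "false_neg": 0}
    for pred, actual in zip(predictions, actuals):
        if pred and actual:
            counts["true_pos"] += 1
        elif not pred and not actual:
            counts["true_neg"] += 1
        elif pred and not actual:
            counts["false_pos"] += 1   # "it might be cancer when it's not"
        else:
            counts["false_neg"] += 1   # "the patient is clean when they're not"
    return counts

# A clinical dashboard would surface false negatives most prominently,
# since missing a real case is the more problematic failure.
counts = confusion_counts([1, 0, 1, 0, 1], [1, 0, 0, 1, 1])
```

Tracking the two error types separately, rather than a single accuracy number, is what lets a dashboard flag the failures that matter most clinically.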


Generally speaking, you want a key performance indicator based around effectiveness. You want a KPI around usage. And you want some kind of KPI that tracks efficiency — Is this saving us time? Are we getting the most bang for the buck?

The revenue cycle offers a relevant example, where the dashboard can be trained to look at something like denials. KPIs that track the efficiency of denials, and the total denials resolved with a positive outcome, can help health systems determine what percentage of the denials were fixed, and how many they got paid for. This essentially tracks the time, effort, and ultimately the efficacy of the AI.
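The three KPI families above — effectiveness, usage-driven outcomes, and efficiency — reduce to a few ratios in the denials example. A hypothetical sketch (the field names are assumptions, not any real revenue-cycle schema):

```python
def denial_kpis(denials):
    """Compute denial-management KPIs from a list of worked denials.

    Each denial is a dict with hypothetical fields:
      resolved (bool), paid_amount (float), minutes_worked (float).
    """
    total = len(denials)
    resolved = [d for d in denials if d["resolved"]]
    return {
        # effectiveness: what share of denials were fixed
        "resolution_rate": len(resolved) / total if total else 0.0,
        # outcome: how much of the denied revenue was recovered
        "recovered_dollars": sum(d["paid_amount"] for d in resolved),
        # efficiency: staff time spent per successful resolution
        "minutes_per_resolution": (
            sum(d["minutes_worked"] for d in denials) / len(resolved)
            if resolved else float("inf")
        ),
    }

kpis = denial_kpis([
    {"resolved": True, "paid_amount": 1200.0, "minutes_worked": 30},
    {"resolved": False, "paid_amount": 0.0, "minutes_worked": 45},
    {"resolved": True, "paid_amount": 800.0, "minutes_worked": 15},
])
```

The same three-ratio structure carries over to any other process the AI touches; only the field definitions change.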

“You start with your biggest needs,” said Showalter. “You talk about sharing outcomes — what are we all working toward, what can we all agree on?”

“Take falls as an example,” Showalter added. “The physician maybe will care about the biggest number of falls, and the revenue cycle guy will care about that and the cost associated with those falls. And maybe the doctors and nurses are less concerned about the costs, but everybody’s concerned about the falls, so that becomes your starting point. Everyone’s focused on the main outcome, and then the sub-outcomes depend on the role.”

It’s that focus on specific outcomes that can truly drive the efficacy of AI and machine learning. Dr. Clemens Suter-Crazzolara, vice president of product management for health and precision medicine at SAP, said it’s helpful to parse data into what he called limited-scope “chunks” — distinct processes a provider would like to tackle with the help of artificial intelligence.

Say a hospital’s focus is preventing antibiotic resistance. “What you then start doing,” said Suter-Crazzolara, “is you say, ‘I have these patients in the hospital. Let’s say there’s a small-scale epidemic. Can I start collecting that data and put that in an AI methodology to make a prediction for the future?’ And then you determine, ‘What is my KPI to measure this?’

“By working on a very distinct scenario, you then have to put in the KPIs,” he said.

PeriGen CEO Matthew Sappern said a good litmus test for whether a health system is integrating AI in an effective way is whether it can be proven that its outcomes are as good as those of an expert. Studies that show the system can generate the same answers as a panel of experts can go a long way toward helping adoption.

The reason that’s so important, he said, is that the accuracy of the tools can be all over the place. The engine is only as good as the data you put into it, and the more data, the better. That’s where electronic health records have been a boon; they’ve generated a huge amount of data.

Even then, though, there can be inconsistencies, and so some kind of human touch is always needed.

“At any given time, something is going on,” said Sappern. “To assume people are going to document in 30-second increments is kind of crazy. So a lot of times nurses and doctors go back and try to recreate what’s on the charts as best they can.

“The problem is that when you go back and do chart reviews, you see things that are impossible. As you curate this data, you really need to have an expert. You need one or two very well-seasoned physicians or radiologists to look for these things that are obviously not possible. You’d be surprised at the amount of unlikely information that exists in EMRs these days.”

Having the right team in place is essential, all the more so because of one of the big misunderstandings around AI: That you can simply dump a bunch of data into a dashboard, press a button, and come back later to see all of its findings. In reality, data curation is painstaking work.

“Machine learning is really well suited to specific challenges,” said Sappern. “It’s got great pattern recognition, but as you are trying to perform tasks that require a lot of reasoning or a lot of empathy, currently AI is not really great at that.

“Whenever we walk into a clinical setting, a nurse or a number of nurses will raise their hands and say, ‘Are you telling me this machine can predict the risk of stroke better than I can?’ And the immediate answer is absolutely not. Every single second the patient is in bed, we will persistently look out for those patterns.”

Another area in which a human touch is needed is in the area of radiological image interpretation. The holy grail, said Suter-Crazzolara, would be to have a supercomputer into which one could feed an x-ray from a cancer patient, and which would then identify the type of cancer present and what the next steps should be.

“The trouble is,” said Suter-Crazzolara, “there’s often a lack of annotated data. You need training sets with thousands of prostate cancer types on these images. The doctor has to sit down with the images and identify exactly what the tumors look like in those pictures. That is very, very hard to achieve.

“Once you have that well-defined, then you can use machine learning and create an algorithm that can do the work. You have to be very, very secure in the experimental setup.”


It’s possible for machine learning to continue to learn the more an organization uses the system, said Richards. Typically, the AI dashboard would provide an answer back to the user, and the user would note anything that’s not quite accurate and correct it, which provides feedback for the software to improve going forward. Richards recommends a dashboard that shows failure rate trends; if it’s doing its job, the failure rate should improve over time.
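The feedback loop Richards describes — users correct inaccurate answers, and the dashboard shows whether the failure rate is trending down — can be sketched as a simple rolling metric. This is an illustrative sketch, not any vendor's implementation:

```python
from collections import deque

class FailureRateTrend:
    """Rolling failure rate over the last N user-reviewed predictions.

    A dashboard widget like this shows whether corrections fed back to
    the model are actually improving it over time.
    """

    def __init__(self, window=100):
        self.window = deque(maxlen=window)  # oldest reviews fall off automatically

    def record(self, was_corrected):
        """Record one reviewed prediction; True means the user had to correct it."""
        self.window.append(1 if was_corrected else 0)

    def failure_rate(self):
        if not self.window:
            return 0.0
        return sum(self.window) / len(self.window)

trend = FailureRateTrend(window=4)
for corrected in [True, False, True, False, False, False]:
    trend.record(corrected)
# With window=4, only the last four reviews count: one correction in four.
```

Plotting this rate over successive windows gives exactly the trend line Richards recommends: if the loop is working, it should slope downward.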

“AI is a means to an end,” he said. “Stepping back a little bit, if I’m designing a dashboard I might also map out what functions I would apply AI to, and what the coverage looks like. Maybe a heat map showing how I’m doing in cost per transaction.”

Suter-Crazzolara sees these dashboards as the key to creating an intelligent enterprise, because they allow providers to innovate and look at data in new ways, which can aid everything from the diagnosis of dementia to detecting fraud and cutting down on supply chain waste.

“AI is at a stage that is very opportune,” he said, “because artificial intelligence and machine learning have been around for a long time, but at the moment we are in this era of big data, so every patient is associated with a huge amount of data. We can unlock this big data much better than in the past because we can create a digital platform that makes it possible to connect and unlock the data, and collaborate on the data. At the moment, you can build very exciting algorithms on top of the data to make sense of that information.”


If a health system decides to tap a vendor to handle its AI and machine learning needs, there are certain things to keep in mind. Typically, vendors will already have models created from certain data sets, which allows the software to perform a function that was learned from that data. If a vendor trained a model with a hospital whose characteristics differ from your own, there can be big differences in the efficacy of those models.

Richards suggested reviewing what data the vendor used to train its model, and discussing how much data the vendor needs to construct a model with the utmost accuracy. He also suggests talking to the vendor to understand how well they know your particular space.

“In most cases I think they’ve got a good handle on the technology itself, but they need to know the space and the nuances of it,” said Richards. He would interview them to make sure he was comfortable with their depth of knowledge.

That will ensure the technology works as effectively as possible — an important consideration, since AI likely isn’t going away anytime soon.

“We’re seeing not just the hype, but we’re definitely seeing some valuable results coming,” said Richards. “We’re still somewhat at the beginning of that. Breakthroughs in the space are happening every day.”