Top 20 smart hospitals in the world, ranked by Newsweek

World's Best Smart Hospitals 2021

Rochester, Minn.-based Mayo Clinic was named the best smart hospital in the world in 2021 by Newsweek.

For the list, the magazine partnered with consumer research company Statista to find the 250 hospitals that best equip themselves for success with technology. Newsweek said the hospitals on the list are the ones to watch as they “lead in their use of [artificial intelligence], robotic surgery, digital imaging, telemedicine, smart buildings, information technology infrastructure and EHRs.”

The ranking, published June 9, is based on a survey that included recommendations from national and international sources in five categories: digital surgery, digital imaging, AI, telehealth and EHRs. 

The top 20 smart hospitals in the world:

1. Mayo Clinic

2. The Johns Hopkins Hospital (Baltimore) 

3. Cleveland Clinic

4. The Mount Sinai Hospital (New York City) 

5. Massachusetts General Hospital (Boston)

6. Brigham and Women’s Hospital (Boston)

7. Cedars-Sinai (Los Angeles)

8. Karolinska Universitetssjukhuset (Solna, Sweden)

9. MD Anderson Cancer Center (Houston)

10. Charité-Universitätsmedizin Berlin

11. Memorial Sloan Kettering Cancer Center (New York City)

12. Houston Methodist Hospital

13. Sheba Medical Center (Ramat Gan, Israel)

14. NewYork-Presbyterian Hospital (New York City)

15. Beth Israel Deaconess Medical Center (Boston)

16. Boston Medical Center 

17. Abbott Northwestern Hospital (Minneapolis)

18. Stanford (Calif.) Health Care

19. Aarhus Universitetshospital (Aarhus, Denmark)

20. AP-HP-Hôpital Européen Georges Pompidou (Paris)

In scramble to respond to Covid-19, hospitals turned to models with high risk of bias


Of 26 health systems surveyed by MedCity News, nearly half used automated tools to respond to the Covid-19 pandemic, but none of them were regulated. Even as some hospitals continued using these algorithms, experts cautioned against their use in high-stakes decisions.

A year ago, Michigan Medicine faced a dire situation. In March of 2020, the health system predicted it would have three times as many patients as its 1,000-bed capacity — and that was the best-case scenario. Hospital leadership prepared for this grim prediction by opening a field hospital in a nearby indoor track facility, where patients could go if they were stable, but still needed hospital care. But they faced another predicament: How would they decide who to send there?

Two weeks before the field hospital was set to open, Michigan Medicine decided to use a risk model developed by Epic Systems to flag patients at risk of deterioration. Patients were given a score of 0 to 100, intended to help care teams determine if they might need an ICU bed in the near future. Although the model wasn’t developed specifically for Covid-19 patients, it was the best option available at the time, said Dr. Karandeep Singh, an assistant professor of learning health sciences at the University of Michigan and chair of Michigan Medicine’s clinical intelligence committee. But there was no peer-reviewed research to show how well it actually worked.

Researchers tested it on over 300 Covid-19 patients between March and May. They were looking for scores that would indicate when patients would need to go to the ICU, and whether there was a point below which patients almost certainly wouldn’t need intensive care.

“We did find a threshold where if you remained below that threshold, 90% of patients wouldn’t need to go to the ICU,” Singh said. “Is that enough to make a decision on? We didn’t think so.”
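
What Singh describes amounts to a negative-predictive-value check: for each candidate cutoff, measure what fraction of patients at or below it never needed intensive care. Below is a minimal sketch of that kind of retrospective threshold sweep, using synthetic stand-in data rather than Michigan Medicine’s actual cohort (the Epic index itself is proprietary).

```python
import numpy as np

def frac_avoiding_icu(scores, needed_icu, threshold):
    """Among patients scoring at or below `threshold`, the fraction who
    never ended up needing the ICU (a negative-predictive-value check)."""
    scores = np.asarray(scores)
    needed_icu = np.asarray(needed_icu, dtype=bool)
    below = scores <= threshold
    if not below.any():
        return float("nan")
    return float((~needed_icu[below]).mean())

# Synthetic stand-in for a retrospective cohort of ~300 patients.
rng = np.random.default_rng(0)
scores = rng.integers(0, 101, 300)                  # 0-100 index scores
needed_icu = scores + rng.normal(0, 20, 300) > 70   # noisy "ground truth"

# Sweep candidate cutoffs and report how safe each one looks.
for t in range(10, 65, 5):
    print(f"threshold {t:>2}: {frac_avoiding_icu(scores, needed_icu, t):.2f}")
```

Even a cutoff that clears 90% still means one patient in ten below it would go on to deteriorate, which is why the team was reluctant to act on the score alone.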

But if the number of patients were to far exceed the health system’s capacity, it would be helpful to have some way to assist with those decisions.

“It was something that we definitely thought about implementing if that day were to come,” he said in a February interview.

Thankfully, that day never came.

The survey
Michigan Medicine is one of 80 hospitals contacted by MedCity News between January and April in a survey of decision-support systems implemented during the pandemic. 
Of the 26 respondents, 12 used machine learning tools or automated decision systems as part of their pandemic response. Larger hospitals and academic medical centers used them more frequently.

Faced with scarcities in testing, masks, hospital beds and vaccines, several of the hospitals turned to models as they prepared for difficult decisions. The deterioration index created by Epic was one of the most widely implemented — more than 100 hospitals are currently using it — but in many cases, hospitals also formulated their own algorithms.

They built models to predict which patients were most likely to test positive when shortages of swabs and reagents backlogged tests early in the pandemic. Others developed risk-scoring tools to help determine who should be contacted first for monoclonal antibody treatment, or which Covid patients should be enrolled in at-home monitoring programs.

MedCity News also interviewed hospitals on their processes for evaluating software tools to ensure they are accurate and unbiased. Currently, the FDA does not require some clinical decision-support systems to be cleared as medical devices, leaving the developers of these tools and the hospitals that implement them responsible for vetting them.

Among the hospitals that published efficacy data, some of the models were only evaluated through retrospective studies. This can pose a challenge in figuring out how clinicians actually use them in practice, and how well they work in real time. And while some of the hospitals tested whether the models were accurate across different groups of patients — such as people of a certain race, gender or location — this practice wasn’t universal.
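
The subgroup check described here is simple to express, which makes its absence in many evaluations notable. A minimal sketch, assuming a hypothetical patient table with invented column names for the model’s score, the observed outcome and a demographic attribute:

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def auc_by_group(df: pd.DataFrame, score_col: str, outcome_col: str, group_col: str):
    """Compute discrimination (AUC) separately for each subgroup.

    Large gaps between groups are a red flag that the model works better
    for some patients than others, even if its overall AUC looks fine.
    """
    results = {}
    for group, sub in df.groupby(group_col):
        if sub[outcome_col].nunique() == 2:   # AUC needs both outcome classes
            results[group] = roc_auc_score(sub[outcome_col], sub[score_col])
    return results

# Hypothetical usage, with invented column names:
#   auc_by_group(patients, "risk_score", "deteriorated", "race")
# then repeat with group_col="sex", "zip_code", and so on.
```

A model can look well calibrated overall while discriminating poorly for one group, so an aggregate metric alone can hide exactly the failures described here.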

As more companies spin up these models, researchers cautioned that they need to be designed and implemented carefully, to ensure they don’t yield biased results.

An ongoing review of more than 200 Covid-19 risk-prediction models found that the majority had a high risk of bias, meaning the data they were trained on might not represent the real world.

“It’s that very careful and non-trivial process of defining exactly what we want the algorithm to be doing,” said Ziad Obermeyer, an associate professor of health policy and management at UC Berkeley who studies machine learning in healthcare. “I think an optimistic view is that the pandemic functions as a wakeup call for us to be a lot more careful in all of the ways we’ve talked about with how we build algorithms, how we evaluate them, and what we want them to do.”

Algorithms can’t be a proxy for tough decisions
Concerns about bias are not new to healthcare. In a paper published two years ago, Obermeyer found that a tool used by several hospitals to prioritize high-risk patients for additional care resources was biased against Black patients. By equating patients’ health needs with the cost of care, the developers built an algorithm that yielded discriminatory results.

More recently, a rule-based system developed by Stanford Medicine to determine who would get the Covid-19 vaccine first ended up prioritizing administrators and doctors who were seeing patients remotely, leaving out most of its 1,300 residents who had been working on the front lines. After an uproar, the university attributed the errors to a “complex algorithm,” though there was no machine learning involved.

Both examples highlight the importance of thinking through what exactly a model is designed to do — and not using them as a proxy to avoid the hard questions.

“The Stanford thing was another example of, we wanted the algorithm to do A, but we told it to do B. I think many health systems are doing something similar,” Obermeyer said. “You want to give the vaccine first to people who need it the most — how do we measure that?”

The urgency that the pandemic created was a complicating factor. With little information and few proven systems to work with in the beginning, health systems began throwing ideas at the wall to see what worked. One expert questioned whether people might be abdicating some responsibility to these tools.

“Hard decisions are being made at hospitals all the time, especially in this space, but I’m worried about algorithms being the idea of where the responsibility gets shifted,” said Varoon Mathur, a technology fellow at NYU’s AI Now Institute, in a Zoom interview. “Tough decisions are going to be made, I don’t think there are any doubts about that. But what are those tough decisions? We don’t actually name what constraints we’re hitting up against.”

The wild, wild west
There currently is no gold standard for how hospitals should implement machine learning tools, and little regulatory oversight for models designed to support physicians’ decisions, resulting in an environment that Mathur described as the “wild, wild west.”

How these systems were used varied significantly from hospital to hospital.

Early in the pandemic, Cleveland Clinic used a model to predict which patients were most likely to test positive for the virus as tests were limited. Researchers developed it using health record data from more than 11,000 patients in Ohio and Florida, including 818 who tested positive for Covid-19. Later, they created a similar risk calculator to determine which patients were most likely to be hospitalized for Covid-19, which was used to prioritize which patients would be contacted daily as part of an at-home monitoring program.

Initially, anyone who tested positive for Covid-19 could enroll in this program, but as cases began to tick up, “you could see how quickly the nurses and care managers who were running this program were overwhelmed,” said Dr. Lara Jehi, Chief Research Information Officer at Cleveland Clinic. “When you had thousands of patients who tested positive, how could you contact all of them?”

While the tool included dozens of factors, such as a patient’s age, sex, BMI, zip code, and whether they smoked or got their flu shot, it’s also worth noting that demographic information significantly changed the results. For example, a patient’s race “far outweighs” any medical comorbidity when used by the tool to estimate hospitalization risk, according to a paper published in PLOS ONE. Cleveland Clinic recently made the model available to other health systems.
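
One way to see how heavily any single input drives a risk score is to fit a simple model on standardized features and compare coefficient magnitudes. The sketch below does this on synthetic, invented data; it illustrates the technique, not Cleveland Clinic’s actual model or cohort.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in cohort; the column names are invented for illustration.
rng = np.random.default_rng(0)
n = 2000
patients = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "bmi": rng.normal(28, 5, n),
    "is_smoker": rng.integers(0, 2, n),
    "had_flu_shot": rng.integers(0, 2, n),
    "diabetes": rng.integers(0, 2, n),
})
patients["hospitalized"] = (patients["age"] / 90 + rng.normal(0, 0.3, n) > 0.7).astype(int)

X = patients.drop(columns="hospitalized")
y = patients["hospitalized"]

# Standardizing the features puts the fitted coefficients on a common scale,
# so their magnitudes give a rough ranking of which inputs dominate the score.
model = LogisticRegression(max_iter=1000).fit(StandardScaler().fit_transform(X), y)
for name, coef in sorted(zip(X.columns, model.coef_[0]), key=lambda t: -abs(t[1])):
    print(f"{name:>14}: {coef:+.2f}")
```

In a real audit, a demographic feature sitting far above every clinical comorbidity in a ranking like this is exactly the kind of result that demands scrutiny before deployment.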

Others, like Stanford Health Care and 731-bed Santa Clara County Medical Center, started using Epic’s clinical deterioration index before developing their own Covid-specific risk models. Stanford’s risk-scoring tool was built using past data from patients with similar respiratory diseases, such as the flu, pneumonia or acute respiratory distress syndrome. It was designed to predict which patients would need ventilation within two days, and a patient’s risk of dying from the disease at the time of admission.

Stanford tested the model on retrospective data from 159 patients who were hospitalized with Covid-19, and cross-validated it with Salt Lake City-based Intermountain Healthcare, a process that took several months. Although this gave some additional assurance — Salt Lake City and Palo Alto have very different populations, smoking rates and demographics — it still wasn’t representative of some patient groups across the U.S.

“Ideally, what we would want to do is run the model specifically on different populations, like on African Americans or Hispanics and see how it performs to ensure it’s performing the same for different groups,” Tina Hernandez-Boussard, an associate professor of medicine, biomedical data science and surgery at Stanford, said in a February interview. “That’s something we’re actively seeking. Our numbers are still a little low to do that right now.”

Stanford planned to implement the model earlier this year, but ultimately tabled it as Covid-19 cases fell.

‘The target is moving so rapidly’
Although large medical centers were more likely to have implemented automated systems, there were a few notable holdouts. For example, UC San Francisco Health, Duke Health and Dignity Health all said they opted not to use risk-prediction models or other machine learning tools in their pandemic responses.

“It’s pretty wild out there and I’ll be honest with you — the dynamics are changing so rapidly,” said Dr. Erich Huang, chief officer for data quality at Duke Health and director of Duke Forge. “You might have a model that makes sense for the conditions of last month but do they make sense for the conditions of next month?”

That’s especially true as new variants spread across the U.S., and more adults are vaccinated, changing the nature and pace of the disease. But other, less obvious factors might also affect the data. For instance, Huang pointed to big differences in social mobility across the state of North Carolina, and whether people complied with local restrictions. Differing social and demographic factors across communities, such as where people work and whether they have health insurance, can also affect how a model performs.

“There are so many different axes of variability, I’d feel hard pressed to be comfortable using machine learning or AI at this point in time,” he said. “We need to be careful and understand the stakes of what we’re doing, especially in healthcare.”

Leadership at one of the largest public hospitals in the U.S., 600-bed LAC+USC Medical Center in Los Angeles, also steered away from using predictive models, even as it faced an alarming surge in cases over the winter months.

At most, the hospital used alerts to remind physicians to wear protective equipment when a patient has tested positive for Covid-19.

“My impression is that the industry is not anywhere near ready to deploy fully automated stuff just because of the risks involved,” said Dr. Phillip Gruber, LAC+USC’s chief medical information officer. “Our institution and a lot of institutions in our region are still focused on core competencies. We have to be good stewards of taxpayer dollars.”

When the data itself is biased
Developers have to contend with the fact that any model developed in healthcare will be biased, because the data itself is biased; how people access and interact with health systems in the U.S. is fundamentally unequal.

How that information is recorded in electronic health record systems (EHR) can also be a source of bias, NYU’s Mathur said. People don’t always self-report their race or ethnicity in a way that fits neatly within the parameters of an EHR. Not everyone trusts health systems, and many people struggle to even access care in the first place.

“Demographic variables are not going to be sharply nuanced. Even if they are… in my opinion, they’re not clean enough or good enough to be nuanced into a model,” Mathur said.

The information hospitals have had to work with during the pandemic is particularly messy. Differences in testing access and missing demographic data affect how resources are distributed, along with other parts of the pandemic response.

“It’s very striking because everything we know about the pandemic is viewed through the lens of number of cases or number of deaths,” UC Berkeley’s Obermeyer said. “But all of that depends on access to testing.”

At the hospital level, internal data wouldn’t be enough to truly follow whether an algorithm to predict adverse events from Covid-19 was actually working. Developers would have to look at Social Security data on mortality, or whether the patient went to another hospital, to track down what happened.

“What about the people a physician sends home — if they die and don’t come back?” he said.

Researchers at Mount Sinai Health System tested a machine learning tool to predict critical events in Covid-19 patients — such as dialysis, intubation or ICU admission — to ensure it worked across different patient demographics. But they still ran into their own limitations, even though the New York-based hospital system serves a diverse group of patients.

They tested how the model performed across Mount Sinai’s different hospitals. In some cases, when the model wasn’t very robust, it yielded different results, said Benjamin Glicksberg, an assistant professor of genetics and genomic sciences at Mount Sinai and a member of its Hasso Plattner Institute for Digital Health.

They also tested how it worked in different subgroups of patients to ensure it didn’t perform disproportionately better for patients from one demographic.

“If there’s a bias in the data going in, there’s almost certainly going to be a bias in the data coming out of it,” he said in a Zoom interview. “Unfortunately, I think it’s going to be a matter of having more information that can approximate these external factors that may drive these discrepancies. A lot of that is social determinants of health, which are not captured well in the EHR. That’s going to be critical for how we assess model fairness.”

Even after checking whether a model yields fair and accurate results, the work isn’t done. Hospitals must keep validating their models continuously to ensure they’re still working as intended — especially in a situation as fast-moving as a pandemic.
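
Continuous validation can be as simple as recomputing a performance metric over a trailing window of predictions and alerting when it dips. A minimal sketch, assuming logged scores and outcomes with timestamps; the 30-day window and 0.70 floor are arbitrary placeholders a real team would set deliberately:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def rolling_auc_alerts(dates, y_true, y_score, window_days=30, floor=0.70):
    """Recompute AUC over a trailing window ending at each observed date;
    return the dates where performance fell below the agreed floor,
    one crude signal that the model may have drifted."""
    dates = np.asarray(dates, dtype="datetime64[D]")
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    alerts = []
    for end in np.unique(dates):
        window = (dates > end - np.timedelta64(window_days, "D")) & (dates <= end)
        if len(np.unique(y_true[window])) == 2:   # AUC needs both classes
            auc = roc_auc_score(y_true[window], y_score[window])
            if auc < floor:
                alerts.append((str(end), round(auc, 3)))
    return alerts
```

A shifting case mix, new variants or changed treatment protocols can all degrade a model that validated cleanly at launch, which is the argument for monitoring as a standing process rather than a one-time gate.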

A bigger role for regulators
All of this is stirring up a broader discussion about how much of a role regulators should have in how decision-support systems are implemented.

Currently, the FDA does not require most software that provides diagnosis or treatment recommendations to clinicians to be regulated as a medical device. Even software tools that have been cleared by the agency lack critical information on how they perform across different patient demographics. 

None of the models developed by the hospitals MedCity News surveyed had been cleared by the FDA, and most of the external tools those hospitals implemented hadn’t gone through any regulatory review either.

In January, the FDA shared an action plan for regulating AI as a medical device. Although most of the concrete plans were around how to regulate algorithms that adapt over time, the agency also indicated it was thinking about best practices, transparency, and methods to evaluate algorithms for bias and robustness.

More recently, the Federal Trade Commission warned that it could crack down on AI bias, citing research showing that AI could worsen existing healthcare disparities if bias is not addressed.

“My experience suggests that most models are put into practice with very little evidence of their effects on outcomes because they are presumed to work, or at least to be more efficient than other decision-making processes,” Kellie Owens, a researcher for Data & Society, a nonprofit that studies the social implications of technology, wrote in an email. “I think we still need to develop better ways to conduct algorithmic risk assessments in medicine. I’d like to see the FDA take a much larger role in regulating AI and machine learning models before their implementation.”

Developers should also ask themselves whether the communities they’re serving have a say in how the system is built, or whether it is needed in the first place. The majority of hospitals surveyed did not tell patients when a model was used in their care, nor did they involve patients in the development process.

In some cases, the best option might be the simplest one: don’t build.

In the meantime, hospitals are left to sift through existing published data, preprints and vendor promises to decide on the best option. To date, Michigan Medicine’s paper is still the only one that has been published on Epic’s Deterioration Index.

Care teams there used Epic’s score as a support tool for rapid response teams checking in on patients. But the health system was also looking at other options.

“The short game was that we had to go with the score we had,” Singh said. “The longer game was, Epic’s deterioration index is proprietary. That raises questions about what is in it.”

Healthcare AI investment will shift to these 5 areas in the next 2 years: survey

The COVID-19 pandemic has accelerated the pace of artificial intelligence adoption, and healthcare leaders are confident AI can help solve some of today’s toughest challenges, including COVID-19 tracking and vaccines.

The majority of healthcare and life sciences executives (82%) want to see their organizations more aggressively adopt AI technology, according to a new survey from KPMG, an audit, tax and advisory services firm.

A majority (56%) of healthcare and life sciences business leaders report that AI initiatives have delivered more value than expected for their organizations. However, life sciences companies seem to be struggling to select the best AI technologies, according to 73% of executives.

As the U.S. continues to navigate the pandemic, life sciences business leaders are overwhelmingly confident in AI’s ability to monitor the spread of COVID-19 cases (94%), help with vaccine development (90%) and aid vaccine distribution (90%).

KPMG’s AI survey is based on feedback from 950 business or IT decision-makers across seven industries, with 100 respondents each from healthcare and life sciences companies.

Despite the optimism about the potential for AI, executives across industries believe more controls are needed and overwhelmingly believe the government has a role to play in regulating AI technology. The majority of life sciences (86%) and healthcare (84%) executives say the government should be involved in regulating AI technology.

And executives across industries are optimistic about the new administration in Washington, D.C., with the majority believing the Biden administration will do more to help advance the adoption of AI in the enterprise.

“We are seeing very high levels of support this year across all industries for more AI regulation. One reason for this may be that, as the technology advances very quickly, insiders want to avoid AI becoming the ‘Wild Wild West.’ Additionally, a more robust regulatory environment may help facilitate commerce. It can help remove unintended barriers that may be the result of other laws or regulations, or due to lack of maturity of legal and technical standards,” said Rob Dwyer, principal, advisory at KPMG, specializing in technology in government.

Healthcare and pharma companies seem to be more bullish on AI than other industries are.

The survey found half of business leaders in industrial manufacturing, retail and tech say AI is moving faster than it should in their industry. Concerns about the speed of AI adoption are particularly pronounced among small companies (63%), business leaders with high AI knowledge (51%) and Gen Z and millennial business leaders (51%).

“Leaders are experiencing COVID-19 whiplash, with AI adoption skyrocketing as a result of the pandemic. But many say it’s moving too fast. That’s probably because of current debate surrounding the ethics, governance and regulation of AI. Many business leaders do not have a view into what their organizations are doing to control and govern AI and may fear risks are developing,” Traci Gusher, principal of artificial intelligence at KPMG, said in a statement.

Future AI investment

Healthcare organizations are ramping up their investments in AI in response to the COVID-19 pandemic. In a Deloitte survey, nearly 3 in 4 healthcare organizations said they expect to increase their AI funding, with executives citing making processes more efficient as the top outcome they are trying to achieve with AI.

Healthcare executives say current AI investments at their organizations have focused on electronic health record (EHR) management and diagnosis.

To date, the technology has proved its value in reducing errors and improving medical outcomes for patients, according to executives. Around 40% of healthcare executives said AI technology has helped with patient engagement and also to improve clinical quality. About a third of executives said AI has improved administrative efficiency. Only 18% said the technology helped uncover new revenue opportunities.

But AI investments will shift over the next two years to prioritize telemedicine (38%), robotic tasks such as process automation (37%) and delivery of patient care (36%), the survey found. Clinical trials and diagnosis rounded out the top five investment areas.

At life sciences companies, AI is primarily deployed during the drug development process to improve record-keeping and the application process, the survey found. Companies also have leveraged AI to help with clinical trial site selection.

Moving forward, pharmaceutical companies will likely focus their AI investments on discovering new revenue opportunities in the next two years, a pivot from their current strategy focusing on increasing profitability of existing products, according to the survey. About half of life sciences executives say their organizations plan to leverage AI to reduce administrative costs, analyze patient data and accelerate clinical trials.

Industry stakeholders are taking steps to advance the use of AI and machine learning in healthcare.

The Consumer Technology Association (CTA) created a working group two years ago to develop some standardization on definitions and characteristics of healthcare AI. Last year, the CTA working group developed a standard that creates a common language so industry stakeholders can better understand AI technologies. The group also recently developed a new standard to advance trust in AI solutions.

On the regulatory front, the U.S. Food and Drug Administration (FDA) last month released its first AI and machine learning action plan, a multistep approach designed to advance the agency’s management of advanced medical software. The action plan aims to force manufacturers to be more rigorous in their evaluations, according to the FDA.

Bringing bots into the health system

https://mailchi.mp/95e826d2e3bc/the-weekly-gist-august-28-2020?e=d1e747d2d8


This week we hosted a member webinar on an application of artificial intelligence (AI) that’s generating a lot of buzz these days in healthcare—robotic process automation (RPA).

That bit of tech jargon translates to finding repetitive, often error-prone tasks performed by human staff, and implementing “bots” to perform them instead. The benefit? Fewer mistakes, the ability to redeploy talent to less “mindless” work (often with the unexpected benefit of improving employee engagement), and the potential to capture substantial efficiencies. That last feature makes RPA especially attractive in the current environment, in which systems are looking for any assistance in lowering operating expenses. 

Typical processes where RPA can be used to augment human staff include revenue cycle tasks like managing prior authorization, simplifying claims processing, and coordinating patient scheduling. Indeed, the health insurance industry is far ahead of the provider community in implementing these machine-driven approaches to productivity improvement.
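
Stripped of vendor tooling, the core RPA pattern is mundane: poll a work queue, perform the same lookup a person would, and hand anything ambiguous back to a human. Here is a toy sketch of a claims-status bot; every class, status and identifier in it is invented for illustration, since real deployments run on vendor platforms rather than hand-rolled scripts.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    id: str
    status: str = "pending"

class FakePayerPortal:
    """Stand-in for the repetitive status lookup a human would otherwise do."""
    def look_up(self, claim_id: str) -> str:
        return "paid" if claim_id.endswith(("0", "2", "4")) else "needs-review"

def work_queue_pass(claims, portal, escalate):
    """One pass over the queue: close out unambiguous claims automatically
    and route everything else to a human reviewer."""
    for claim in claims:
        result = portal.look_up(claim.id)
        if result in ("paid", "denied"):
            claim.status = result          # routine outcome: the bot handles it
        else:
            escalate(claim)                # judgment calls stay with staff

claims = [Claim(f"clm-{i}") for i in range(6)]
work_queue_pass(claims, FakePayerPortal(), escalate=lambda c: print("to staff:", c.id))
```

The design point that matters is the escalation path: the bot only finalizes unambiguous cases, which is what keeps error-prone judgment calls with people.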

We heard early “lessons learned” from one member system, Fountain Valley, CA-based MemorialCare, which has been working with Columbus, OH-based Olive.ai, a company that bills itself as the only “AI as a service” platform built exclusively for healthcare.

Listening to their story, we were particularly struck by the fact that RPA is far more than “just” another IT project with an established start and finish, but rather an ongoing strategic effort. MemorialCare has been particularly thoughtful about involving senior leaders in finance, operations, and HR in identifying and implementing their RPA strategy, making sure that cross-functional leaders are “joined at the hip” to manage what could prove to be a truly revolutionary technology.

Having identified scores of potential applications for RPA, they’re taking a deliberate approach to rollout for the first dozen or so applications. One critical step: ensuring that processes are “optimized” (via lean or other process improvement approaches) before they are “automated”. MemorialCare views RPA implementation as an opportunity to catalyze the organization for change—“It’s not often that one solution can help push the entire system forward,” in the words of one senior system executive.

We’ll be keeping an eye on this burgeoning space for interesting applications, as health systems identify new ways to deploy “the bots” across the enterprise.


Geisinger taps Siemens as strategic partner to provide diagnostic imaging, AI applications

https://www.fiercehealthcare.com/tech/geisinger-taps-siemens-as-strategic-partner-to-provide-diagnostic-imaging-ai-applications


Geisinger Health System has inked a 10-year technology agreement with Siemens Healthineers to access diagnostic imaging equipment and artificial intelligence applications.

The Danville, Pennsylvania-based health system said the partnership will advance and support elements of its strategic priorities related to continually improving care for its patients, communities and the region.

The medical technology company will provide Geisinger access to its latest digital health innovations, diagnostic imaging equipment and on-site staff to support improvements. Education and workflow resources will also be available, giving Geisinger staff the ability to make decisions efficiently and continually optimize workflows, the companies said.

Siemens provides AI-based radiology software that analyzes chest CT scans, brain MRIs and other images as well as AI-based clinical decision support tools and services to help advance digitization.

Financial terms of the deal were not disclosed.

“By expanding our relationship with Geisinger, this becomes one of the largest value partnership relationships in North America and will allow us to work together to improve the patient experience for residents of Pennsylvania and the region,” said David Pacitti, president and head of the Americas for Siemens Healthineers, in a statement.

“Making better health easier by bringing world-class care close to home is central to everything we do at Geisinger,” said Matthew Walsh, chief operating officer at Geisinger. “This partnership will allow us to continue to equip our facilities with the most advanced diagnostic imaging technology in the market to care for our patients.”

Michael Haynes, associate vice president of operations, Geisinger Radiology, said the collaboration with Siemens will enable the health system to identify and respond to health concerns more quickly.

Geisinger operates 13 hospitals across Pennsylvania and New Jersey as well as a 600,000-member health plan, two research centers and the Geisinger Commonwealth School of Medicine. 

Partnerships between health systems and tech companies are becoming fairly common as the healthcare industry pushes forward to use data analytics, AI and machine learning to improve clinical diagnosis and better predict disease.

Mayo Clinic announced a high-profile, 10-year strategic partnership with Google in September to use advanced cloud computing, data analytics, machine learning and AI to advance the diagnosis and treatment of disease.

Providence St. Joseph Health inked a multiyear strategic alliance with Microsoft to modernize its health IT infrastructure and leverage cloud and AI technologies.


An optimistic view from health system workforce leaders

https://mailchi.mp/9f24c0f1da9a/the-weekly-gist-june-5-2020?e=d1e747d2d8


Continuing our series of Gist member convenings to discuss the “Brave New World” that awaits in the post-pandemic era, we brought together a group of senior human resources and nursing executives this week for a Zoom roundtable.

Several themes emerged from the discussion. First, there was general consensus that the COVID crisis exposed a workforce that had become over-specialized and inflexible. Said one chief nursing officer, “Our workforce is much more brittle than we thought.” A key lesson learned is the need for increased cross-training—especially for nurses, and especially in critical care. Systems should work now to increase the supply of nurses comfortable in an ICU environment to enable hospitals to flex staff across settings and roles to deal with future waves of the virus.

Not surprisingly, layoffs were top-of-mind for many. Executives were of one mind on the need to safeguard clinical staff as much as possible, and many systems are now considering deep cuts to management and administrative ranks: “It’s easier to stand in front of your clinical staff and be able to say you’ve stripped millions from administration before turning to clinical cuts.”

There was broad consensus on the potential for artificial intelligence and robotic process automation to enable greater reliability and productivity at lower cost in areas such as billing, coding, and even some clinical functions — and that the pandemic will accelerate plans to implement these solutions.

On a more optimistic note, one executive shared that “relationships between clinicians and administrators have never been stronger. The pandemic has forced us to have difficult and constructive conversations we would have never had the courage to have before.”

Another noted the pandemic has spotlighted new leadership talent who might otherwise have been overlooked, and plans are now in place to formally recognize and retain newly crisis-tested talent for the work of restructuring the system.

On the whole, the discussion was far more upbeat than we had expected — as difficult as the crisis has been for many teams, the opportunity to rethink old ways of doing business seems to have created renewed enthusiasm even in the face of daunting financial and operational challenges ahead.


Google Health, the company’s newest product area, has ballooned to more than 500 employees

https://www.cnbc.com/2020/02/11/google-health-has-more-than-500-employees.html?utm_source=Sailthru&utm_medium=email&utm_campaign=Issue:%202020-02-12%20Healthcare%20Dive%20%5Bissue:25642%5D&utm_term=Healthcare%20Dive


KEY POINTS
  • More than 500 people now work at Google Health, mostly out of the Palo Alto offices formerly occupied by smart home group Nest.
  • It’s led by former Geisinger CEO David Feinberg, who reports to Google AI chief Jeff Dean, and key players include Google veteran Paul Muret, who runs product, and Chief Health Officer Karen DeSalvo.
  • Former Nest CTO Yoky Matsuoka, who oversaw a small team under Feinberg looking at home-health monitoring, has left the company.

Google’s health care projects, which were once scattered across the company, are now starting to come together under one team working out of the Palo Alto offices formerly occupied by Nest, Google’s smart home group, according to several current and former employees.

Google Health, which represents the first major new product area at Google since hardware, began to organize in 2018, and now numbers more than 500 people working under David Feinberg, who joined the company in early 2019. Most of these people were reassigned from other groups within Google, although the company has been hiring and currently has over a dozen open roles.

Google and its parent company, Alphabet, are counting on new businesses as growth slows in its core digital advertising business. Alphabet CEO Sundar Pichai, who was recently promoted from Google’s CEO to run the whole conglomerate, has said health care offers the biggest potential for Alphabet to use artificial intelligence to improve outcomes over the next five to 10 years.

Google’s health efforts date back more than a decade to 2006, when it attempted to create a repository of health records and data. Back then, it aimed to connect doctors and hospitals and help consumers aggregate their medical data. However, those early attempts failed in the market, and the company terminated this first “Google Health” product in 2012. Google then spent several years developing artificial intelligence to analyze imaging scans and other patient documents and identify diseases, with the intent of predicting outcomes and reducing costs. It also experimented with other ideas, like adding an option for people searching for medical information to talk to a doctor.

The new Google Health unit is exploring some new ideas, such as helping doctors search medical records and improving health-related Google search results for consumers, but primarily consolidates existing teams that have been working in health for a while.

Google’s not the only tech giant working on new efforts centered around the health industry. Amazon, Apple, Facebook and Microsoft have all ramped up efforts in recent years, and have been building out their own teams.

Who’s important at Google Health?

In just over a year under Feinberg’s leadership, Google Health has grown to more than 500 employees, according to the company’s internal directory and people familiar with the company. These people asked for anonymity as they’re not authorized to comment publicly about the company’s plans.

Many of these Google Health employees have come over from other groups, including Medical Brain, which involves using voice recognition software to help doctors take notes; and DeepMind’s health division, which was folded into Google Health in November 2018 and has worked with the U.K.’s National Health Service to alert doctors when patients are experiencing acute kidney injury.

The business model for Google Health is still a work in progress, but its leadership and organizational structure provided some clues as to the company’s areas of interest.

Feinberg is high up in Google’s internal org chart and has the ear of the top Google execs including Pichai. He reports to Jeff Dean, the company’s AI lead and one of its earliest employees.

Dean co-founded Google Brain in 2010, which catapulted the company’s deep learning technology into medical analysis. Some of the first health-related projects out of Google Brain included a new computer-based model to screen for signs of diabetic retinopathy in eye scans, and an algorithm to detect breast cancer in X-rays. In 2019, Dean took the helm of the company’s AI unit, reporting to Pichai.

Feinberg stood out in interviews for the job because he helped motivate Geisinger to start thinking more deeply about preventative health and not just treating the sick, according to people familiar with the hiring process. During his tenure at Geisinger, the hospital experimented with giving away healthy food to people with chronic conditions, including diabetes. It also pushed for more patients to have genetic tests to screen for diseases before they became too advanced to treat.

Feinberg works closely with Google Cloud CEO Thomas Kurian, who has named healthcare as one of the biggest industry verticals for the business as it attempts to catch up with cloud front-runners Amazon and Microsoft.

Another key player at Google Health is Paul Muret, who had been an internal advocate for forming Google Health before Feinberg was hired, say two people who worked there. Muret is a veteran of the company who worked as a vice president of engineering for analytics, followed by video and apps. He’s now listed on LinkedIn as a product leader for “AI and Health,” and people in the organization say he’s in charge on the product side.

The company is now staffing up its team with health industry execs to show that it’s not just a group of Silicon Valley techies tinkering with artificial intelligence.

For instance, Feinberg helped recruit Karen DeSalvo as Google’s chief health officer. DeSalvo, who was the health commissioner of New Orleans, played a major role in rebuilding the city’s health systems in the wake of Hurricane Katrina. Like Feinberg, she’s been a big advocate of the idea that there’s more to health than just health care. She’s pushed for hospitals to consider whether patients have access to transportation services, healthy food and a support system before sending them home.

Google Health has also absorbed a small group from Nest that was looking into home-health monitoring, which would be particularly beneficial for seniors who are hoping to live independently. That group was led by former Nest CTO Yoky Matsuoka, sources say, but she recently left Alphabet, and has reportedly been working as a fellow at Panasonic. Matsuoka co-founded Google’s R&D arm, now called X, in 2011, and worked at Apple in between her stints at Google.

She’s not the only high-profile departure. A top business development leader, Virginia McFerran, who came from insurance giant UnitedHealth Group, has also left the company. To replace her, the team brought over Matt Klainer, a vice president from the consumer communications products group as its business development lead for Google Health.

Some health-related ‘Other Bets’ will remain separate

Google’s parent company, Alphabet, has a number of health-related “Other Bet” businesses that will remain independent from Google Health, including Verily, the life sciences group, and Calico, which is focused on aging.

Recently promoted Alphabet CEO Sundar Pichai stressed that the setup was intentional during the company’s most recent earnings call with investors, implying that Alphabet was not planning to consolidate all of its health efforts under one leader anytime soon.

“Our thesis has always been to apply these deep computer science capabilities across Google and our Other Bets to grow and develop into new areas,” noted Pichai, when describing the company’s work in health.

“The Alphabet structure allows us to have a portfolio of different businesses with different time horizons, without trying to stretch a single management team across different areas,” he continued.


The Presidential Campaign, Policy Issues and the Public

https://news.gallup.com/opinion/polling-matters/269717/presidential-campaign-policy-issues-public.aspx


The U.S. presidential campaign is ultimately a connection between candidates and the people of the country, but the development of the candidates’ policies and positions is largely asymmetric. Candidates develop and announce “plans” and policy positions that reflect their (the candidates’) philosophical underpinnings and (presumably) deep thinking. The people then get to react and make their views known through polling and, ultimately, through voting.

Candidates by definition assume they have unique wisdom and are unusually qualified to determine what the government should do if they are elected (otherwise, they wouldn’t be running). That may be so, but the people of the country also have collective wisdom and on-the-ground qualifications to figure out what government should be doing. That makes it useful to focus on what the people are telling us, rather than focusing exclusively on the candidates’ pronouncements. I’m biased, because I spend most of my time studying the public’s opinions rather than what the candidates are saying. But hopefully most of us would agree that it is worthwhile to get the public’s views of what they want from their government squarely into the mix of our election-year discourse.

So here are four areas where my review of public opinion indicates the American public has clear direction for its elected officials.

1. Fixing Government Itself.

I’ve written about this more than any other topic this year. The data are clear that the American people are in general disgusted (even more than usual) with the way their government is working and perceive that government and elected leaders constitute the most important problem facing the nation today.

The people themselves may be faulted here because they are the ones who give cable news channels high ratings for hyperpartisan programming, keep ideological radio talk shows alive, click on emotionally charged partisan blogs, and vote in primaries for hyperpartisan candidates. But regardless of the people’s own complicity in the problem, there isn’t much doubt that the government’s legitimacy in the eyes of the people is now at a critically negative stage.

“Fixing government” is a big, complex proposition, of course, but we do have some direction from the people. While Americans may agree that debate and differences are part of our political system, there has historically been widespread agreement on the need for elected representatives to do more compromising. Additionally, Americans favor term limits, restricting the amount of money candidates can spend in campaigns, and shifting to a 100% federally funded campaign system. (Pew Research polling shows that most Americans say big donors have inordinate influence based on their contributions, and a January Gallup poll found that only 20% of Americans were satisfied with the nation’s campaign finance laws.) Americans say a third major party is needed to help remedy the inadequate job that the two major parties are doing of representing the people of the country. Available polling shows that Americans favor the Supreme Court’s putting limits on partisan gerrymandering.

Additionally, a majority of Americans favor abolishing the Electoral College by amending the Constitution to dictate that the candidate who gets the most popular votes be declared the winner of the presidential election (even though Americans who identify as Republicans have become less interested in this proposition in recent years because the Republican candidate has lost the popular vote but has won in the Electoral College in two of the past five elections).


2. Fix the Backbone of the Nation by Initiating a Massive Government Infrastructure Program.

I have written about this at some length. The public wants its government to initiate massive programs to fix the nation’s infrastructure. Leaders of both parties agree, but nothing gets done. The failure of the Congress and the president to agree on infrastructure legislation is a major indictment of the efficacy of our current system of representative government.


3. Pass More Legislation Relating Directly to Jobs.

Jobs are the key to economic wellbeing for most pre-retirement-age Americans. Unemployment is now at or near record lows, to be sure, but there are changes afoot. Most Americans say artificial intelligence will eliminate more jobs than it creates. The sustainability of jobs with reasonably high pay in an era when unionized jobs are declining and contract “gig” jobs are increasing is problematic. Our Gallup data over the years show clear majority approval for a number of ideas focused on jobs: providing tax incentives for companies to teach workers to acquire new skills; initiating new federal programs to increase U.S. manufacturing jobs; creating new tax incentives for small businesses and entrepreneurs who start new businesses; providing $5.5 billion in federal monies for job training programs that would create 1 million jobs for disadvantaged young Americans; and providing tax credits and incentives for companies that hire the long-term unemployed.

My read of the data is that the public generally will support almost any government effort to increase the availability of high-paying, permanent jobs.


4. Pass Legislation Dealing With All Aspects of Immigration.

Americans rate immigration as one of the top problems facing the nation today. The majority of Americans favor their elected representatives taking action that deals with all aspects of the situation — the regulation of who gets to come into the country in the first place and the issue of dealing with individuals who are already in the country illegally. As I summarized in a review of the data earlier this year: “Americans overwhelmingly favor protecting the border, although with skepticism about the need for new border walls. Americans also overwhelmingly favor approaches for allowing undocumented immigrants already living in the U.S. to stay here.”

Recent surveys by Pew Research also reinforce the view that Americans have multiple goals for their elected representatives when it comes to immigration: border security, dealing with immigrants already in the country, and taking in refugees affected by war and violence.


More Direction From the People

What else do the people want their elected representatives to do? The answer can be extremely involved (and complex), but there are several additional areas I can highlight where the data show clear majority support for government policy actions.


Americans See Healthcare and Education as Important but Don’t Have a Clear Mandate

There are two areas of life to which the public attaches high importance, but about which there is no clear agreement on what the government should be doing. One is healthcare, an issue that consistently appears near the top of the list of most important problems facing the nation, and obviously an issue of great concern to presidential candidates. But, as I recently summarized, “Healthcare is clearly a complex and often mysterious part of most Americans’ lives, and public opinion on the issue reflects this underlying messiness and complexity. Americans have mixed views about almost all aspects of the healthcare system and clearly have not yet come to a firm collective judgment on suggested reforms.”

Education is another high priority for Americans, but one where the federal government’s role in the eyes of the public isn’t totally clear. Both the American people and school superintendents agree on the critical importance of teachers, so I presume the public would welcome efforts by the federal government to make the teaching profession more attractive and more rewarding. Americans also most likely recognize that education is a key to the future of the job market in a time of growing transition from manual labor to knowledge work. But the failure of the federal government’s massive effort to get involved in education with the No Child Left Behind legislation underscores the complexities of exactly what the federal government should or should not be doing in education, historically a locally controlled part of our American society.