https://www.linkedin.com/pulse/lifesaving-potential-openais-gpt-4o-update-robert-pearl-m-d–ngrmc/

Generative AI tools have made remarkable strides in medicine since the launch of ChatGPT in late 2022. Research has shown that AI, with expert clinician oversight, can significantly enhance diagnostic accuracy, treatment recommendations, and patient monitoring and analysis.
And yet, despite its impressive capabilities and buzz, generative AI is still in the early stages of adoption—both in U.S. healthcare and society.
While almost everyone has heard of genAI, less than a quarter of Americans use it regularly in their personal or professional lives. OpenAI’s newest update, GPT-4o, aims to change that.
In demos released during its spring update, OpenAI showed users engaged in natural, human-like conversations with GPT-4o. The AI interacted with people on their smartphones across video, audio and text, offering real-time spoken responses that sounded eerily human.
In these demos, the AI’s instant answers and friendly voice closely mimic the pace and inflection of normal dialogue. Not coincidentally, GPT-4o’s voice sounded remarkably like Scarlett Johansson’s AI character in the movie Her (a decision OpenAI later walked back “out of respect”).
Regardless of the voice coming out of it, GPT-4o is at once awe-inspiring and unsettling. It also represents a significant departure from tech-industry norms. Most tech companies have long avoided creating AI “companions” because of ethical concerns, fearing people could form addictions that exacerbate isolation and loneliness.
What Will GPT-4o’s Rule-Breaking Mean For Medicine?
Critics point out that OpenAI and its peers have yet to resolve a host of major “trust” issues, including accuracy, privacy, security, bias and misinformation. These concerns will, of course, need to be addressed.
But by creating an AI experience that feels more like talking to a friend, or potentially a doctor, OpenAI has already leapt the tallest hurdle to mass acceptance and adoption. The company understands that humanizing GPT-4o—making it easier and more enjoyable to use—is essential for attracting a wide array of users, including the “late majority” and “laggards” described in Geoffrey Moore’s seminal 1991 book Crossing the Chasm.
Today, 70% of genAI’s non-users are Gen X (ages 44-59) and Baby Boomers (60-78). These generations, which comprise 136 million people, strongly prefer voice and video technologies to typing or touchscreens, and greatly prefer “conversational” AI apps to text-only ones.
They also make up the overwhelming majority of Americans with chronic diseases like diabetes, heart failure and cancer.
GenAI: From Mass Adoption To Mass Empowerment
Once consumers in their 50s, 60s and 70s become comfortable using GPT-4o for everyday tasks, they will then start to rely on it for medical inquiries, too. In a healthcare context, using GPT-4o will closely resemble a video visit or a phone call with a medical professional—two modalities that satisfy the majority of older patients. In fact, 93% of adults over age 70 say they value having telehealth as an option.
With broad adoption, GPT-4o (which will be embedded in future generations of ChatGPT) will empower the sickest Americans to take greater control of their own health, potentially preventing hundreds of thousands of premature deaths each year from the complications of chronic disease: heart attacks, strokes, cancer and kidney failure. According to the Centers for Disease Control and Prevention, effective management of chronic illness would reduce these complications by 30% to 50%, with a similar reduction in mortality.
Generative AI technology contains both the knowledge and ability to help accomplish this:
- Knowledge. ChatGPT houses an extensive corpus of scientific literature, including a diverse set of clinical studies, guidelines from professional medical organizations and research published in top-tier medical journals. In the future, it will be updated with real-time data from medical conferences, health records and up-to-the-minute research, ensuring the AI’s knowledge base remains both comprehensive and current.
- Ability. To assist overburdened clinicians, genAI can provide patients with round-the-clock monitoring, insights and advice—empowering them to better diagnose and manage their own health problems. Future generations of these tools will connect with monitoring devices, informing patients about their health status and suggesting medication adjustments or lifestyle changes in clear and friendly terms. These tools will also remind people about preventive screenings and even facilitate testing appointments and transportation. These proactive approaches can reduce complications and improve health outcomes for the 130 million Americans living with chronic diseases.
Combatting Chronic Disease With GPT-4o
To dive deeper into genAI’s difference-making potential, let’s look at two major gaps in chronic disease management: diabetes and hypertension.
Diabetes is the leading cause of kidney failure, a major contributor to heart attacks and responsible for 80% of lower limb amputations. Effective management is possible for nearly all patients and would prevent many of these complications. Yet diabetes is well controlled in only 30% of cases across the United States.
Similarly, effective control of high blood pressure—the leading cause of strokes and a major contributor to kidney failure and heart attacks—is achieved only 55% to 60% of the time. Although some health systems achieve control levels above 90%, the best-available tools and approaches are inconsistently deployed throughout medical practices.
Medical monitoring devices plus AI could play a crucial role in managing hypertension. Imagine a scenario in which a doctor prescribes medication for hypertension and sends the patient home with a wearable device to monitor progress. After a month, the patient has 100 readings: 90 normal and 10 elevated. The patient is unsure whether the 90 normal readings indicate all is well or the 10 elevated ones signal a major problem. The doctor doesn’t have time to review all 100 readings and prefers not to clutter the electronic health record with the data. Instead of the patient waiting four months until the next visit to find out, a generative AI tool could quickly analyze the readings (following the doctor’s instructions) and advise whether a medication adjustment is needed or the current regimen should continue.
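To make the scenario concrete, here is a minimal sketch of the kind of analysis such a tool could run, written in Python. The blood-pressure thresholds, the 15% follow-up cutoff and the Reading structure are illustrative assumptions for this example only, not the workings of any real product and not clinical guidance.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Reading:
    """One home blood-pressure measurement (mmHg)."""
    systolic: int
    diastolic: int


def summarize(readings: List[Reading],
              systolic_limit: int = 140,
              diastolic_limit: int = 90,
              review_cutoff: float = 0.15) -> str:
    """Count elevated readings and say whether they warrant clinician review.

    The limits and cutoff stand in for whatever instructions the
    prescribing doctor would actually provide.
    """
    elevated = [r for r in readings
                if r.systolic >= systolic_limit or r.diastolic >= diastolic_limit]
    share = len(elevated) / len(readings)
    if share >= review_cutoff:
        return (f"{len(elevated)} of {len(readings)} readings elevated "
                f"({share:.0%}): flag for clinician review.")
    return (f"{len(elevated)} of {len(readings)} readings elevated "
            f"({share:.0%}): continue the current plan and keep monitoring.")


if __name__ == "__main__":
    # 90 normal and 10 elevated readings, mirroring the scenario above.
    month = [Reading(124, 78)] * 90 + [Reading(152, 96)] * 10
    print(summarize(month))
```

Run as written, this reports that 10 of 100 readings (10%) were elevated and, under the assumed cutoff, suggests staying the course; in practice, a clinician would set the thresholds and decision rules, and anything borderline would go back to the doctor.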
Today’s generative AI tools aren’t ready to transform medical monitoring or care delivery, but their time is coming. With the technology doubling in power each year, these tools will be 32 times more capable in five years (five successive doublings).
Overcoming Barriers To Mass Adoption
Concerns about AI privacy, security and misinformation need to be solved before the majority of Americans will buy in to an AI-empowered future. Progress is being made on those fronts. For example, OpenAI reported that GPT-4 was 82% less likely than GPT-3.5 to respond to requests for disallowed content and 40% more likely to produce factual responses, alongside a larger context window and better safety mechanisms.
In addition, clinicians worry about potential income loss if AI leads to healthier patients and reduced demand for medical services. The best solution is to shift from the current fee-for-service reimbursement model (which rewards the volume of medical services) to a value-based, capitated model. This system rewards doctors for preventing chronic diseases and avoiding their most serious complications, rather than simply treating life-threatening medical problems when they arise.
By adopting a pay-for-value approach, medical professionals will embrace genAI as a tool to help prevent and manage diseases (rather than seeing it as a threat to their livelihoods).
The release of GPT-4o shattered the industry norm against creating human-like AI, introducing ethical risks that must be carefully managed. However, the potential for genAI to save thousands of lives each year makes this risk worth taking.

CAMBRIDGE – Aristotle was right. Humans have never been atomized individuals, but rather social beings whose every decision affects other people. And now the COVID-19 pandemic is driving home this fundamental point: each of us is morally responsible for the infection risks we pose to others through our own behavior.
In fact, this pandemic is just one of many collective-action problems facing humankind, including climate change, catastrophic biodiversity loss, antimicrobial resistance, nuclear tensions fueled by escalating geopolitical uncertainty, and even potential threats such as a collision with an asteroid.
As the pandemic has demonstrated, however, it is not these existential dangers, but rather everyday economic activities, that reveal the collective, connected character of modern life beneath the individualist façade of rights and contracts.
Those of us in white-collar jobs who are able to work from home and swap sourdough tips are more dependent than we perhaps realized on previously invisible essential workers, such as hospital cleaners and medics, supermarket staff, parcel couriers, and telecoms technicians who maintain our connectivity.
Similarly, manufacturers of new essentials such as face masks and chemical reagents depend on imports from the other side of the world. And many people who are ill, self-isolating, or suddenly unemployed depend on the kindness of neighbors, friends, and strangers to get by.
The sudden stop to economic activity underscores a truth about the modern, interconnected economy: what affects some parts substantially affects the whole. This web of linkages is therefore a vulnerability when disrupted. But it is also a strength, because it shows once again how the division of labor makes everyone better off, exactly as Adam Smith pointed out over two centuries ago.
Today’s transformative digital technologies are dramatically increasing such social spillovers, and not only because they underpin sophisticated logistics networks and just-in-time supply chains. The very nature of the digital economy means that each of our individual choices will affect many other people.
Consider the question of data, which has become even more salient today because of the policy debate about whether digital contact-tracing apps can help the economy to emerge from lockdown faster.
This approach will be effective only if a high enough proportion of the population uses the same app and shares the data it gathers. And, as the Ada Lovelace Institute points out in a thoughtful report, that will depend on whether people regard the app as trustworthy and are sure that using it will help them. No app will be effective if people are unwilling to provide “their” data to governments rolling out the system. If I decide to withhold information about my movements and contacts, that choice adversely affects everyone.
Yet, while much information certainly should remain private, data about individuals is only rarely “personal,” in the sense that it is only about them. Indeed, very little data with useful information content concerns a single individual; it is the context – whether population data, location, or the activities of others – that gives it value.
Most commentators recognize that privacy and trust must be balanced with the need to fill the huge gaps in our knowledge about COVID-19. But the balance is tipping toward the latter. In the current circumstances, the collective goal outweighs individual preferences.
But the current emergency is only an acute symptom of increasing interdependence. Underlying it is the steady shift from an economy in which the classical assumptions of diminishing or constant returns to scale hold true to one in which there are increasing returns to scale almost everywhere.
In the conventional framework, adding a unit of input (capital or labor) produces a smaller or, at best, the same increment to output. For an economy based on agriculture and manufacturing, this was a reasonable assumption.
But much of today’s economy is characterized by increasing returns, with bigger firms doing ever better. The network effects that drive the growth of digital platforms are one example of this. And because most sectors of the economy have high upfront costs, bigger producers face lower unit costs.
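To see why high upfront costs favor scale, consider a stylized illustration (a standard textbook cost function, with F, c and q as purely illustrative symbols): suppose a producer pays a fixed upfront cost F and a constant marginal cost c per unit. Its average cost at output level q is

$$
AC(q) = \frac{F + c\,q}{q} = \frac{F}{q} + c,
$$

which keeps falling toward c as q grows. The bigger the producer, the lower its unit cost, the opposite of the diminishing-returns world described above, where average cost eventually stops falling or begins to rise with scale.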
One important source of increasing returns is the extensive experience-based know-how needed in high-value activities such as software design, architecture, and advanced manufacturing. Such returns not only favor incumbents, but also mean that choices by individual producers and consumers have spillover effects on others.
The pervasiveness of increasing returns to scale, and spillovers more generally, has been surprisingly slow to influence policy choices, even though economists have been focusing on the phenomenon for many years now. The COVID-19 pandemic may make it harder to ignore.
Just as a spider’s web crumples when a few strands are broken, so the pandemic has highlighted the risks arising from our economic interdependence. And now California and Georgia, Germany and Italy, and China and the United States need each other to recover and rebuild. No one should waste time yearning for an unsustainable fantasy.