Health systems are rapidly expanding their AI footprints, but leaders at some of the largest organizations told Becker’s that portfolio size matters less than what holds up under evaluation, scales across the enterprise and delivers provable clinical and operational value.

“We don’t view the number of AI applications as a success metric,” said Eric Goodwin, vice president and CIO of King of Prussia, Pa.-based Universal Health Services. “Having too many disconnected initiatives can create fragmentation, increase risk and dilute impact.”

Becker’s reached out to the 25 largest health systems by revenue to ask about the size and scope of their AI portfolios. Here are five that responded:

Advocate Health

Charlotte, N.C.-based Advocate Health has more than 100 AI use cases deployed at scale, but quantity isn’t a primary measure of success.

“We don’t manage to a number,” said Andy Crowder, senior vice president and chief digital and AI officer at Advocate Health. “There is no universal definition of what constitutes an AI ‘use case,’ which makes apples-to-apples comparisons across health systems inherently unreliable.”

The 69-hospital system uses a staged process from pilot validation through limited deployment before scaling, with clinical and operational oversight at each step.

“We manage to value,” Mr. Crowder said. “Every AI initiative must align to our strategy and demonstrate measurable impact — whether that’s reduced clinician burden, improved patient outcomes, faster throughput or financial return.”

One of the system’s most widely adopted applications is ambient clinical documentation. Advocate Health was an early adopter of the technology and now has more than 1,500 physicians and advanced practice providers using it. The tool has reduced after-hours charting for nearly half of users and cut documentation time by about 10%, contributing to improved clinician satisfaction and recognition from the American Medical Association.

The system has also deployed AI-assisted diagnostic imaging through a partnership with Aidoc, using 15 FDA-cleared algorithms to prioritize critical findings such as pulmonary embolisms. Advocate estimates that nearly 63,000 patients benefit annually from faster identification and diagnosis.

Not every application moves forward. Some early pilots performed well technically but failed to gain traction because they added friction to clinical workflows.

“A tool can be technically accurate and still fail if it doesn’t fit naturally into how clinicians work,” Mr. Crowder said. “We track performance, adoption and outcomes over time, and we’re willing to pull the plug when something isn’t delivering.”

UPMC

At Pittsburgh-based UPMC, over 30 AI applications are live across the system, with additional tools in active pilot. The focus is “definitely the quality, without a doubt,” said Chris Carmody, senior vice president and chief technology officer.

The 40-plus hospital system maintains a mix of deployed applications and pilots, with roughly one-third still being evaluated in targeted workflows before broader rollout. Many tools are narrowly focused, designed to solve specific clinical or operational challenges rather than serve as broad, enterprisewide platforms.

Each application undergoes security, architectural and governance review before deployment, with oversight from a centralized AI governance team that meets regularly to evaluate new use cases and monitor existing ones.

“Most of the time, the models work,” Mr. Carmody said. “It’s how you adopt it and integrate it into a workflow that determines whether it’s effective.”

Embedded tools within Epic, including generative AI features that draft responses to patient messages, have helped reduce administrative burden for clinicians.

One of the system’s most widely adopted tools is an AI scribe, now used by more than 2,000 clinicians. The application has significantly streamlined documentation workflows and improved the clinician experience.

“The feedback we’ve gotten is that clinicians don’t know how they did this work before,” Mr. Carmody said.

UPMC continues to evaluate dozens of additional use cases, but leaders are deliberate about advancing only those that demonstrate clear return on investment.

“We want to be quick and nimble and willing to change direction,” Mr. Carmody said. “We don’t want something lingering if it’s not solving our clinical or operational problems.”

Mayo Clinic

Rochester, Minn.-based Mayo Clinic has 466 AI algorithms deployed or in development across its organization.

“We are not focused on the number of AI solutions,” said Micky Tripathi, PhD, chief AI implementation officer at Mayo Clinic. “We believe that the greater risk to our patients is if we do not move fast enough in getting more AI solutions into the hands of our clinicians.”

Each clinical AI solution at Mayo Clinic must clear a formal review before moving to enterprise scale. Dr. Tripathi leads a department that evaluates each tool across clinical performance and adoption, patient safety, privacy, security and alignment with organizational goals.

The health system uses six defined stages to track each application: discovery and development, proof of concept, pilot, enterprise implementation, production and maintenance, and obsolescence or retirement.

Ambient documentation has reached wide adoption across the organization. A tool called RecordTime employs AI to catalog and index the high volume of outside records Mayo Clinic receives from other providers, reducing administrative burden for patients and clinicians. The Nurse Virtual Assistant, used by most Mayo Clinic nurses, automates information sharing during shift changes.

“Some successes are scaled broadly, and some are more applicable to specific areas, but both are important for patient care,” Dr. Tripathi said.

Mayo Clinic has also retired tools that didn’t hold up under scrutiny. The health system pulled an internally developed application that summarized medical record information from multiple internal sources after evaluation during scaling showed it didn’t meet the organization’s standards.

Universal Health Services

King of Prussia, Pa.-based Universal Health Services has more than 50 AI applications live across its enterprise, spanning clinical care, revenue cycle and business operations. If counted by individual deployments across facilities or including pilots, the number would reach into the thousands.

UHS organizes its AI portfolio across four stages: proof of concept, pilot, limited deployment and enterprisewide implementation.

Each pilot is launched with defined success criteria, executive sponsorship and front-line engagement, with a focus on clinician efficiency, patient access, revenue cycle performance and quality outcomes.

“If a solution demonstrates repeatable results, integrates cleanly into workflows and earns trust from clinicians, then we move it toward broader deployment,” Mr. Goodwin said.

The system, which has 29 hospitals and 346 inpatient behavioral health facilities, emphasizes centralized oversight to maintain visibility across its AI portfolio, helping avoid duplication and manage the pace of experimentation.

UHS has seen some of its strongest results in high-volume operational workflows. AI-driven revenue cycle automation has reduced manual work, accelerated claims processing and improved collections, with some initiatives reducing expenses by more than $1 million and driving revenue increases in the tens of millions.

In clinical settings, documentation tools such as ambient listening have helped reduce administrative burden and free up time for patient care.

“We focus on depth over breadth,” Mr. Goodwin said. “We want enough experimentation to innovate, but enough discipline to execute.”

Rather than retiring large-scale deployments, UHS takes an incremental approach to complex use cases, starting small and expanding over time as results are validated.

NYU Langone Health

New York City-based NYU Langone Health has 120 AI models running across its organization, with 153 more in development.

“Quantity is a byproduct of value, meaning that AI application quantity is a proxy for the value that AI is bringing to the organization,” said Vincent Major, PhD, research associate professor of population health at NYU Langone Health.

Teams must identify how a tool will improve safety, enhance quality, elevate the patient experience or drive operational efficiency, with specific key performance indicators established at the outset.

The portfolio spans clinical, operational and research use cases. Epic models and ambient listening tools fall on the vendor side, while other applications are built in-house to address needs specific to NYU Langone Health that off-the-shelf products do not meet.

The path from pilot to scale depends on the tool’s risk profile. Lower-risk applications may move directly to broader rollout, while those requiring significant user engagement undergo structured pilots in coordination with physician and nursing informatics teams before expanding.

NYU Langone Health is leading trials to evaluate whether AI-generated discharge summaries improve patient comprehension and follow-through on aftercare instructions.

On the operational side, the health system handles about 12 million MyChart messages each month. A centralized nursing team reviews incoming messages alongside an AI prioritization model that flags those needing immediate attention, helping patients with urgent needs access care more quickly while ensuring all messages are addressed within an appropriate timeframe.

Not every AI tool reaches scale. NYU Langone Health’s governance process is designed to identify underperforming applications before wide deployment, and the system has retired models that didn’t validate as expected. One example is a COVID-19 risk stratification tool used by nursing teams during shift handoffs. The model was clinically well-regarded, but as COVID-19 shifted from a primary driver of hospitalization to a comorbidity, it became less relevant and was retired.

“We also have retired homegrown models in favor of vendor solutions for efficiency,” said Yindalon Aphinyanaphongs, MD, PhD, director of applied AI technologies.

The post ‘Depth over breadth’: Health systems eye quality of AI applications, not number appeared first on Becker's Hospital Review | Healthcare News & Analysis.