At Palo Alto, Calif.-based Stanford Health Care, we are at an inflection point in how we think about applications, workflows, and the teams that support them. For years, the dominant narrative in health IT has been about cost containment: consolidating systems, reducing the number of applications, and squeezing more efficiency out of existing tools. That work is still important, but it’s no longer sufficient.
We are now entering a world where “agentic AI” — AI that can act as a semiautonomous co-worker — will fundamentally reshape how work gets done in healthcare.
At Stanford, we’re evolving our applications strategy to embrace this future, not as a technology project, but as an organizational transformation centered on people, workflows, and value. This article outlines how we’re preparing our applications team for an agentic AI world, how we’re rethinking our relationship with operational partners, and how we’re measuring success along the way.
Starting with agency: Preparing the applications team
Our first priority was not a tool or a platform; it was our people.
We brought our entire applications team together on-site for a full in-person day, after giving them prereads on different visions of AI in healthcare and beyond. We asked them to come prepared to discuss what they thought the future might look like and what it would mean for their work.
Through various discussions and exercises, our team converged on a shared vision: a future of agentic co-workers — AI agents that work alongside humans, not instead of them. In this model:
— AI agents take on more of the routine, repetitive, and rules-based work.
— Human analysts and operational leaders focus on higher-order problem solving, design, and oversight, based on a deep understanding of operational and clinical workflows.
— The boundary between “IT” and “operations” becomes more permeable, especially for lower-risk workflows, with the ability to move some design and execution outside of IT, with guardrails and oversight.
We also surfaced something important: a real and understandable bias against AI within the team, driven largely by fear about job security. We didn’t try to dismiss that fear. Instead, we spent time unpacking it.
The core message to the team was this:
If we don’t change, that’s a bigger threat to our jobs than if we embrace AI. Demand for digital solutions in healthcare is effectively insatiable. The constraint is our capacity to deliver. If we can use AI to increase that capacity — to remediate technical debt, to accelerate build and testing, to support operations more deeply through improved workflows — we become more valuable, not less.
That framing has been critical. We are not “automating away” the applications team. We are evolving their role from builders and maintainers of systems to co-designers and stewards of human-AI workflows.
Two parallel tracks: Fixing the past and designing the future
From that shared vision, we are organizing our work into two major tracks:
1. Using AI to remediate technical debt. Like every large health system, we have accumulated technical debt over time: legacy configurations, workarounds, and workflows that no longer reflect ideal care delivery. Historically, cleaning this up has been slow, manual work, often at the bottom of the priority list.
Agentic AI gives us a new set of tools to tackle this at scale. We’re exploring ways to:
— Use AI to analyze existing configurations and workflows to identify redundancy, inconsistency, and risk.
— Automate portions of build, configuration, and regression testing, especially for lower-risk changes.
— Deploy AI as a first-pass quality assurance layer, with human analysts performing peer review and certification.
The goal is not to remove humans from the loop, but to change where and how they spend their time. Instead of manually doing every step, analysts can oversee, validate, and refine AI-generated work, allowing us to move faster and address more of our backlog. As the team gets more comfortable with the tools, we expect the value to increase further as we experiment with new ways to solve historical problems.
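As a rough illustration of that first-pass analysis pattern, the sketch below scans a set of hypothetical configuration records for conflicting values and emits findings for a human analyst to review. The record format, keys, and values are invented for illustration; a real implementation would work against an actual EHR build export.

```python
from collections import defaultdict

def find_config_debt(records):
    """First-pass scan of configuration records for technical debt.

    Flags keys that carry conflicting values across records and returns
    findings for a human analyst to review; nothing is auto-fixed.
    """
    by_key = defaultdict(set)
    for rec in records:
        by_key[rec["key"]].add(rec["value"])
    return [
        {"key": key, "issue": "conflicting values", "values": sorted(values)}
        for key, values in by_key.items()
        if len(values) > 1
    ]

# Invented example records; a real system would export these from the EHR build.
records = [
    {"key": "order_set.sepsis.version", "value": "v3"},
    {"key": "order_set.sepsis.version", "value": "v1"},  # stale duplicate
    {"key": "flowsheet.vitals.unit", "value": "mmHg"},
]
for finding in find_config_debt(records):
    print(finding["key"], "->", finding["issue"], finding["values"])
```

The key design point is that the AI layer only surfaces candidates; certification of any change stays with the human analyst.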
2. Codesigning agentic workflows with operations. The second track is outward-facing: helping our operational partners deploy agentic tools in their own workflows.
We’re shifting from a “tool request” mindset to a “problem-first” mindset. Rather than starting with, “I want this technology,” we’re asking:
— What are the most important problems you need to solve in your area?
— Where are the biggest pain points for clinicians, staff, or patients?
— What outcomes would define success? Examples include increased throughput, faster response times, fewer errors and less rework, more time redirected to higher-value tasks, and reduced cognitive load.
From there, we work together to map the workflow and decide:
— Which steps are best handled by an AI agent?
— Which steps must remain human-driven?
— Where do we need human-AI collaboration, with clear handoffs and oversight?
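Those step-level decisions can be sketched as a simple routing rule. Everything here (the `Step` structure, the mode tags, the high-risk guardrail) is an illustrative assumption, not an actual governance policy or product.

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    mode: str  # "agent", "human", or "collab", decided during workflow mapping
    risk: str  # "low" or "high"

def route(step: Step) -> str:
    """Decide who executes a workflow step.

    The guardrail that high-risk steps always stay human-driven is an
    illustrative assumption for this sketch, not a stated policy.
    """
    if step.risk == "high":
        return "human"
    if step.mode == "collab":
        return "agent-with-human-oversight"
    return step.mode

# Invented example workflow for a hypothetical message-handling process.
workflow = [
    Step("triage referral message", mode="agent", risk="low"),
    Step("draft patient response", mode="collab", risk="low"),
    Step("approve medication change", mode="agent", risk="high"),
]
for step in workflow:
    print(step.name, "->", route(step))
```

Making the routing explicit in one place is what gives the model its guardrails: a step can only become agent-driven by a deliberate, reviewable decision.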
This requires our application analysts to be deeply embedded in operational workflows, not just as technical implementers but as partners in redesign. It also requires us to think about monitoring and continuous optimization from the start: how we will track performance, detect drift or failure, and iterate over time. And it requires shifting to more of a product mindset, where we start small and continually improve, instead of delivering big-bang fixes and moving on to the next problem.
Human-centered AI and permeable boundaries
A core principle guiding our strategy is “human-centered AI.” We are not deploying AI for its own sake. We are designing systems where humans and AI together produce better outcomes than either could alone.
This has several implications for how we structure work:
— Permeable team boundaries within end-to-end service-line teams: Historically, we’ve had tight boundaries around who can build what and who owns which domain. With AI handling more routine work and providing first-pass QA, these boundaries can be more permeable, especially for lower-risk build tasks and workflows. In practice, one product team might build all components, while other teams that would traditionally configure them perform peer review and validation rather than owning a specific domain.
— End-to-end, cross-functional product teams: Structure around end-to-end service lines that encompass all needed skill sets, while recognizing that tools enable individuals to contribute more. The teams collaborate across the product line to ensure all area-specific considerations are addressed, blurring lines between clinical and revenue-cycle build (for example).
— Peer review as a safety net with continuous monitoring: As AI handles more initial work, peer input remains essential. We envision a model where the human team defines the problem and desired outcome, then AI provides the first draft — a configuration, a test plan, or a workflow proposal — after which human experts validate, correct, and approve. The shift to ongoing monitoring of agentic workers is critical: we must monitor performance, provide feedback and adjustments, and extend or retire capabilities as value shifts. This requires a different management skillset — a blend of workflow understanding, people leadership, and the ability to guide and steward technological solutions.
— Role evolution, not role elimination: The analyst of the future is less a “ticket taker” and more a workflow architect and AI supervisor. They need to understand both the technology and the clinical or operational context deeply enough to design safe, effective human-AI systems.
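One way to picture the first-draft-then-review model described above is the loop below. `first_draft_then_review`, `draft_fn`, and the reviewer function are stand-ins invented for illustration, not a specific AI service or review tool.

```python
def first_draft_then_review(problem, draft_fn, reviewers, max_rounds=3):
    """AI drafts first; human reviewers validate, correct, and approve.

    A reviewer returns a correction string, or None to approve. If the
    reviewers never converge within max_rounds, the work is escalated.
    """
    draft = draft_fn(problem)
    for round_num in range(1, max_rounds + 1):
        corrections = [c for c in (review(draft) for review in reviewers) if c]
        if not corrections:
            return {"status": "approved", "artifact": draft, "rounds": round_num}
        draft += " | revised: " + "; ".join(corrections)
    return {"status": "escalated", "artifact": draft, "rounds": max_rounds}

# Invented reviewer: asks for tests once, then approves the revised draft.
def reviewer(draft):
    return "add regression tests" if "tests" not in draft else None

result = first_draft_then_review(
    "sepsis order set update",
    draft_fn=lambda p: f"proposed config change for {p}",
    reviewers=[reviewer],
)
print(result["status"], "after", result["rounds"], "round(s)")
```

The escalation path matters as much as the happy path: work the loop cannot resolve goes to a human owner rather than shipping unreviewed.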
Measuring success: From tools to outcomes
To know whether we’re succeeding, we’re shifting how we think about metrics. Historically, much of the conversation has been about tools: adoption rates, number of builds, number of tickets closed. Those still matter, but they’re not enough in an agentic AI world.
We’re moving toward measures that are:
— Problem- and outcome-based: Are we solving the top problems identified by a given area? Are we reducing time to complete a workflow, decreasing error rates, improving throughput, or reducing burnout?
— Value-focused: For each agentic workflow, what is the value delivered — to patients, clinicians, researchers, or the organization? Are we enabling higher-quality care, better access, or more efficient operations?
— Continuous and dynamic: Agentic systems are not “set and forget.” We need ongoing monitoring of performance, safety, and user experience, with the ability to adjust quickly as conditions change.
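A minimal sketch of that kind of ongoing monitoring, assuming a simple rolling error-rate check (the class, thresholds, and window size are invented for illustration):

```python
from collections import deque

class WorkflowMonitor:
    """Rolling error-rate check for an agentic workflow (illustrative only).

    Flags drift when recent performance degrades past a baseline tolerance,
    which would prompt human review, adjustment, or rollback.
    """
    def __init__(self, baseline_error_rate, tolerance=0.05, window=100):
        self.baseline = baseline_error_rate
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # True means the step errored

    def record(self, error):
        self.outcomes.append(bool(error))

    def drifting(self):
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough data to judge yet
        rate = sum(self.outcomes) / len(self.outcomes)
        return rate > self.baseline + self.tolerance

monitor = WorkflowMonitor(baseline_error_rate=0.02, tolerance=0.03, window=50)
for _ in range(45):
    monitor.record(False)
for _ in range(5):
    monitor.record(True)  # a burst of failures fills the window
print("drift detected:", monitor.drifting())  # -> drift detected: True
```

Real monitoring would track safety and user-experience signals alongside error rates, but the principle is the same: agentic workflows are instrumented and watched, not set and forgotten.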
This also requires us to keep the broader mission in view. We can’t afford to have tunnel vision on any single workflow or metric. We always have to ask: How does this support the patient, the clinician, the researcher? Are we deploying our limited resources where they will have the greatest impact?
Looking ahead
A few years ago, the dominant levers available to health IT leaders were about consolidation and cost: fewer applications, more standardization, tighter budgets. Those levers are still in play, but they’re no longer the whole story.
Agentic AI gives us a new set of levers: the ability to scale our capacity, remediate technical debt more aggressively, and redesign workflows in ways that were not feasible before. But realizing that potential requires more than technology. It requires:
— A clear, shared vision of human-AI collaboration.
— Honest engagement with staff fears and biases.
— Deep partnership with operations around problems and outcomes.
— New approaches to governance, QA, and measurement.
At Stanford Health Care, we’re still early in this journey, but we’re committed to building an applications organization that is ready for — and helps shape — an agentic AI future.
Key takeaways
1. Start with people, not tools. Preparing teams for agentic AI begins with giving them agency in defining the future, surfacing fears and biases, and reframing AI as a way to increase their value, not replace them.
2. Run two tracks in parallel. Use AI to remediate technical debt and accelerate build/QA, while simultaneously co-designing agentic workflows with operations that are grounded in real problems and outcomes.
3. Design for human-AI collaboration. Treat AI as an agentic co-worker, with clear decisions about which steps are AI-driven, which are human-driven, and how peer review and ongoing oversight will work.
4. Make boundaries more permeable — safely. With AI handling more routine work and first-pass QA, teams can collaborate across traditional domain boundaries, especially for lower-risk workflows, while maintaining strong peer review.
5. Measure value, not just activity. Shift from tool-centric metrics to outcome-based measures tied to the top problems in each area — focusing on impact for patients, clinicians, and researchers, and continuously monitoring and refining agentic workflows over time.
The post Stanford Health Care’s applications strategy for an agentic AI future appeared first on Becker's Hospital Review | Healthcare News & Analysis.