As artificial intelligence becomes embedded in clinical care, healthcare leaders must reevaluate how liability, insurance, and governance frameworks intersect. EPIC Perspectives explores the emerging risk landscape and why the traditional rules of coverage may no longer apply.
The Quiet Infiltration of AI into Clinical Care
Artificial intelligence is no longer confined to pilots or research labs; it’s operating inside the delivery of care. Across hospital systems, AI is being used to transcribe clinical conversations, flag potential diagnoses, predict deterioration, support surgical navigation, and triage patients through conversational chatbots.
Health systems are rapidly integrating these tools in pursuit of greater accuracy, efficiency, and cost control. At institutions like Mass General Brigham, dozens of AI models are now running in production. Nationwide, adoption rates are accelerating, driven by clinical shortages, documentation burden, and a push for data-driven outcomes.
But as these systems scale, a new liability paradigm is emerging. When an AI tool makes a mistake, or influences a decision that leads to harm, where does accountability lie? And are healthcare organizations insured for what happens next?
When the Machine Gets It Wrong
AI-driven error is not a theoretical concern; it’s already happening.
In one widely reported case, a sepsis detection model implemented by multiple U.S. hospitals produced false alarms in the majority of cases while simultaneously failing to flag many true instances. Clinical staff, inundated with alerts, began to ignore the system. In at least one hospital, this breakdown reportedly contributed to delayed recognition and fatal consequences.
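The alert-fatigue dynamic is largely a matter of base rates. As a purely illustrative calculation (the sensitivity, specificity, and prevalence figures below are hypothetical, not drawn from the reported case), consider a detection model applied to a population in which true sepsis is rare:

$$
\mathrm{PPV} \;=\; \frac{\text{sens} \times \text{prev}}{\text{sens} \times \text{prev} + (1-\text{spec})(1-\text{prev})} \;=\; \frac{0.70 \times 0.05}{0.70 \times 0.05 + 0.20 \times 0.95} \;\approx\; 0.16
$$

Under those assumed numbers, roughly five of every six alerts would be false positives, even though the model appears respectable on sensitivity and specificity alone. It takes very little of this before clinical staff stop trusting the alarm.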
In another incident, an AI-enabled documentation tool incorrectly inserted a diagnosis into a patient’s chart. The clinician, operating under time constraints, failed to notice the error. The resulting cascade of administrative, billing, and clinical missteps triggered both regulatory exposure and the threat of legal action.
And in the payer space, Medicare Advantage plans have been named in federal lawsuits alleging that AI systems used to automate prior authorization decisions led to the denial of medically necessary care. These cases are beginning to test the liability boundaries of insurers themselves when AI is placed at the heart of benefit determinations.
Each of these scenarios raises a different but related set of questions about what coverage applies when decisions are no longer made solely by humans.
What E&O Insurance Was Designed to Do
Errors & Omissions (E&O) insurance exists to cover financial loss or harm arising from mistakes in the performance of professional services. In healthcare, it traditionally applies to administrative functions, such as billing errors, credentialing mistakes, or delays in referral processing, rather than direct clinical judgment.
For hospitals and provider organizations, E&O is often packaged with broader professional liability coverage or structured as part of a layered risk program. Unlike medical malpractice insurance, which responds to bodily injury resulting from direct patient care, E&O addresses less tangible but equally damaging failures in service delivery or advice.
A typical E&O policy covers:
- Negligent performance of services (e.g., revenue cycle management errors)
- Failure to render services (e.g., missed follow-ups due to flawed workflow systems)
- Administrative missteps that result in financial harm to patients, insurers, or vendors
- Alleged breaches of professional duty tied to compliance, documentation, or decision-support failures
What it doesn’t cover, at least not always, is harm resulting from third-party AI tools embedded in clinical workflows. And that is becoming a problem.
Why Traditional E&O Is Being Stress-Tested by AI
As AI tools take on roles that straddle the line between technical infrastructure and clinical guidance, the triggers that historically defined E&O applicability are breaking down.
Consider an AI tool that auto-generates clinical documentation using ambient voice technology. If that documentation is inaccurate and results in improper billing, an audit, or regulatory scrutiny, does the fault lie with the software, the physician, or the hospital that deployed it?
If the claim centers on overbilling tied to flawed documentation, the hospital may look to E&O. But the insurer may argue that the error arose from a technology failure (better suited to a tech E&O or product liability policy), or that the underlying service was clinical in nature (requiring malpractice coverage).
Now consider a risk model that suggests early discharge for a post-operative patient based on a flawed algorithm. If the patient suffers complications, the provider may face allegations of negligence. But because the decision was informed partly by AI and partly by clinical interpretation, the claim may fall into a gray zone: too technical for malpractice, too clinical for tech E&O, and outside the bounds of traditional administrative E&O.
These edge cases are no longer rare. AI has become embedded in scheduling, intake, triage, diagnostics, and discharge. Many of these tools aren’t formally approved by the FDA and don’t qualify as regulated medical devices. Their role in care delivery is significant, but where they sit within an insurance program is often undefined.
E&O Policy Language Is Not Keeping Up
At EPIC, we’ve reviewed dozens of healthcare E&O policies issued over the past five years. In most cases, policy language remains silent on artificial intelligence. Some contain general exclusions for technology-related failure. Others make ambiguous reference to “third-party services” or “tools not under the direct control of the insured.”
A handful of carriers have begun to introduce exclusions specifically targeting unregulated AI applications, particularly in the context of documentation and clinical decision support. Others have inserted endorsements clarifying that AI-induced errors must be tied directly to the insured’s services, not to the operation or failure of a software product.
This lack of consistency is creating uncertainty in claims management. Hospitals may believe they are covered, only to face delays, disputes, or denials when a loss arises. Insurers, meanwhile, are evaluating exposures that span malpractice, E&O, tech liability, and cyber, sometimes within a single event.
Until policy language evolves, risk managers are advised to assume that AI-related incidents may not be covered under existing E&O terms, unless affirmatively endorsed.
Evolving Products: eHealth E&O and Blended Coverage Models
In response to the convergence of technology and clinical risk, a small but growing number of carriers are offering specialized products designed to bridge these coverage gaps.
These include:
- Blended eHealth E&O Policies: These combine traditional healthcare E&O with tech E&O and cyber, allowing for coordinated coverage of incidents that span operational, software, and compliance fault lines.
- AI Liability Endorsements: Add-ons that extend professional liability coverage to include acts, errors, or omissions arising from approved AI tools, sometimes limited to tools that meet validation or certification thresholds.
- Vendor-Insured Contract Models: Risk transfer arrangements in which AI vendors are contractually required to carry their own E&O or product liability coverage, with indemnity clauses favoring the healthcare client.
These emerging structures reflect a recognition that the traditional “siloed” approach to liability (clinical, administrative, technological) no longer reflects how modern care is delivered. As software becomes co-author and co-decision maker in clinical settings, the lines between professional and technical error blur.
Looking Ahead: Toward a Purpose-Built Risk Architecture
Artificial intelligence is reshaping healthcare: quietly, quickly, and with profound implications for how risk is transferred, underwritten, and adjudicated.
Errors & Omissions insurance was built to address professional mistakes. But the definition of “professional” is changing. The emergence of semi-autonomous tools that generate notes, suggest diagnoses, make predictions, and sometimes override human workflows is testing the limits of every policy line.
Healthcare executives should not wait for litigation or regulation to force adaptation. Forward-looking organizations are already aligning their insurance architecture with the realities of modern care: shared judgment, diffuse responsibility, and machine-human collaboration.
At EPIC, we believe this evolution is not just technical; it’s strategic. E&O, in this context, becomes not just a product, but a lens through which to evaluate governance, procurement, and institutional resilience.
The healthcare industry will continue to embed AI deeper into the delivery of care, and its insurance and governance frameworks must evolve in step.