AI Governance Is Becoming an Accreditation Requirement

May 11, 2026

Comparing URAC’s Health Care AI Accreditation Standards to NCQA’s Proposed HPA 2027 AI Standards

Estimated read time: 5 minutes

Let’s be honest. “AI governance” can sound like something your IT team handles while the rest of you get back to accreditation prep. I’ve heard it. “We’re using it for scheduling and documentation—it’s not clinical.”

Here’s the thing: that line between “administrative” and “clinical” AI is thinner than most organizations realize. And right now, staff across healthcare settings are using unapproved tools—what’s being called “shadow AI”—outside of any organizational oversight whatsoever. Not because they’re trying to create problems. Because the tools are useful, they’re easy to access, and nobody told them not to. That creates real exposure: patient safety, data privacy, and organizational liability.

At the same time, administrative work is one of the largest cost drivers in healthcare, and AI is being adopted at speed across documentation, coding, scheduling, utilization management support, and more. The train has left the station. The question now is whether your organization is governing it—or just hoping for the best. By every account, across many sectors, 2026 is shaping up to be a pivotal year for AI. We're hit with reports about it daily. It's here, and it's here to stay.

Both URAC and NCQA have answered that question with accreditation standards. URAC already has a Health Care AI Accreditation program. NCQA has proposed AI standards for Health Plan Accreditation 2027. They’re not identical—but they’re sending the same message: if you use AI in healthcare, you will be expected to prove governance, testing, monitoring, and accountability. Consistently. With documentation.

Let’s break down what each framework actually requires—and what that means for your organization.

URAC: Health Care AI Accreditation

URAC’s Health Care AI Accreditation is a stand-alone program—meaning it applies across risk management, operations, and performance monitoring regardless of what else you’re accredited for. It also makes an important distinction that NCQA’s current proposal does not: it separates requirements for AI developers (those who build or maintain AI tools) from AI users (those who implement and deploy them). If your organization falls into both categories, you’re addressing both sets of requirements.

Core Standards (Apply Broadly)

Whether you’re building AI or using it, URAC’s core standards establish the non-negotiable operational floor:

 Risk Management

  • Demonstrate regulatory compliance and internal controls supporting responsible AI operations
  • Maintain contracting controls for AI-related relationships, including defined agreements, scope, and oversight
  • Protect consumer information with privacy and security safeguards and documented risk assessment
  • Conduct risk analyses that address impact, scalability, technical risk, and business continuity planning

 Operations & Infrastructure

  • Maintain business management and operational policies and procedures that support consistent, ethical AI use
  • Implement staff management expectations including screening, training, and a code of ethical conduct
  • Establish leadership accountability that covers clinical, technical, and ethical leadership functions

 Performance Monitoring & Improvement

  • Maintain a quality management program with defined structure, data collection, analysis, and improvement processes

Developer Requirements (If You Build or Maintain AI Tools)

URAC’s developer requirements are lifecycle-oriented—think of them as governing AI from inception to ongoing maintenance:

  • An AI management plan, plus evidence of annual evaluation outcomes
  • Build, training, and data governance practices that support safe development and maintenance
  • Testing requirements including pre-deployment testing, validation, and addressing model drift and false findings
  • Disclosures clarifying intended use, ethical development approach, data features, testing transparency, and performance limitations

 User Requirements (If You Implement or Deploy AI Tools)

URAC’s user requirements emphasize disciplined adoption and ongoing oversight—not just a one-time implementation:

  • A user management plan, plus evidence of annual evaluation outcomes
  • Testing and monitoring in the real user setting, including verifying applicability to the population being served
  • User training on intended use, population fit, and how to interpret outputs appropriately
  • Responsible use assessments to confirm appropriate application over time
  • Disclosure procedures and documentation of the impact of AI use

 MHR lens: URAC’s framework pushes organizations to operationalize governance, contracting, privacy and security protections, lifecycle testing, monitoring, and disclosures—and to prove it with an accreditation-ready evidence trail. The accreditation team will need to lean heavily on IT to meet these requirements, translating URAC-speak into IT-speak so everyone understands what’s needed. Meeting all of the requirements will take time—and many working sessions to hash out the standards, expectations, and timing.

NCQA: Proposed AI Standards for Health Plan Accreditation 2027

NCQA’s approach is different in structure but aligned in intent. Rather than a stand-alone program, NCQA is embedding AI oversight directly into Health Plan Accreditation through four proposed standards—AI 1 through AI 4—that address program structure, governance, pre-deployment evaluation, and ongoing monitoring. NCQA has also signaled a phased approach: initially focusing on evidence submission and readiness, then evolving toward more formal scoring thresholds as the market matures.

Here’s what each proposed standard is asking for:

AI Program Structure (policies, risk assessment, monitoring, accountability)

NCQA expects health plans to define an AI program structure and oversight process, assign clear accountability, and establish monitoring expectations.

That includes:

  • A documented risk assessment methodology covering transparency and explainability, bias risk, potential for harm, privacy and security vulnerabilities, and the nature of vendor collaboration
  • Risk mitigation planning with safeguards, corrective actions, contingency planning if AI becomes unavailable, and defined triggers for updates
  • Monitoring processes tied to risk level, including attention to bias, drift, errors, ethical concerns, override rates, performance drops, and security events
  • Governing body visibility into performance and error reporting, with senior leadership involvement
  • A defined approach to communicating errors that could affect care or care choices

AI Governance (formal governing body + structured review)

NCQA proposes a formal AI governing body—or an existing body with AI governance explicitly in scope—that reviews and oversees AI technologies and their use cases.

Expectations include:

  • Pre-deployment review and approval of AI technology, use cases, and workflows
  • Routine review of performance and error reporting
  • An escalation model for critical incidents that triggers rapid governance review under defined circumstances, including harm or near-harm events, systemic integrity issues, or formal ethical and safety reports

AI Pre-Deployment Evaluation (local testing + readiness + risk assessment)

This is where NCQA gets specific about readiness. Organizations must demonstrate validation prior to deployment—not just intent:

  • Defined performance metrics and a feedback loop to report errors
  • Testing in a live environment using localized data sets to confirm performance in the plan’s operating context
  • Documentation that bias, ethical concerns, and privacy and security risks were assessed and mitigated before go-live

Ongoing Monitoring and Interventions (continuous oversight + corrective action)

Deployment isn’t the finish line—it’s the starting line. NCQA’s proposed monitoring standard focuses on:

  • Continuous evaluation against predefined metrics and monitoring for errors
  • Revalidation after meaningful modifications to the AI tool or changes that could affect performance, such as workflow changes or policy and guideline updates
  • Root cause analysis and documented corrective actions when issues are identified, including evidence of follow-through
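To make the monitoring bullets above concrete, here is a minimal sketch of how a plan might track them in code. This is purely illustrative: the class name, event types, and drift tolerance are assumptions, not anything NCQA prescribes.

```python
from dataclasses import dataclass

# Hypothetical illustration only: names, event types, and thresholds are
# assumptions for this sketch, not NCQA requirements.

@dataclass
class AIToolMonitor:
    name: str
    baseline_metrics: dict          # predefined performance metrics captured at go-live
    drift_tolerance: float = 0.05   # max relative drop before a metric is flagged

    def needs_revalidation(self, event: str) -> bool:
        """Meaningful modifications or context changes trigger revalidation."""
        return event in {"model_update", "workflow_change", "policy_update"}

    def check_metrics(self, current: dict) -> list:
        """Continuous evaluation against predefined metrics; flag degradations."""
        flagged = []
        for metric, baseline in self.baseline_metrics.items():
            observed = current.get(metric)
            if observed is None or (baseline - observed) / baseline > self.drift_tolerance:
                flagged.append(metric)
        return flagged
```

The point of the sketch: revalidation is event-driven (a workflow change is enough; you don't wait for performance to fall), while degradation detection is metric-driven against the baselines you documented at pre-deployment. Both feed the root cause analysis and corrective-action trail NCQA expects.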

 MHR lens: NCQA is shaping an “AI quality improvement” approach for health plans—documented structure, governing body oversight, pre-deployment readiness, and monitoring with clear escalation and response expectations. If that sounds like your existing QI framework applied to AI, you’re right. That’s intentional. And the governing body must be engaged: it carries fiduciary responsibility for organizational risk, and AI can be very high risk, from reputational damage after a data leak to HIPAA and HITECH fines for PHI and PII exposure. The best practices I’ve heard all align with what NCQA is recommending. It’s a great deal of work, but the stakes are HIGH!

Where URAC and NCQA Align—and Where They Differ

Before you start building two separate programs, here’s the good news: both frameworks share a common governance spine. Understanding where they converge—and where they diverge—will save you significant time and rework.

Strong Alignment

Both URAC and NCQA emphasize:

  • Formal governance with accountable leadership and documentation
  • Pre-deployment evaluation and testing, plus ongoing monitoring for performance degradation and drift
  • Training and operational readiness as core controls
  • Transparency through disclosures, error handling, and documented oversight

Key Differences That Will Affect Your Implementation

Scope and audience: URAC explicitly addresses both developers and deployers of AI. NCQA’s proposed standards are embedded in Health Plan Accreditation and focus specifically on how a plan governs AI within plan operations and covered functions.

Contracting emphasis: URAC is explicit about AI contracting controls and organizational risk management expectations. NCQA’s requirements tend to be oriented toward accreditation evidence of governance and performance oversight rather than prescribing contracting architecture in the same way.

Member notification: NCQA’s proposal more directly ties governance to member impact, including expectations for handling and communicating errors that affect care or care choices.

Incident escalation specificity: NCQA provides a more explicit posture around incident triggers and rapid governance review.

What MHR Recommends You Do Now

Here’s the practical reality: neither framework rewards organizations that wait until survey season to figure out AI governance. The evidence trail has to be built over time—policies, governing body minutes, test results, monitoring reports, incidents, corrective actions. You cannot manufacture that retroactively.

Whether you’re preparing for URAC’s Health Care AI Accreditation, NCQA’s proposed 2027 AI standards, or both, these eight steps will get you moving in the right direction:

1. Build an AI inventory. Document every AI tool in use—including embedded features in vendor products—and map where each one affects decisions, workflows, or member interactions.

2. Classify AI use cases by risk. Tie each tier to monitoring frequency and escalation pathways. Not all AI is equal—your oversight approach shouldn’t be either.

3. Stand up (or expand) an AI governing body. It needs authority, a defined cadence, and documented incident triggers. A committee that meets once a year and rubber-stamps decisions is not going to meet these requirements.

4. Require pre-deployment readiness evidence. Localized testing, defined performance metrics, and an error feedback loop—before go-live, not after.

5. Operationalize monitoring and revalidation. Treat AI as a living process, not a one-time implementation. Meaningful changes to the tool or the workflow trigger revalidation—build that expectation into your program now.

6. Tighten vendor governance and contracting. Intended use, limitations, data handling, testing transparency, and shared accountability need to be explicit in your agreements—not assumed.

7. Train staff and reinforce human oversight. Especially where “administrative” processes affect clinical outcomes, access, or member experience. Shadow AI thrives where training and clear expectations are absent.

8. Build an audit-ready evidence binder. Policies, governing body minutes, test results, monitoring reports, incidents, corrective actions, and disclosures. If it isn’t documented, it didn’t happen.
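Steps 1 and 2 above are ultimately a data problem, so they can be sketched as one. The structure below is a hypothetical starting point—the field names, risk tiers, and cadences are illustrative assumptions, not requirements from either accreditor.

```python
# Hypothetical sketch of steps 1-2: an AI inventory with risk tiers tied to
# monitoring cadence. Tiers, fields, and intervals are illustrative only.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # e.g., internal scheduling aids
    MEDIUM = "medium"  # e.g., documentation and coding support
    HIGH = "high"      # e.g., tools touching care decisions or member access

# Not all AI is equal: each tier maps to a review cadence (in days).
MONITORING_CADENCE_DAYS = {RiskTier.LOW: 180, RiskTier.MEDIUM: 90, RiskTier.HIGH: 30}

@dataclass
class AITool:
    name: str
    vendor: str
    embedded_feature: bool   # AI embedded inside a vendor product counts too
    affects: list            # decisions, workflows, member interactions
    tier: RiskTier

    def review_interval_days(self) -> int:
        return MONITORING_CADENCE_DAYS[self.tier]
```

Even if you keep the inventory in a spreadsheet rather than code, the discipline is the same: every tool gets a row, every row gets a tier, and every tier gets a defined oversight cadence and escalation pathway.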

Standards Aren’t Extra Work—They’re Risk Control

I’ve been doing this long enough to remember when utilization management governance felt like “extra work.” Then it became table stakes. AI governance is on the same trajectory—just moving faster.

Shadow AI and rapid automation are outpacing most policy frameworks right now. URAC and NCQA are both signaling the same thing: organizations that use AI in healthcare will be expected to prove governance, testing, monitoring, and accountability. Consistently. With documentation. The standards aren’t creating the obligation—they’re formalizing one that already exists the moment you put AI into a workflow that touches members, care decisions, or clinical data.

The organizations that will navigate this well are the ones building the infrastructure now—not scrambling to create it when a surveyor asks for evidence.

If you’re not sure where your organization stands, that’s exactly the kind of question MHR can help you answer.  

MHR Support

Whether you’re preparing for URAC’s Health Care AI Accreditation, tracking NCQA’s proposed 2027 standards, or trying to understand what AI governance actually needs to look like inside your organization—MHR’s consultants are here to help you build it right the first time.

👉 Download the AI Governance Readiness Bundle and start identifying governance gaps, documentation risks, and oversight blind spots before they become larger operational problems.

👉 Schedule a Discovery Call: managedhealthcareresources.com/contact

👉 Reach us at: [email protected]

Learn more at managedhealthcareresources.com and follow us on LinkedIn for updates.

Driving healthcare quality one NCQA and URAC accreditation at a time.

 

