How we assess governance maturity. Every domain. Every score.

Transparent methodology means you can explain every result to your board, your auditors, and your clinical leadership.

Foundation

Built on existing standards, not invented from scratch.

  • NIST AI RMF. What it informs: risk identification, measurement, and mitigation framework; shapes our Risk & Compliance domain questions. Limitations: sector-agnostic; does not address healthcare-specific regulatory context (HIPAA, 42 CFR Part 2).
  • ONC HTI-1 Rule. What it informs: transparency and decision support requirements for health IT; informs Vendor & Technology Management scoring. Limitations: focused on certified health IT modules; does not cover non-certified AI tools or ambient AI.
  • WHO AI Ethics Guidance. What it informs: ethical principles for AI in health (autonomy, equity, transparency, accountability); shapes the Clinical Integration domain. Limitations: principles-based guidance, not operational controls; must be translated into actionable governance steps.
  • AHA/AMA AI Guidelines. What it informs: clinical AI oversight models, physician-in-the-loop requirements, and liability considerations. Limitations: oriented toward hospital systems; may not map directly to FQHC/safety-net operational models.
  • HRSA HCCN Requirements. What it informs: health center network governance expectations; contextualizes multi-site governance and network-level oversight. Limitations: does not yet include AI-specific requirements; our assessment anticipates where requirements are heading.
  • CMS Conditions of Participation. What it informs: quality and safety standards that AI governance must align with; informs compliance scoring. Limitations: AI not yet explicitly addressed in the CoPs; governance mapping is anticipatory.

Assessment framework

Five domains. 37 questions.

Each domain captures a distinct dimension of AI governance. Together they produce a maturity profile that shows where your organization is strong and where gaps create risk.

Governance Structure

Does your organization have formal policies, committees, and accountability structures for AI oversight? Who approves new AI tools? Who reviews them after deployment?

8 questions

Vendor & Technology Management

How do you evaluate, procure, and monitor AI-enabled tools and their vendors? Do you have transparency requirements in contracts? Do you track which tools use AI?

8 questions

Risk & Compliance

How do you identify, assess, and mitigate risks from AI systems? Do you monitor for bias, drift, and safety issues? How does AI governance integrate with existing compliance programs?

7 questions

Clinical Integration

How is AI integrated into clinical workflows? Do clinicians understand when AI informs their tools? Is there a process for clinical validation and override?

7 questions

Strategy & Leadership

Does leadership understand AI's role in the organization? Is there a strategic vision for AI that connects to organizational mission? Is the workforce prepared?

7 questions

Scoring

Five-level maturity model

Each question is scored 1–5. A domain's score is the mean of its question scores, and the overall maturity score is the equal-weighted mean of the five domain scores. The result is a value from 1.0 to 5.0 that maps to one of five maturity levels.
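The scoring model above can be sketched in a few lines. This is a minimal illustration of the described arithmetic; the function names are ours, not part of any published LumenHealth tool.

```python
def domain_score(question_scores):
    """Mean of a domain's 1-5 question scores."""
    return sum(question_scores) / len(question_scores)

def overall_score(domain_scores):
    """Equal-weighted mean across the five domain scores."""
    return sum(domain_scores) / len(domain_scores)

def maturity_level(score):
    """Map a 1.0-5.0 overall score to its maturity level band."""
    if score < 1.5:
        return "No Governance"  # 1.0 - 1.4
    if score < 2.5:
        return "Ad Hoc"         # 1.5 - 2.4
    if score < 3.5:
        return "Emerging"       # 2.5 - 3.4
    if score < 4.5:
        return "Structured"     # 3.5 - 4.4
    return "Mature"             # 4.5 - 5.0
```

For example, an 8-question domain scored [4, 3, 5, 4, 4, 3, 4, 5] yields a domain score of 4.0, and an overall score of 3.8 falls in the Structured band.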

1.0 – 1.4

No Governance

No formal AI policies, no oversight structure, no inventory of AI tools in use.

1.5 – 2.4

Ad Hoc

Some awareness of AI in the organization. Individual departments may have informal practices, but nothing standardized.

2.5 – 3.4

Emerging

Governance is being developed. Policies may exist in draft. Some oversight is in place but not consistently applied.

3.5 – 4.4

Structured

Formal governance framework in operation. Policies are documented, committees meet regularly, compliance is monitored.

4.5 – 5.0

Mature

Governance is embedded in operations. Continuous improvement processes, regular audits, board-level reporting, staff training.

Why equal weighting

All five domains are weighted equally in the overall score. We considered differential weighting (e.g., weighting Risk & Compliance higher) but found that it introduced subjective bias about which domain “matters more”—a judgment that varies by organization type, size, and regulatory exposure. Equal weighting lets the domain-level breakdown tell the nuanced story.
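To make the tradeoff concrete, here is a small sketch with hypothetical domain scores and an illustrative (not endorsed) weighting choice: doubling the weight on Risk & Compliance pulls the overall score toward that one domain's score.

```python
# Hypothetical domain scores, in order: Governance Structure,
# Vendor & Technology Management, Risk & Compliance,
# Clinical Integration, Strategy & Leadership.
scores = [4.5, 4.2, 3.9, 2.1, 4.3]

# Equal weighting, as used in the assessment: plain mean.
equal = sum(scores) / len(scores)

# Differential weighting (illustrative only): doubling Risk & Compliance
# embeds a judgment that this domain "matters more" for every organization.
weights = [1, 1, 2, 1, 1]
weighted = sum(w * s for w, s in zip(weights, scores)) / sum(weights)
```

The two overall numbers differ even though the underlying answers are identical, which is exactly the subjective shift equal weighting avoids.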

Output

What you get and how to use it

Your assessment produces three outputs: an overall maturity score, a domain-level breakdown showing relative strengths and gaps, and a set of priority recommendations based on your lowest-scoring areas.

The domain breakdown is often more valuable than the overall score. An organization scoring 3.8 overall might have a 4.5 in Governance Structure but a 2.1 in Clinical Integration—revealing exactly where to focus.
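One way to read a breakdown like that programmatically is to sort domains by score, so the lowest-scoring areas, which drive the priority recommendations, surface first. The numbers below are hypothetical, matching the 3.8-overall example above.

```python
# Hypothetical domain breakdown for an organization scoring 3.8 overall.
breakdown = {
    "Governance Structure": 4.5,
    "Vendor & Technology Management": 4.2,
    "Risk & Compliance": 3.9,
    "Clinical Integration": 2.1,
    "Strategy & Leadership": 4.3,
}

# Weakest domains first: these are where recommendations concentrate.
priorities = sorted(breakdown, key=breakdown.get)

overall = sum(breakdown.values()) / len(breakdown)
```

Here Clinical Integration sorts to the top of the priority list even though the overall score sits comfortably in the Structured band.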

Benchmarking

As we collect assessment data across community health organizations, we are building anonymized benchmarks by organization type, size, and region. These benchmarks will allow you to see how your governance maturity compares to peer organizations—not just where you are, but where your peers are.

Explicit gaps

What we don't know.

Honest methodology requires naming the gaps. These are the areas where our assessment relies on assumptions or where the evidence base is still developing.

  • No regulatory standard yet. There is no HRSA requirement, CMS condition of participation, or federal rule that mandates AI governance for health centers. Our assessment anticipates where requirements are heading based on regulatory signals, but the ground truth is still forming.
  • Self-reported data. Assessment responses reflect the respondent's perception of their organization's practices, which may differ from actual implementation. We do not audit or verify responses.
  • Single-respondent perspective. One person fills out the assessment. An IT director and a CMO at the same organization might score differently. We recommend having multiple leaders complete the assessment independently.
  • Rapidly evolving field. AI capabilities, vendor practices, and regulatory expectations are changing faster than any assessment can track. Scores should be treated as a point-in-time snapshot, not a permanent rating.
  • Limited peer benchmarks. Until we have sufficient assessment volume across organization types, benchmark comparisons are preliminary. We are transparent about sample sizes when presenting benchmarks.

See where you stand.

The assessment takes about 15 minutes. You'll get your maturity score, domain breakdown, and priority recommendations immediately.

Take the Assessment

Legal disclaimer

LumenHealth provides AI governance assessments for informational and planning purposes only. Assessment results are not compliance certifications, legal opinions, regulatory audit findings, or accreditation determinations. Scores reflect the information provided by the respondent and our current understanding of governance best practices. All governance, compliance, and technology decisions should be made in consultation with qualified legal counsel, compliance professionals, and technology advisors. LumenHealth assumes no liability for decisions made based on assessment results.