Most health center AI governance falls into one of two failure modes. Either there is no framework at all — AI tools get adopted informally, one department at a time, with no oversight until something goes wrong — or someone downloaded an enterprise governance template from a large health system, adopted it wholesale, and now every AI use case sits in a review queue that never moves.
Both fail. They fail for the same reason: they treat AI governance as a binary problem. You either have it or you do not. In reality, governance is a calibration problem. And most organizations get the calibration wrong.
The Template Trap
Here is what typically happens. A health center CEO reads an AHA brief or attends a conference session on AI governance. They come back and tell their compliance director to "put together an AI governance framework." The compliance director finds a template — often modeled on large academic medical center committees — and adopts it.
That template usually includes a standing AI governance committee, a use case intake form, a risk assessment rubric, a review and approval process, and quarterly board reporting. On paper, it looks thorough. In practice, it creates a single pipeline through which every AI use case must pass, from a chatbot answering patient portal questions to a clinical decision support tool recommending medication adjustments.
The chatbot and the medication tool have radically different risk profiles. They do not belong in the same review pipeline. But the template treats them identically, because the template was designed for an organization with a dedicated AI team, legal staff, and a standing committee that meets monthly. A 15-provider FQHC does not have any of those things.
The result: the committee meets twice, reviews one use case, gets bogged down in process questions, and quietly stops meeting. The framework exists on paper. Nothing flows through it. Meanwhile, front-line staff adopt AI tools anyway — because the work demands it and the governance process offers no realistic path to approval.
The Opposite Failure
The no-governance approach is more common and more dangerous. In this mode, AI adoption happens organically. A billing manager starts using an AI coding assistant. A nurse uses ChatGPT to draft patient education materials. A quality director experiments with an AI tool for HEDIS measure gap analysis.
None of these are inherently reckless. Some are genuinely useful. But without any intake or classification process, the organization cannot distinguish between low-risk productivity tools and high-risk clinical applications. It cannot track what AI is in use, where patient data might be flowing, or whether any of these tools meet HIPAA requirements for business associate agreements.
When the compliance survey arrives — or worse, when an incident occurs — the organization discovers it has been running AI in production with no documentation, no risk assessment, and no oversight structure. The liability exposure is real. The reputational risk is real. And the remediation is significantly harder than building the framework would have been.
What Calibrated Governance Actually Looks Like
The CHAI (Coalition for Health AI) framework and AMIA position statements converge on a principle that most health centers miss: not all AI needs the same level of governance. The skill is in building a tiered system that matches oversight intensity to actual risk.
A calibrated framework has three tiers, not one pipeline.
Tier 1: Expedited approval
Administrative and operational AI that does not touch clinical decisions or protected health information. Examples: scheduling optimization, supply chain forecasting, staff productivity tools, marketing content generation.
These need a lightweight review — confirm no PHI exposure, verify the vendor's security posture, document the use case — and move on. A compliance director can approve these in a week, not a quarter. Forcing these through a full committee review is how you kill adoption of tools that carry negligible risk.
Tier 2: Standard committee review
Clinical-adjacent and PHI-touching AI that informs but does not determine clinical decisions. Examples: population health analytics, HEDIS gap identification, revenue cycle AI, patient communication tools that handle PHI, ambient documentation.
These need the full review: risk assessment, BAA verification, clinical workflow analysis, bias evaluation, and committee sign-off. This is where the governance committee earns its keep. The review should be thorough but time-bound — 30 to 60 days, not indefinite.
Tier 3: Enhanced review with board reporting
Clinical decision support and diagnostic AI that directly influences patient care. Examples: sepsis prediction models, medication interaction alerts driven by AI, diagnostic imaging AI, treatment recommendation engines.
These need everything in Tier 2 plus ongoing monitoring, outcome validation, bias auditing against your specific patient population, and regular board-level reporting. These are the use cases where governance failures cause patient harm. They deserve the heaviest process — and they are the only ones that do.
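To make the tier structure concrete, here is one way the review requirements above could be encoded as a simple policy table. This is a minimal sketch: the `TierPolicy` structure and field names are illustrative, and the only turnaround numbers taken from the text are the one-week Tier 1 target and the 30-to-60-day Tier 2 window.

```python
from dataclasses import dataclass

@dataclass
class TierPolicy:
    """Review requirements for one governance tier. Field names are illustrative."""
    approver: str                  # who can sign off at this tier
    required_steps: list[str]      # minimum review activities
    target_turnaround_days: int    # time-bound review window
    board_reporting: bool = False  # ongoing board-level reporting (Tier 3 only)

TIER_POLICIES = {
    1: TierPolicy(
        approver="compliance director",
        required_steps=[
            "confirm no PHI exposure",
            "verify vendor security posture",
            "document the use case",
        ],
        target_turnaround_days=7,
    ),
    2: TierPolicy(
        approver="governance committee",
        required_steps=[
            "risk assessment",
            "BAA verification",
            "clinical workflow analysis",
            "bias evaluation",
            "committee sign-off",
        ],
        target_turnaround_days=60,
    ),
    3: TierPolicy(
        approver="governance committee",
        required_steps=[
            "all Tier 2 steps",
            "ongoing monitoring",
            "outcome validation",
            "bias audit against the local patient population",
        ],
        target_turnaround_days=60,  # assumption: no explicit window is stated for Tier 3
        board_reporting=True,
    ),
}
```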
The Risk Classification That Matters
Most governance templates include a risk matrix. Most of those matrices are wrong for community health organizations because they were built for large health systems with different risk profiles.
A community health center's AI risk classification should weight four factors:
- Patient safety impact. Does this tool influence clinical decisions? Could a failure or bias directly harm a patient? This is the primary axis.
- PHI exposure. Does the tool process, store, or transmit protected health information? If yes, BAA and security review are non-negotiable regardless of the use case.
- Population bias risk. Community health organizations serve populations that are systematically underrepresented in AI training data — Medicaid patients, tribal communities, rural populations, non-English speakers. An AI tool validated on commercial insurance populations may perform differently on yours. This factor is often missing from enterprise templates.
- Operational dependency. If this tool fails or produces bad output, what breaks? A scheduling tool that goes down is an inconvenience. A clinical decision support tool that produces incorrect recommendations is a patient safety event.
Weight these factors and you get a risk score that maps directly to your three tiers. No ambiguity about which pipeline a use case enters. No committee debates about whether a billing AI needs the same review as a diagnostic tool.
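As a rough sketch of how that weighting could work in practice: the 0-to-3 factor scale, the weights, and the tier cutoffs below are illustrative assumptions, not numbers prescribed by any framework. The structure is the point: score the four factors, escalate anything that directly influences clinical decisions, and route everything else by score.

```python
# Minimal scoring sketch. The 0-3 factor scale, the weights, and the tier
# cutoffs are illustrative assumptions, not values prescribed by any framework.
WEIGHTS = {
    "patient_safety_impact": 0.40,   # the primary axis
    "phi_exposure": 0.25,
    "population_bias_risk": 0.20,
    "operational_dependency": 0.15,
}

def risk_score(factors: dict[str, int]) -> float:
    """Each factor is rated 0 (none) to 3 (high); returns a weighted 0-3 score."""
    return sum(WEIGHTS[name] * factors.get(name, 0) for name in WEIGHTS)

def assign_tier(factors: dict[str, int]) -> int:
    """Map the four factors to a governance tier; cutoffs are illustrative."""
    if factors.get("patient_safety_impact", 0) >= 2:
        return 3  # directly influences clinical decisions: enhanced review
    if risk_score(factors) >= 1.0 or factors.get("phi_exposure", 0) >= 1:
        return 2  # PHI-touching or clinically adjacent: standard committee review
    return 1      # administrative or operational: expedited approval

# Example: an ambient documentation tool handles PHI but does not drive
# clinical decisions, so it lands in Tier 2.
print(assign_tier({"patient_safety_impact": 1, "phi_exposure": 3,
                   "population_bias_risk": 2, "operational_dependency": 1}))  # -> 2
```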
Escalation Paths and the Blocking Decision
The hardest governance decision is not approving a use case. It is blocking one.
A governance framework that cannot say no is not governance — it is documentation. But blocking has a cost. Every AI use case that gets stuck in review or rejected outright is a productivity gain your organization does not capture, a quality improvement that does not happen, a competitive gap that widens.
Calibrated governance makes the blocking criteria explicit. A use case gets blocked when:
- It touches clinical decisions and the vendor cannot demonstrate validation on a population comparable to yours
- It processes PHI and the vendor will not sign a BAA
- It creates operational dependency without a documented fallback process
- The bias risk is unquantified and the vendor cannot provide demographic performance data
Everything else gets a tier assignment and moves through the appropriate pipeline. The committee's job is not to evaluate every tool. It is to ensure the right tools get the right level of scrutiny — and that the low-risk tools get out of the way.
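The blocking criteria translate almost directly into a pre-screen that runs before tier assignment. In the sketch below, the `UseCase` intake record and its field names are hypothetical, but each check corresponds to one of the four criteria above.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """Hypothetical intake record; the field names are illustrative."""
    touches_clinical_decisions: bool
    validated_on_comparable_population: bool
    processes_phi: bool
    vendor_will_sign_baa: bool
    creates_operational_dependency: bool
    has_documented_fallback: bool
    bias_risk_quantified: bool
    vendor_provides_demographic_data: bool

def blocking_reasons(uc: UseCase) -> list[str]:
    """Return every explicit blocking criterion this use case trips."""
    reasons = []
    if uc.touches_clinical_decisions and not uc.validated_on_comparable_population:
        reasons.append("touches clinical decisions without validation on a comparable population")
    if uc.processes_phi and not uc.vendor_will_sign_baa:
        reasons.append("processes PHI and the vendor will not sign a BAA")
    if uc.creates_operational_dependency and not uc.has_documented_fallback:
        reasons.append("creates operational dependency with no documented fallback process")
    if not uc.bias_risk_quantified and not uc.vendor_provides_demographic_data:
        reasons.append("bias risk is unquantified and no demographic performance data is available")
    return reasons  # an empty list means: assign a tier and move it through the pipeline
```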
Governance Maturity Is Not a Checkbox
The organizations that get this right treat governance as an operational capability, not a compliance exercise. They start with a simple tiered intake process. They refine the risk classification as they review more use cases. They build institutional knowledge about which vendors meet their standards and which do not. They report to the board on AI adoption trends, not just individual approvals.
This is governance maturity — and most community health organizations are at stage zero. Not because they are negligent, but because the available frameworks were designed for organizations with ten times their staff and budget.
Where to Start
If your organization has no AI governance framework, do not start by forming a committee. Start by inventorying what AI is already in use. You will find more than you expect. Then classify each tool by the four risk factors above. That classification tells you which tools need immediate attention and which are fine where they are.
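Even a flat, spreadsheet-style inventory scored on the four factors gives you a triage order. In the sketch below, the tools echo the examples earlier in the article, the 0-3 ratings are made up, and the unweighted sum is used only to sort; in practice you would apply your own weighted classification.

```python
import csv
from io import StringIO

# Illustrative inventory only: example tools with made-up 0-3 factor ratings.
INVENTORY_CSV = """tool,patient_safety_impact,phi_exposure,population_bias_risk,operational_dependency
AI coding assistant,0,2,1,1
ChatGPT patient education drafts,1,1,2,0
HEDIS gap analysis tool,1,3,2,2
"""

def triage(inventory_csv: str) -> list[tuple[str, int]]:
    """Sort tools by total factor score so the riskiest get attention first."""
    reader = csv.DictReader(StringIO(inventory_csv))
    scored = [(row["tool"], sum(int(row[k]) for k in row if k != "tool"))
              for row in reader]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

for tool, score in triage(INVENTORY_CSV):
    print(f"{score:>2}  {tool}")
```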
If your organization has a framework that nobody uses, the problem is almost certainly calibration. You built one pipeline for all AI, and the pipeline is too heavy for most use cases. Add the tier structure. Give your compliance director authority to expedite Tier 1 approvals without committee review. Reserve the committee for Tiers 2 and 3, where the review actually matters.
Either way, governance that matches your organization's size, resources, and patient population will outperform any borrowed template from a system that looks nothing like yours.
LumenHealth provides AI governance assessment and readiness tools for community health organizations. This article is informational and does not constitute legal, regulatory, or compliance advice.
Assess your organization's AI governance readiness
37 questions across five domains. Free facilitated debrief with your leadership team.