There is no healthcare AI law. There is no single statute you can read, no unified compliance checklist you can hand to your team, no certification you can earn that means "done." People use this fact to justify inaction. That is a mistake.
What exists instead is worse: a distributed, accelerating patchwork of requirements forming across federal agencies, state legislatures, and international bodies — each with its own enforcement mechanism, each with its own timeline, and none of them waiting for the others to finish.
If you are a compliance director, CEO, or legal counsel at a community health organization, the question is not whether AI regulation will affect you. It is whether you will be ready when it does.
The Patchwork Is the Regulation
There is no single AI law because AI touches everything. And everything already has a regulator.
- FDA is expanding its oversight of clinical decision support software. The agency's framework for Software as a Medical Device (SaMD) already applies to AI tools that inform diagnosis or treatment. If your organization uses AI-assisted clinical tools, FDA classification and reporting requirements may already apply — and the framework is tightening, not loosening.
- HIPAA was written before machine learning existed, but OCR has made clear that its rules apply whenever AI systems process PHI: the organizations deploying those systems are covered entities, and the vendors operating them on their behalf are business associates. Model training on patient data, de-identification standards for AI datasets, and breach notification when an algorithm exposes protected information — all of this falls under existing HIPAA enforcement. The rules have not changed. The technology they apply to has.
- ONC interoperability rules under the 21st Century Cures Act impose transparency and data-sharing requirements that directly affect AI systems integrated with EHRs. Information blocking provisions do not carve out algorithmic decision-making.
- CMS conditions of participation govern what hospitals and health centers must do to receive Medicare and Medicaid reimbursement. As CMS incorporates AI-related quality measures and reporting requirements, organizations that cannot demonstrate governance over their AI tools risk their participation status.
- State legislatures are moving faster than Congress. Colorado's AI Act requires impact assessments for high-risk AI systems, including those used in healthcare decisions. Illinois has biometric and algorithmic transparency requirements. California's proposed AI regulations would impose disclosure and audit obligations. These are not theoretical — Colorado's law is enacted, with compliance obligations arriving on a fixed statutory timeline.
None of these regulators are coordinating a unified rollout. Each is moving on its own timeline, with its own enforcement priorities. That is the regulatory environment you are operating in right now.
The Voluntary Window Is Closing
For the past several years, healthcare AI governance has been largely voluntary. Industry coalitions like the Coalition for Health AI (CHAI) have produced frameworks and principles, and the AMA and AHA have issued their own guidance. These are useful. They are also optional.
That is changing. The trajectory from voluntary framework to mandatory requirement follows a pattern visible across every regulated industry: early adopters self-regulate, regulators study the landscape, incidents create political pressure, and mandatory requirements follow. Healthcare AI is somewhere between stages two and three.
Executive orders on AI safety have directed federal agencies to develop sector-specific requirements. HHS has published its own AI strategy. The EU AI Act classifies most healthcare AI as high-risk, requiring conformity assessments, human oversight documentation, and ongoing monitoring — and any US organization serving EU patients or partnering with EU entities will need to comply.
The organizations that treat voluntary frameworks as a ceiling — "we adopted CHAI principles, so we're covered" — will discover that voluntary adoption does not satisfy mandatory requirements. The organizations that treat voluntary frameworks as a floor — building governance infrastructure that exceeds current guidance — will find that compliance is an incremental adjustment, not a crisis.
Anticipatory Compliance Is Cheaper Than Retrofitting
Here is the practical argument: building AI governance now, before requirements are finalized, costs a fraction of retrofitting compliance after enforcement begins.
This is not speculative. It is the same math every healthcare organization has already done with HIPAA, Meaningful Use, and information blocking. Organizations that built compliant infrastructure early absorbed the cost gradually. Organizations that waited until enforcement began scrambled to hire consultants, rewrite policies, retrain staff, and remediate systems — all under deadline pressure and audit risk.
AI governance follows the same pattern, with one difference: the attack surface is larger. A single AI tool can touch clinical decision-making, patient data, billing, workforce management, and quality reporting simultaneously. Retrofitting governance across all of those domains after the fact is not just expensive. It is operationally disruptive in ways that a community health organization — already running on thin margins — cannot easily absorb.
What anticipatory compliance looks like in practice:
- Inventory your AI tools. Every algorithm, model, automation, and decision-support system in use — including ones embedded in vendor products. You cannot govern what you have not cataloged.
- Map each tool to its regulatory exposure. Which tools touch PHI? Which inform clinical decisions? Which affect billing or reimbursement? Which operate in states with AI transparency laws? Each mapping identifies a compliance obligation that either exists now or is forming. (A sketch of how the inventory and mapping steps can be recorded follows this list.)
- Establish governance documentation. Risk assessments, validation records, human oversight protocols, bias monitoring, incident response procedures. These are the artifacts regulators will ask for. Creating them now, when you have time to be thoughtful, is categorically different from creating them under audit.
- Assign accountability. AI governance without clear ownership is a policy document, not a program. Someone — a role, a committee, a named individual — must be responsible for ongoing compliance as requirements evolve.
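For organizations tracking this in spreadsheets today, the inventory and mapping steps can just as easily live in a simple data structure. The sketch below is illustrative only, not a standard schema: the `AITool` record, the `Exposure` categories, the `STATE_AI_LAWS` list, and the `map_exposures` helper are all assumptions drawn from the examples in this article, and the substantive flags still require human judgment about what each tool actually does.

```python
# A minimal sketch of an AI tool inventory with regulatory-exposure mapping.
# Field names and categories are illustrative assumptions, not a standard
# schema; adapt them to your organization's tools and jurisdictions.

from dataclasses import dataclass, field
from enum import Enum, auto


class Exposure(Enum):
    """Illustrative exposure categories from the list above."""
    PHI = auto()                 # HIPAA: tool stores or processes PHI
    CLINICAL_DECISION = auto()   # FDA SaMD: informs diagnosis or treatment
    BILLING = auto()             # CMS: affects reimbursement or reporting
    STATE_AI_LAW = auto()        # e.g., Colorado AI Act, Illinois rules


@dataclass
class AITool:
    name: str
    vendor: str                  # "internal" for home-grown tools
    description: str
    states_deployed: list[str] = field(default_factory=list)
    exposures: set[Exposure] = field(default_factory=set)
    owner: str = "unassigned"    # accountable role or committee


# States with AI-specific statutes, per the examples in this article.
# Verify against current law before relying on any such list.
STATE_AI_LAWS = {"CO", "IL", "CA"}


def map_exposures(tool: AITool) -> set[Exposure]:
    """Derive jurisdiction-based exposure automatically; substantive flags
    (PHI, clinical use, billing) still require human review."""
    derived = set(tool.exposures)
    if STATE_AI_LAWS & set(tool.states_deployed):
        derived.add(Exposure.STATE_AI_LAW)
    return derived


# Usage: catalog a vendor-embedded tool and surface its obligations.
sepsis_alert = AITool(
    name="Sepsis early-warning score",
    vendor="EHR-embedded (vendor module)",
    description="Flags inpatients at elevated sepsis risk",
    states_deployed=["CO"],
    exposures={Exposure.PHI, Exposure.CLINICAL_DECISION},
    owner="AI Governance Committee",
)

for exposure in sorted(map_exposures(sepsis_alert), key=lambda e: e.name):
    print(f"{sepsis_alert.name}: review obligations for {exposure.name}")
```

The format matters less than the discipline: one record per tool, one exposure flag per obligation, one named owner. Whatever system holds it, this is the artifact a regulator will ask for first.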
What Safety-Net Providers Get Wrong
Community health organizations face a specific version of this challenge. FQHCs, critical access hospitals, tribal health programs, and safety-net providers often assume that regulatory attention will focus on large health systems first, and that smaller organizations will have time to follow.
This assumption is wrong in two ways.
First, CMS conditions of participation apply uniformly. A critical access hospital using an AI-powered sepsis alert has the same documentation obligations as a 500-bed academic medical center. Regulatory requirements do not scale down.
Second, community health organizations are disproportionately exposed to state-level regulation. An FQHC operating in Colorado answers to the Colorado AI Act on the state's schedule — not Congress's. A tribal health program using AI tools for population health management may face both state and federal regulatory scrutiny across multiple jurisdictions.
The organizations most likely to be caught off guard are the ones that assumed they were too small to matter.
The Cost of Waiting
Every month without governance infrastructure is a month of unmanaged risk accumulation. AI tools are being adopted, workflows are being built around them, staff are developing dependencies on them, and none of this is being documented in ways that will satisfy the regulators who are — right now — drafting the requirements.
When those requirements arrive, and they will, the question regulators ask will not be "do you have AI?" It will be "show us your governance." The organizations that can produce an inventory, risk assessments, validation records, oversight protocols, and accountability structures will demonstrate compliance. The organizations that cannot will demonstrate negligence.
The absence of a single AI law is not a grace period. It is the period in which the organizations that act will separate from the organizations that react.
Your organization's AI governance readiness is measurable. LumenHealth's assessment identifies where you stand, what gaps exist, and what to prioritize — before regulators make that determination for you. Take the assessment.
This article is provided for informational purposes and does not constitute legal, compliance, or regulatory advice. Organizations should consult qualified legal counsel regarding their specific regulatory obligations. LumenHealth provides AI governance assessments for community health organizations and is not affiliated with any regulatory agency.
Assess your organization's AI governance readiness
37 questions across five domains. Free facilitated debrief with your leadership team.