Healthcare AI programs are getting squeezed from both sides: voluntary governance guidance is becoming more specific, while state law is moving toward mandatory transparency. For data and privacy teams, the near-term work is documentation, vendor controls, and provable auditability—not just better models.
Joint Commission + CHAI publish responsible AI guidance; Texas HB 149 requires patient disclosure starting Jan. 1, 2026
In September 2025, the Joint Commission and the Coalition for Health AI (CHAI) released guidance titled The Responsible Use of AI in Healthcare, laying out operational best practices for healthcare organizations deploying AI. The guidance emphasizes HIPAA-aligned controls and governance fundamentals—encryption, access controls, incident response readiness, and updating Business Associate Agreements (BAAs) with vendors supporting AI workflows.
Separately, Texas House Bill 149 (the Texas Responsible Artificial Intelligence Governance Act, TRAIGA) adds a concrete compliance deadline: beginning January 1, 2026, healthcare providers in Texas must clearly disclose to patients when AI is used in their care. The requirement applies even when AI use may seem “obvious,” putting the burden on providers to standardize disclosure language and ensure it’s consistently triggered across clinical and administrative touchpoints where AI is involved.
- Compliance shifts from intent to evidence. Guidance plus a disclosure mandate means teams need artifacts: where AI is used, which data it touches, and which controls were applied. Expect audit logs, model/version tracking, and decision provenance to become table stakes for healthcare deployments.
- Vendor risk becomes a first-order engineering problem. If AI vendors or platform providers handle PHI, BAAs and security addenda must reflect AI-specific data flows (training, fine-tuning, telemetry, retention). Data leads should map these flows now to avoid last-minute contract and architecture churn.
- Disclosure requirements force product and workflow changes. “AI used in care” is broader than a single model in a single app. Teams will need a practical definition, an internal registry of AI-enabled systems, and a mechanism to trigger patient-facing disclosure reliably.
- Synthetic data becomes more than a research convenience. For privacy engineers, synthetic patient records can reduce exposure of real PHI in development and testing—especially for sensitive conditions—while still supporting model iteration. But it won’t eliminate governance needs around real-world deployment, monitoring, and disclosure.
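The registry, audit-trail, and disclosure-trigger ideas in the bullets above can be sketched in a few dozen lines. This is a minimal illustration, not a reference implementation: every class, field, and trigger rule here is a hypothetical assumption, not anything specified by HB 149 or the Joint Commission/CHAI guidance.

```python
"""Sketch: an internal registry of AI-enabled systems, with a per-use
audit record and a simple patient-disclosure trigger. All names and the
trigger rule are illustrative assumptions."""

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class AISystem:
    """One entry in the registry of AI-enabled systems."""
    system_id: str
    name: str
    model_version: str     # version tracking for audit provenance
    touches_phi: bool      # does it process protected health information?
    patient_facing: bool   # does its output reach a patient's care?


@dataclass
class AIRegistry:
    systems: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)

    def register(self, system: AISystem) -> None:
        self.systems[system.system_id] = system

    def record_use(self, system_id: str, encounter_id: str) -> bool:
        """Log one use of an AI system during a patient encounter and
        return whether a patient-facing disclosure should be shown."""
        system = self.systems[system_id]   # unregistered systems fail loudly
        disclose = system.patient_facing   # simplistic illustrative rule
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "system_id": system_id,
            "model_version": system.model_version,
            "encounter_id": encounter_id,
            "disclosure_triggered": disclose,
        })
        return disclose
```

In practice the trigger rule would encode the organization's working definition of "AI used in care," and the audit record would feed whatever evidence store compliance reviews draw on; the point of the sketch is only that registration, provenance, and disclosure can share one data path.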
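On the synthetic-data point, even a stdlib-only generator shows the basic payoff: development and test fixtures that contain no real PHI and reproduce deterministically. The schema and value ranges below are made-up assumptions for illustration, not a clinical data model.

```python
"""Sketch: fully synthetic patient records for development and testing,
so real PHI never leaves production. Field names and value ranges are
illustrative assumptions only."""

import random
import uuid

# Hypothetical condition list; a real generator would model realistic
# distributions, not uniform picks.
CONDITIONS = ["hypertension", "type 2 diabetes", "asthma", "none recorded"]


def synthetic_record(rng: random.Random) -> dict:
    """Return one record in which no value derives from a real patient."""
    return {
        "patient_id": str(uuid.UUID(int=rng.getrandbits(128))),  # synthetic ID
        "age": rng.randint(18, 90),
        "condition": rng.choice(CONDITIONS),
        "systolic_bp": rng.randint(95, 180),
    }


def synthetic_cohort(n: int, seed: int = 0) -> list:
    """Seeded generation keeps fixtures reproducible across test runs."""
    rng = random.Random(seed)
    return [synthetic_record(rng) for _ in range(n)]
```

As the bullet notes, this only reduces exposure during development; deployed models still see real data, so monitoring, disclosure, and the rest of the governance stack are unaffected.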
