NEW AI GUIDANCE FOR HOSPITALS AND HEALTH SYSTEMS FROM
THE JOINT COMMISSION AND THE COALITION FOR HEALTH AI
On September 17, 2025, The Joint Commission ("JC") and the Coalition for Health AI ("CHAI") jointly released Guidance on the Responsible Use of AI in Healthcare (the "Guidance"). JC is the oldest and largest
standards-setting and accrediting body in health care in the United
States. CHAI is a coalition of nearly 3,000 organizations,
including health systems, patient advocacy groups, and a wide range
of industry leaders and start-ups across the health care and
technology ecosystems. CHAI's stated
mission is to be the trusted source of guidelines for
responsible use of artificial intelligence ("AI") in
health that serves all, and it aims to ensure high-quality care,
foster trust among users, and meet growing health care needs. The
Guidance is instructive for hospitals and health care systems
considering implementation of AI in various settings, including
operations, finance, and administration. As hospitals pursue
AI-driven efficiencies and cost savings, the Guidance provides a
critical framework for ensuring that the drive for operational
improvements does not compromise patient safety or data
security.
The Guidance makes recommendations to health care organizations regarding the use and deployment of AI. JC developed the Guidance
based on surveys of its accredited hospitals and health systems to
address their specific needs in implementing AI responsibly. The
Guidance's focus on recommendations for health care provider
organizations sets it apart from many other AI standards and
requirements, which have primarily focused on AI technology
developers and health insurers. The Guidance addresses the full
spectrum of organizational responsibilities, from establishing
governance structures and data security protocols to monitoring AI
performance, assessing bias, and training staff on appropriate
use.
JC and CHAI state that they developed the Guidance based on industry standards for the development, deployment, and use of AI and on communications with industry stakeholders. The Guidance
addresses the entire life cycle of an AI tool, from initial
procurement and validation to ongoing monitoring and staff
training. This holistic approach establishes a comprehensive
framework for managing the technology's risks and benefits. The
Guidance is structured around seven foundational elements that JC and CHAI say they designed to create a comprehensive framework for the responsible use of AI in health care.
The Guidance indicates that, in the coming months, JC and CHAI intend to release a series of Responsible Use of AI Playbooks to build on and operationalize the Guidance, and that JC intends to develop a voluntary Responsible Use of AI certification.
Comparison to Other Health AI Requirements and Guidance
The Guidance joins myriad other requirements and
guidance regarding the use of AI in health care. For example,
the Food and Drug Administration ("FDA") regulates AI in
medical devices (including certain software products it considers
medical devices). Health information technology ("health IT") certification under the Health IT Certification Program of the Office of the National Coordinator for Health IT within the U.S. Department of Health and Human Services requires disclosure of certain source attributes specified in federal health IT certification criteria and requires developers to adopt risk management practices to assess, mitigate, and oversee risks presented by AI tools. The Centers for Medicare & Medicaid
Services ("CMS") has issued guidance on the use of AI
tools by Medicare Advantage plans. And some states have passed laws
regulating the use of AI in health care. For example, Cal. Health & Safety Code § 1339.75 (known as the Artificial Intelligence in Health Care Services Bill) requires health care facilities, clinics, and physician offices to disclose when generative AI is used to communicate clinical information to patients. Texas's
Responsible Artificial Intelligence Governance Act, effective
January 2026, is a comprehensive AI regulation that applies across
multiple sectors, including health care. For health care providers
specifically, the law mandates disclosure to patients when AI is
used in diagnosis or treatment and requires licensed practitioners
to review all AI-generated records and retain ultimate
responsibility for clinical decisions. Utah requires
regulated occupations, including licensed health care
professionals, to prominently disclose when AI is used in
"high-risk" interactions involving health information or
medical advice. These state laws emphasize transparency, human
oversight, and accountability in clinical AI deployment.
The Guidance provides a different perspective than many of the
pre-existing sources by recommending a comprehensive framework for
use of AI tools in health care, with a focus on hospitals and
health systems. The involvement of JC, which accredits the majority of U.S. hospitals and sets widely recognized standards for health care quality and safety, lends the Guidance additional weight.