By Tina Joros, JD (Veradigm), Chair, and Stephen Speicher, MD, MS (Flatiron Health), Vice Chair, EHR Association AI Task Force
The regulatory landscape for AI is on the cusp of dramatic change. As we await the release of the AI Action Plan called for in the January 2025 Removing Barriers to American Leadership in Artificial Intelligence Executive Order, and as federal agencies review their existing AI policies, we have seen an influx of proposed state laws addressing concerns about the use of AI in healthcare technology. This surge in state-level activity risks regulatory fragmentation, making it more complex for EHR vendors and health systems to build and support AI tools designed to help advance patient care.
This increased focus on AI is highly relevant to EHR Association members, who continue to deploy software capabilities that comply with the EHR certification program's transparency requirements for Decision Support Interventions (DSIs) involving AI and machine learning (ML). Most of our member companies are also developing, piloting, or have already deployed generative AI solutions that can help address many of the challenges confronting the healthcare industry.
AI also bears on our long-term commitment to advancing health IT to improve healthcare delivery and patient outcomes. That commitment led to the 2024 launch of our AI Task Force as a channel for member companies to share input on regulatory and other activity meant to guide the responsible, safe, and equitable use of AI in healthcare.
Core Legislative Issues
Since its inception, the AI Task Force has developed a set of key principles we believe are most critical to the safe and responsible development of AI in healthcare. These principles, which build on priorities we originally shared in a 2023 letter to the Senate HELP Committee on the government's role in regulating the AI industry, reflect our position on the current issues shaping AI governance.
Specifically, we strongly prefer that AI regulation take place at the federal level to ease the burden of compliance and create uniform reporting and transparency standards. We shared that sentiment with the HELP Committee in January in a second AI-focused letter that also spelled out a number of core issues we would like to see addressed in legislation at all levels.
One priority is for regulations to focus on technologies with direct implications for high-risk clinical workflows, leveraging existing frameworks that stratify risk by probability of occurrence, severity, and positive impact or benefit. Focusing on genuinely high-risk uses would ease the reporting burden otherwise imposed on every technology incorporated into an EHR that may be used at the point of care.
We also strongly recommend that, for true high-risk use cases, developers incorporate “human-in-the-loop” or “human override” safeguards during the development and implementation of these tools, along with other reasonable transparency requirements. For a human override to add value and mitigate risk, the human in the loop must be appropriately trained to intervene.
Other core issues we’d like to see addressed legislatively include:
- Liability: We encourage applying existing medical malpractice frameworks to cases that may involve AI technologies. EHR developers should not be responsible for harm caused by the inappropriate use of an AI tool for a particular patient when they have responsibly developed the tool and provided the transparency needed for users to fully understand its functionality. Clinicians are best positioned to evaluate the appropriateness of an AI-enabled tool for a specific patient and to obtain informed consent when required; therefore, they and other end users should bear ultimate liability for an AI tool’s use.
- Supporting All Healthcare Organizations in AI Deployment: To avoid widening the nation’s already significant digital divide, we encourage regulations that both large health systems and small independent clinics can reasonably meet, ensuring equitable access to AI tools regardless of organization size. Regulations and guidance for the adoption, use, and post-implementation monitoring of AI tools must be reasonable and considerate of diverse care settings and capabilities, maximizing the opportunity for widespread adoption and effective use of AI technologies.
- Outcomes-Focused Regulations: Regulations should prioritize outcomes and risk mitigation rather than prescribe technical specifications for how AI tools must be built. The healthcare sector, with its emphasis on patient safety, requires distinct considerations compared with consumer technologies in other areas, and those differences should be reflected in regulations. Regulators should focus on addressing their primary concerns and the most significant risks to patients. Regulations should allow companies to incorporate appropriate safeguards into their existing software development lifecycle rather than mandate specific steps and stages for development, training, and implementation.
The Importance of Ongoing Monitoring
Finally, we agree that ongoing post-deployment monitoring of AI models is essential to ensure quality and mitigate the risk of model drift, particularly for generative AI. The constantly evolving health information and data landscape heightens the risk of models becoming outdated over time. As such, we support rules that require transparency into EHR developers’ plans for keeping information up to date after the initial launch, as well as visibility into when a model was last updated.
This approach will help maintain the overall quality of AI models and provide end users with the necessary information to determine whether a tool operates appropriately for a particular patient. Transparency in update practices is key to maintaining trust and reliability in AI tools, ultimately improving patient care.
