Medical Care
Healthcare AI: Ensuring Successes & Avoiding Accidents
2024-12-03
Given the rapid spread of AI in U.S. healthcare, it's no surprise that unintended effects are emerging. While some may be beneficial, others pose real risks to patients. To navigate this landscape, healthcare organizations and AI developers must collaborate, as two researchers emphasize in a recent JAMA opinion piece.

Strengthening Healthcare with AI Safety and Transparency

Conducting Real-World Clinical Evaluations

Before implementing AI-enabled systems into routine care, it's crucial to conduct or wait for real-world clinical evaluations published in high-quality medical journals. As new systems mature, healthcare organizations should conduct independent testing with local data to minimize patient safety risks. Iterative assessments should accompany this risk-based testing to ensure the systems benefit patients and clinicians while being financially sustainable and meeting ethical principles.
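A local, risk-based evaluation like the one described above can be as simple as comparing the tool's outputs against clinician-adjudicated labels on local cases. The sketch below is illustrative only: the data, metric names, and acceptance threshold are assumptions, not a published evaluation protocol.

```python
# Hedged sketch: one way a hospital team might check a diagnostic AI tool
# against locally labeled cases before go-live. All values are illustrative.

def evaluate_locally(predictions, ground_truth):
    """Compare AI outputs with local ground-truth labels (1 = disease present)."""
    tp = sum(1 for p, g in zip(predictions, ground_truth) if p == 1 and g == 1)
    tn = sum(1 for p, g in zip(predictions, ground_truth) if p == 0 and g == 0)
    fp = sum(1 for p, g in zip(predictions, ground_truth) if p == 1 and g == 0)
    fn = sum(1 for p, g in zip(predictions, ground_truth) if p == 0 and g == 1)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return {"sensitivity": sensitivity, "specificity": specificity}

# Illustrative local test set: AI outputs vs. clinician-adjudicated labels.
metrics = evaluate_locally(
    predictions=[1, 1, 0, 0, 1, 0, 1, 0],
    ground_truth=[1, 0, 0, 0, 1, 1, 1, 0],
)

# A governance committee might set minimum performance bars before approval;
# the 0.75 threshold here is a placeholder, not a clinical recommendation.
meets_threshold = metrics["sensitivity"] >= 0.75 and metrics["specificity"] >= 0.75
```

In practice the test set would be drawn from the organization's own patient population, which is exactly what makes this step different from relying on the vendor's published results.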

For example, imagine a hospital considering a new AI-powered diagnostic tool. By waiting for rigorous evaluations, it can confirm the tool's accuracy and reliability before relying on it for patient care. This not only protects patients but also builds trust in the use of AI in healthcare.

Moreover, different medical specialties may have specific requirements for AI systems. By conducting local evaluations, organizations can tailor the use of AI to meet the unique needs of their patients and clinicians.

Involving AI Experts in Governance

Inviting AI experts into new or existing AI governance and safety committees is essential. These experts can include data scientists, informaticists, operational AI personnel, human-factors experts, and clinicians working with AI. Regular meetings of these committees allow for the review of new AI applications, consideration of safety and effectiveness evidence before implementation, and the creation of processes to monitor AI application performance.

For instance, a data scientist can provide insights into the data used by the AI system, ensuring its quality and relevance. An informaticist can help integrate the AI system into the healthcare workflow seamlessly. Human-factors experts can focus on how clinicians interact with the AI, minimizing potential errors.

By having a diverse group of experts involved, healthcare organizations can make more informed decisions about AI implementation and ensure its safe and effective use.

Maintaining an Inventory of AI Systems

Healthcare organizations should maintain and regularly review a transaction log of AI system use, similar to the audit log of the EHR. This log should include details such as the AI version in use, date/time of use, patient ID, responsible clinical user ID, input data, and AI recommendation or output. The AI committee should oversee ongoing testing to ensure the safe performance and use of these programs.
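The log fields listed above could be captured in a minimal record schema. This is a hedged sketch: the field names, system identifiers, and in-memory list are illustrative assumptions; a real deployment would write to a durable, access-controlled store alongside the EHR audit log.

```python
# Hedged sketch of an AI-use transaction log modeled on an EHR audit log.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIUseRecord:
    ai_system: str      # which AI application was invoked
    ai_version: str     # exact model/software version in use
    timestamp: str      # date/time of use (ISO 8601, UTC)
    patient_id: str     # patient the output applied to
    clinician_id: str   # responsible clinical user
    input_summary: str  # reference to the input data supplied
    output: str         # the AI recommendation or output

audit_log: list[AIUseRecord] = []

def log_ai_use(**fields) -> AIUseRecord:
    """Append one AI-use record, stamping the current UTC time."""
    record = AIUseRecord(timestamp=datetime.now(timezone.utc).isoformat(), **fields)
    audit_log.append(record)
    return record

# Example entry a radiology workflow might write after each AI-assisted read.
log_ai_use(
    ai_system="chest-xray-triage",
    ai_version="2.3.1",
    patient_id="MRN-0001",
    clinician_id="DR-42",
    input_summary="study-accession-7781",
    output="possible pneumothorax flagged for priority review",
)
```

Because each record names the AI version, the committee's ongoing testing can tie any performance drift to a specific release.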

Take a hospital's radiology department as an example. By maintaining an inventory of its AI-enabled radiology systems, the department can easily track which systems are in use, by whom, and for which patients. This allows quick identification of issues and enables proactive monitoring of system performance.

Regular reviews of the inventory help organizations stay updated on the status and usage of their AI systems, ensuring they continue to meet the needs of patients and clinicians.

Creating Training Programs for Clinicians

Initial training and subsequent clinician engagement with AI systems should include a formal consent-style process with signatures. This ensures that clinicians understand the risks and benefits before accessing the AI tools. Steps should also be taken to ensure patients understand when and where AI systems are used and the role of clinicians in reviewing the output.

For example, a training program for cardiologists using an AI-based heart disease diagnosis system might include detailed explanations of how the AI works, its limitations, and the importance of clinician review. Clinicians would sign a consent form indicating their understanding and agreement to use the system.

By providing clear instructions and engaging clinicians in the process, healthcare organizations can enhance the safe and effective use of AI in clinical practice.

Establishing a Reporting Process for Safety Issues

Developing a clear process for patients and clinicians to report AI-related safety issues is crucial. A rigorous, multidisciplinary process should be implemented to analyze these issues and mitigate risks. Healthcare organizations should also participate in national postmarketing surveillance systems to aggregate and analyze safety data.
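One way to make such a reporting channel concrete is a structured intake record routed to a multidisciplinary review queue. This sketch is an assumption for illustration: the severity levels, role names, and triage rule are invented here, not taken from any specific surveillance program.

```python
# Hedged sketch: structured safety-report intake with severity-based triage.
from dataclasses import dataclass

SEVERITIES = ("near-miss", "harm-possible", "harm-occurred")

@dataclass
class SafetyReport:
    ai_system: str
    reporter_role: str  # e.g. "patient" or "clinician"
    severity: str
    description: str

review_queue: list[SafetyReport] = []

def file_report(report: SafetyReport) -> None:
    """Validate and enqueue a report for the multidisciplinary team."""
    if report.severity not in SEVERITIES:
        raise ValueError(f"unknown severity: {report.severity}")
    # Reports of actual harm jump the queue so reviewers see them first.
    if report.severity == "harm-occurred":
        review_queue.insert(0, report)
    else:
        review_queue.append(report)

file_report(SafetyReport("surgical-guidance", "clinician", "near-miss",
                         "overlay misaligned with patient anatomy"))
file_report(SafetyReport("surgical-guidance", "patient", "harm-occurred",
                         "unexpected outcome after AI-assisted procedure"))
```

Structured fields like these also make it straightforward to forward de-identified reports to national postmarketing surveillance systems for aggregate analysis.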

Imagine a patient who experiences an unexpected outcome after a procedure involving an AI-enabled surgical system. With a clear reporting process, both the patient and the clinician can report the issue quickly. A multidisciplinary team can then investigate and take appropriate action to prevent similar incidents in the future.

Participating in national surveillance systems allows for a broader analysis of safety data and the sharing of best practices among different healthcare organizations.

Providing Disabling Authority for AI Systems

Similar to preparing for EHR downtime, healthcare organizations must have policies and procedures in place to manage clinical and administrative processes when the AI is not available. Clear written instructions should enable authorized personnel to disable, stop, or turn off AI-enabled systems 24/7 in case of an urgent malfunction.
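The "disabling authority" described above amounts to an always-available kill switch with a defined manual fallback. The sketch below is a minimal illustration, assuming a hypothetical sepsis-alert model, an invented list of authorized roles, and a simple in-process flag; a real system would enforce authorization and persistence through proper infrastructure.

```python
# Hedged sketch: a 24/7 kill switch for an AI system with a manual fallback.
# Role names, the model name, and the flag store are illustrative assumptions.

AUTHORIZED_DISABLERS = {"cmio-on-call", "it-ops-duty-officer"}
ai_enabled = {"sepsis-alert-model": True}

def disable_ai(system: str, user: str, reason: str) -> None:
    """Let an authorized user turn off an AI system during an urgent malfunction."""
    if user not in AUTHORIZED_DISABLERS:
        raise PermissionError(f"{user} is not authorized to disable {system}")
    ai_enabled[system] = False
    print(f"{system} disabled by {user}: {reason}")

def run_sepsis_screening(patient_id: str) -> str:
    """Fall back to the written manual protocol whenever the AI is disabled."""
    if not ai_enabled.get("sepsis-alert-model", False):
        return f"manual sepsis screening protocol for {patient_id}"
    return f"AI sepsis score for {patient_id}"

disable_ai("sepsis-alert-model", "it-ops-duty-officer", "urgent malfunction")
result = run_sepsis_screening("MRN-0001")
```

The key design point is that downstream workflows check the flag themselves, so clinical operations degrade gracefully to the manual process rather than failing outright.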

For instance, during a power outage or a system failure, having the ability to quickly disable the AI system ensures the safety of patients and allows for a smooth transition to manual processes. Regular assessments of how AI systems affect patient outcomes, clinician workflows, and system-wide quality are also essential.

If AI models fail to meet pre-implementation goals, revisions should be considered. If revisions are not feasible, the entire system may need to be decommissioned to protect patient safety and maintain the integrity of the healthcare system.
