Is Your Organization Ready for AI?

Best Practices Can Help You Avoid Pitfalls and Costly Liability 

Artificial intelligence (AI) isn’t new to healthcare. But in recent years, AI technologies have advanced rapidly, fueled partly by the growing use of smartphones and wearable health-monitoring devices that can capture large amounts of data.

AI offers tantalizing benefits but also substantial risks for medical practices, hospitals, and payers. If your organization is considering adopting AI technology, it’s essential to lay the groundwork for success and reduce the risk of liability. Francisco Rodríguez-Campos, principal project officer for device evaluation at ECRI, recently shared insights on this topic with leaders at Physician Insurance/MedChoice.

Why AI? 

AI can analyze large amounts of data stored by healthcare organizations, from images to clinical trial data and medical claims. The technology can surface patterns and insights that would be impractical for humans to find at that scale. Developers can integrate AI into a device or create a standalone application. Some healthcare tech companies even provide access to “marketplaces” of AI apps that run on their devices, offering an à la carte experience depending on your organization’s needs.

A few practical applications of AI:

  • Automating many aspects of patient appointment scheduling 
  • Flagging high-priority medical imaging studies for expedited diagnosis and treatment
  • Using machine-learning models to predict populations at risk for particular diseases or hospital readmission (see the sketch after this list)
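
To make that last item concrete, here is a minimal sketch of how a readmission-risk model might be trained and used to rank patients for outreach. It is illustrative only: the synthetic features (age, prior admissions, length of stay, chronic-condition count), the simulated outcomes, and the choice of scikit-learn’s LogisticRegression are all assumptions for demonstration, not a description of any specific vendor’s product or a clinically validated approach.

```python
# Illustrative sketch only: a hypothetical readmission-risk model.
# All features, data, and parameters below are assumptions for
# demonstration, not a clinically validated method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic patient features: age, prior admissions (12 mo),
# length of stay (days), number of chronic conditions.
n = 1000
X = np.column_stack([
    rng.normal(65, 12, n),    # age
    rng.poisson(1.0, n),      # prior admissions
    rng.exponential(4.0, n),  # length of stay
    rng.poisson(2.0, n),      # chronic conditions
])

# Synthetic outcome: readmission probability rises with each feature.
logits = -6 + 0.03 * X[:, 0] + 0.8 * X[:, 1] + 0.1 * X[:, 2] + 0.4 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Rank the test population by predicted risk so a care team
# could, in principle, prioritize follow-up outreach.
risk = model.predict_proba(X_test)[:, 1]
print(f"AUC: {roc_auc_score(y_test, risk):.2f}")
print("Five highest-risk patients:", np.argsort(risk)[::-1][:5])
```

A simple, interpretable model of this kind also makes it easier for an oversight committee to audit which factors drive a risk score, a practical counterpart to the governance advice that follows.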

Like every technology, AI comes with risks. These risks include privacy concerns, cyberattack-related security issues, and unintended harm associated with using flawed, incomplete, or biased data.

Establishing an Oversight Committee

Before you consider a specific AI technology for your institution, Rodríguez-Campos recommends establishing an AI committee that will provide you with ongoing oversight regarding:

  • AI’s impact on your medical practice or facility
  • How your organization measures the success of its AI integrations
  • The quality of your organization’s data 
  • The risk of potential harm, including technical, social, ethical, and legal impacts
  • Policies that monitor, audit, and assess the AI applications you use

Vetting Your Vendors

Once you’ve started “shopping” for a specific AI technology, Rodríguez-Campos advises performing vendor assessments. “You need vendors who are here to stay, who have the resources to support and audit the applications,” he explains. “Perform an in-depth risk assessment of the application you choose, and make sure all leaders agree on its risk/benefit profile.”

Educating and Training Your Teams

Buy-in from people who work throughout your organization—including executive leaders, physicians, care-team members, and administrative staff—is essential. Rodríguez-Campos urges leaders to encourage stakeholder alignment and participation by doing the following:

  • Assess your stakeholders’ knowledge and attitudes about AI early in the adoption process
  • Use the insights gathered through that assessment to guide the technology roll-out
  • Tailor communications and training accordingly, and invite conversation to address any concerns or suggestions

Eyes Wide Open

As healthcare organizations continue to explore AI integrations, government regulatory agencies are playing catch-up. Existing regulations do not sufficiently cover self-learning technologies, and they focus on safety rather than broader ethical risks such as bias and harm resulting from erroneous decision-making. Organizations that lay the groundwork for success and work to mitigate risk are better positioned to avoid costly medical liability pitfalls and maximize their return on investment.