Healthcare applications for emerging technologies like artificial intelligence, predictive analytics, and wearable monitors represent an exciting new era filled with promise. These technologies have been shown to improve quality and safety for patients and increase capacity for providers, yielding better outcomes at a reduced cost. But their impact on medical liability is less certain.
In a civil tort involving professional liability, a physician’s duty and standard of care would be measured against what a reasonable and prudent physician would do under the same circumstances. When AI is involved, measuring against an established best practice may be more challenging. “We don’t have a lot of history using these technologies for liability, which makes it difficult to establish a best practice,” says risk-management expert Denise Shope, RN. Shope is a team leader with RCM&D Insurance, an independent insurance advisory firm based in Baltimore, Maryland.
Since artificial intelligence is new and not yet well understood by the general public, Shope says, it’s too early to know how technologies like AI will impact medical liability. “The intersection of AI with professional liability lawsuits really hasn’t been tested in our courts,” she says. “We just haven’t seen enough litigation to understand how AI will impact the medical legal system.”
That doesn’t mean that organizations can’t manage the risk associated with emerging technologies. As medicine advances into virtually uncharted territory, Shope says that organizations and practitioners need to use a new framework for understanding and managing risk.
How should healthcare organizations manage the risk that comes with adopting new technologies? Carefully, thoughtfully, and proactively, according to Shope. “I think about professional liability and medical malpractice in terms of components or domains of risk,” she notes. “I suggest taking an enterprise-wide risk-management approach. Consider all the possible risks by domains or components, then determine whether risk controls are in place for each identified risk.”
Using an enterprise-wide risk management framework, organizations should consider each of the following “domains of risk” when adopting AI or any emerging technology.
DOMAIN 1: CLINICAL AND PATIENT SAFETY CONSIDERATIONS
Ultimately, advances in medical technology are designed to improve patient care. So it makes sense to begin a risk-management evaluation by considering patient safety. Considerations in this domain include patient consent and understanding, patient selection, scientific evidence and best practices, and the technical competency of providers and support staff.
Questions to answer may include:
- What systems or processes need to be in place to ensure that patients understand and consent to remote monitoring?
- How will we safely select the patients best suited to the use of this technology?
- What does science say about the best practices involving the use of this technology?
Using new technology in a clinical setting means spending extra time educating patients and obtaining consent, says Shope. “Anytime you’re using any type of remote monitoring, the patient needs to understand what this means and what their role is, and the patient has to consent,” she says.
Finally, organizations need to consider how providers will learn the new technology, and how technical competency will be assessed both immediately and over the long term.
“Another consideration is the technical competency of providers, nurses, and support staff,” says Shope. “What do they need to be competent in, how do we teach them, and who is responsible for teaching them?”
DOMAIN 2: LEGAL AND REGULATORY CONSIDERATIONS
When evaluating the legal risks involved in adopting AI or other emerging technologies, considerations include whether the application in question is FDA-approved, who owns the data, how risks will be shared contractually, and whether those risks are insurable.
Although the medical-liability implications for AI are still emerging, physicians can rest assured that some things remain unchanged, Shope says. “The plaintiff’s attorney may go after the large healthcare organization or mega-group practice for vicarious liability or other actions, knowing that additional buckets of insurance are available,” she says. “But the burden of proof is still with the plaintiff.”
Providers should also know that the use of AI doesn’t change their obligation to view patients as individuals, she notes. Though AI offers valuable decision-making support to providers, they are the decision-makers in collaboration with patients. “AI uses big data to guide decisions, but providers still need to look at patients as individuals,” says Shope. “Anytime a physician uses AI in making a differential diagnosis, it doesn’t absolve the physician from liability.”
DOMAIN 3: HAZARD CONSIDERATIONS
Before adopting any new technology, organizations need to assess how new systems will perform in extreme circumstances, from weather-related disasters to power outages to global pandemics. Planning for a system failure allows for a more comprehensive picture of the risks involved in adopting new technologies and enables organizations to begin developing contingency plans alongside the rollout of those technologies.
Considerations in this domain include environmental support, maintenance agreements, capacity management, and contingency plans for business interruptions. Organizations need to go through scenario-based risk assessment to anticipate or predict uncertainty, says Shope. “Basically, this means anticipating scenarios like pandemics,” she says. “What happens if we have a surge in demand? Technology is great when it works, but organizations need to have a multidisciplinary team to help predict where the failures will be.”
DOMAIN 4: TECHNICAL CONSIDERATIONS
Before integrating new technology into clinical settings, organizations need to think outside the exam room to include non-clinical staff and technological support. Who will support and maintain the technology, repair it when it fails, ensure interoperability between systems, and safeguard the privacy and security of patient data?
Another consideration is whether adopting new technology leaves the organization vulnerable to cyber attacks. Organizations need to have a plan in place for maintaining cyber hygiene to minimize interruptions to patient care in the event of a data breach or cyber crime, says Shope. (See “Cyber Attacks on the Rise,” pg. 20.)
DOMAIN 5: STRATEGIC CONSIDERATIONS
Emerging technology like AI is exciting, to be sure. But organizations should be wary of adopting it without first considering how it aligns with their business strategy, says Shope. Managing the risk associated with adopting new technology requires careful consideration of the organization’s strategic priorities. Before taking on new tech, Shope says, organizations should understand how the new technology aligns with their risk appetite. Smaller group practices may have a lower tolerance for risk than larger groups or hospitals with deeper pockets, for example. And in some cases, the potential ROI of a new technology may warrant greater risk.
Evaluating how a particular technology fits into an organization’s approach to risk helps manage vulnerabilities, says Shope. “There’s a risk inherent to adopting a technology that doesn’t have a strong business case, because you’re using limited resources for something that doesn’t have a strong ROI,” she notes.
DOMAIN 6: FINANCIAL CONSIDERATIONS
Finally, a risk assessment should include a close look at the organization’s financial strengths, weaknesses, and goals. Does the organization have the financial resources to invest not only in the new technology, but in the training, technology support, and contingency planning it will require?
A proactive risk assessment helps healthcare organizations of any size determine whether adopting a new technology is a safe, smart financial investment. “Providers in small group practices should consider whether they have the resources to help with a proactive risk assessment,” says Shope. “If they don’t, providers can check with their insurance broker, who can do that sort of risk assessment for them.”
Another consideration in this domain that Shope points out is insurance coverage: providers should work with insurance professionals who understand the risk involved in adopting new technology. “Physicians and their insurance brokers must work together in managing the new risks involving the use of AI,” she says. “Insurance professionals need to understand the risk. No one wants to be left uninsured because of insurance-contract language or exclusions in the insurance policy, so coverage is critical.”
Denise Shope, RN, MHSA, ARM, CPHRM, can be reached at DShope@rcmd.com. Shope is a risk management consultant with RCM&D who began her 25-year career in healthcare as a registered staff nurse at Geisinger Medical Center in Pennsylvania. She is past president of the American Society for Health Care Risk Management (ASHRM), and she received ASHRM’s Distinguished Fellow Award in 2018.