AI in Healthcare Demands Vigilant Security Measures

The increased integration of AI in healthcare offers the potential for dramatic improvements in outcomes. But it’s not without risks.

Poornima Apte, Contributor

July 25, 2024

4 Min Read
chart showing key AI security vulnerabilities

The tiniest pin-prick tumors might escape even the most trained human eye, but they might not evade the keen observation of an AI system. AI-based healthcare tools use machine learning models trained on large volumes of data, allowing them to recognize tumor patterns and detect them in new cases.

AI use cases in healthcare are exploding – from detecting cancers and optimizing hospital bed counts to precision medicine applications. However, as the adoption of AI applications grows, so do the associated security vulnerabilities.

Security Vulnerabilities Compounded by AI

Healthcare institutions are already stretched thin by new methods of dispensing care, like remote monitoring (from sensor-based devices) and telehealth. Outdated technology infrastructure and inadequate security policies are exacerbating the strain, said Erik Barnett, North America advisory healthcare and life sciences lead at Avanade, a professional services company.

“Additionally, the large number of smart medical devices connected to online systems has created new points of entry for attackers,” Barnett explained.

All that data sloshing around increases the risk of breaches. Patient health information (PHI) records are extremely sensitive, and the integration of AI introduces vulnerabilities and attack methods, such as injection attacks on chatbots, Barnett said.

Related:5 Best Practices for Achieving Healthcare Cloud Compliance

sidebar graphic discussing the ethical use of AI systems in healthcare

When security breaches happen in healthcare, “healthcare systems suffer a disruption in patient care continuity, through interruptions in healthcare services, billing processes, and access to electronic health records, leading to potential financial and operational setbacks for organizations,” Barnett said. Compromised PHI can violate compliance with privacy laws like the Health Insurance Portability and Accountability Act (HIPAA).

Strategies for Enhancing AI Security

Given the vulnerabilities AI might introduce, how should healthcare systems use the technology? Bjorn Andersson, senior director of global digital innovation marketing and strategy at Hitachi Vantara, suggested deploying AI in the stages – crawl, walk, run – to reflect increasing difficulty and risks. “Remember the credo, ‘First do no harm,’” he advised.

The initial patient interaction is the “crawl” stage, Andersson explained. “The ‘walk’ part is when you operationalize the use of AI in your healthcare operations. One key challenge at this stage is [to] make sure you use the right data and have some level of oversight based on domain knowledge and an escape mechanism when human expertise is required.”

Related:5 Key IT Certifications to Stand Out in the Healthcare Industry

Meanwhile, the “run” stage “involves increasing the level of automation, which also potentially increases the risks if you have [not] built the trusted foundation in the previous stages,” he said.

Securing AI starts with basic cybersecurity practices like investing in strong authentication and access controls, said Rachel Jiang, senior vice president of product and technology at healthcare company TailorCare. She recommended encrypting and segregating patient data and ensuring secure transmission.

“Healthcare companies should partner with their customers and outside experts to establish security and privacy standards and follow emerging regulatory guidance,” Jiang said. Furthermore, healthcare companies should regularly perform audits and educate their employees.

Securing the Future of Healthcare

Establishing a secure foundation is essential as AI in healthcare continues to grow at an impressive clip. According to research firm Grand View Research, the market for AI in the healthcare will grow at a compounded annual growth rate of 40.2%, reaching USD 173.55 billion by 2029.

AI will significantly impact the field in various ways. For instance, it will accelerate patient-clinical trial matching processes, enabling quicker access to innovative treatments, Barnett said. Additionally, AI will proactively identify disease patterns, preventing widespread shortages and improving healthcare resource management.

Related:How to Ensure Data Protection, IT Security, and Compliance in Life Sciences

Increased AI integration will come with heightened scrutiny and compliance requirements, however. Richard Watson-Bruhn, U.S. head of digital trust and cybersecurity at PA Consulting, predicts that AI will face more regulation. Europe’s AI Act is already in effect, and in the U.S., Colorado passed the “Consumer Protections for Artificial Intelligence” bill in May 2024. Starting February 2026, the bill will mandate assessments and disclosures for AI developers and deployers in high-risk use cases, including healthcare services.

Healthcare facilities need advanced security strategies to protect data in a more AI-integrated environment. Confidential computing, which protects data during storage, transit, and computation, will gradually become more important. Confidential computing relies on encrypting data and conducting operations in a trusted execution environment, or TEE, within the computer’s hardware.

Apart from strict data-handling protocols, the development of AI solutions will demand diverse patient representation to ensure fairness and avoid bias, Barnett noted. In addition, “developing patient control mechanisms in mobile solutions allows them to set health goals and choose their level of AI support, fostering trust and enhancing personalization in their health journey,” he said.

Immense opportunities await institutions that harness AI ethically and securely. “The widespread integration of AI will change how patients are treated, how we learn about medicine, and even how hospitals run,” Barnett predicted.

About the Author

Poornima Apte

Contributor

Poornima Apte is a trained engineer turned writer who specializes in the fields of robotics, AI, IoT, 5G, cybersecurity, and more. Winner of a reporting award from the South Asian Journalists’ Association, Poornima loves learning and writing about new technologies—and the people behind them. Her client list includes numerous B2B and B2C outlets, who commission features, profiles, white papers, case studies, infographics, video scripts, and industry reports. Poornima reviews literary fiction for industry publications, is a card-carrying member of the Cloud Appreciation Society, and is happy when she makes “Queen Bee” in the New York Times Spelling Bee.

https://www.linkedin.com/in/poornimaapte/

Sign up for the ITPro Today newsletter
Stay on top of the IT universe with commentary, news analysis, how-to's, and tips delivered to your inbox daily.

You May Also Like