Artificial intelligence is pushing cybersecurity into an unprecedented era, offering benefits and drawbacks alike as it aids both attackers and defenders.
Cybercriminals are using AI to launch more sophisticated and novel attacks at greater scale. Cybersecurity teams, in turn, use the same technology to protect their systems and data.
Dr. Brian Anderson is the chief digital health physician at MITRE, a federally funded nonprofit research organization. He will speak at the HIMSS 2023 Healthcare Cybersecurity Forum in a panel discussion entitled "Artificial Intelligence: Friend or Foe of Cybersecurity?" Other members of the panel include Eric Lederman of Kaiser Permanente, Benoit Desjardins of UPenn Medical Center, and Michel Ramim of Nova Southeastern University.
We interviewed Anderson to break down the implications of both offensive and defensive AI, and to examine the new risks introduced by ChatGPT and other types of generative AI.
Q: How exactly does the presence of artificial intelligence raise cybersecurity concerns?
A. There are several ways in which AI poses fundamental cybersecurity concerns. For example, nefarious AI tools can enable denial-of-service attacks, as well as brute-force attacks on a specific target.
AI tools can also be used in "model poisoning," an attack in which malicious code or corrupted data is injected into a machine learning model so that it produces incorrect results.
In addition, many of the free AI tools available – such as ChatGPT – can be tricked through prompt engineering techniques into writing malicious code. In healthcare in particular, there are concerns about safeguarding sensitive health data, such as protected health information.
Sharing PHI in prompts to these publicly available tools can create data privacy concerns. Many health systems are struggling with how to prevent this type of data sharing and leakage.
Q: How can AI benefit hospitals and health systems when it comes to protection from bad actors?
A. Artificial intelligence has been helping cybersecurity experts identify threats for years now. Several AI tools are currently used to identify threats and malware, as well as to detect malicious code embedded in programs and forms.
Using these tools — with a human cybersecurity expert always in the loop to ensure proper compliance and decision making — can help health systems stay one step ahead of bad actors. AI trained in adversarial tactics is a powerful new set of tools that can help protect health systems from attacks enhanced by malevolent models.
Generative models such as large language models (LLMs) can help protect health systems by identifying and predicting phishing attacks or flagging malicious bots.
Finally, mitigating insider threats – such as the leakage of protected health information or other sensitive data, for example through its use in ChatGPT – is another of the emerging risks to which health systems must develop responses.
Q: What are the cybersecurity risks presented by ChatGPT and other types of generative AI?
A. ChatGPT, future iterations of GPT-4 and other LLMs will become increasingly effective at writing new code that can be used for nefarious purposes. These generative models also pose privacy risks, as I mentioned earlier.
Social engineering is another concern. By producing detailed scripts or transcripts, or by reproducing a familiar voice, LLMs could potentially impersonate individuals in an attempt to exploit vulnerabilities.
I have one last thought. It is my sincere belief as a physician and media expert that, with the proper safeguards, the positive potential of AI in healthcare far outweighs the potential negative.
As with any new technology, there is a learning curve in identifying and understanding where vulnerabilities or risks lie. And in a consequential space like healthcare – where the well-being and safety of patients are at stake – it is critical that we move as quickly as possible to address these concerns.
I look forward to gathering in Boston with the HIMSS community, so committed to advancing healthcare technology innovation while protecting patient safety.
Anderson's session, "Artificial Intelligence: Friend or Foe of Cybersecurity?" is scheduled for 11 a.m. on Thursday, September 7, at the HIMSS 2023 Healthcare Cybersecurity Forum in Boston.
Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email: bsiwicki@himss.org
Healthcare IT News is a publication of HIMSS Media.