Like every other industry, the healthcare sector is learning how to coexist with the emergence of Generative AI. But unlike other industries, getting this right can be a matter of life and death.
Some organizations, unsure how to protect themselves from potential security risks, are taking the path of least resistance by trying to block AI products like ChatGPT and Bard.
But erecting makeshift roadblocks in the form of new firewalls or proxy rules is a losing battle. I understand the impulse, but new AI applications come online all the time, and many employees simply ignore warnings against using digital services or devices that aren't formally approved and supported by the IT department. Like it or not, experience teaches that users will find ways around IT, no matter how many messages they receive from management.
What’s the Problem?
Several healthcare and life sciences apps already use artificial intelligence, presenting operators with a constellation of new privacy and security challenges. For example, consider a busy clinician with only a couple of minutes between patient encounters. Using ChatGPT or another Generative AI tool, she might copy and paste her transcript of the visit into the system to get back a summarized, spell-corrected version that then gets pasted into the patient's official medical record. Easy peasy, right?
Not so fast.
Generative AI can impact your cybersecurity in several ways. Bad actors are using it to design better attacks and write more convincing phishing email copy. There are now fake Gen AI portals luring unsuspecting users, not to mention better-crafted malware and deepfakes.
We have little idea how long these services retain the data they receive. That's a troublesome prospect considering that ChatGPT has already suffered a data breach (an open-source library vulnerability exposed other users' chat histories).
What's more, there's no guarantee that any Protected Health Information in the text will be secure against leaks. Where does that information go? How is it being stored? Another potential privacy problem: some services' terms and conditions state that any data submitted to them becomes their property and can be reviewed by staffers.
Adding to the confusion, when it comes to the output of Generative AI, there's a growing legal consensus that AI-generated material, whether images or text, cannot be copyrighted, in part because it's nearly impossible to attribute its sources.
We're still in the early innings of the Generative AI phenomenon, but as you can see, there is little transparency about the data being collected. When does something a healthcare provider submits to the model get retained and become part of its training data? When does it not? How is that data processed? How are decisions being made?
And Generative AI is being woven into the everyday tools your staff already use.
For example, Microsoft is integrating its Copilot assistant, built on the same OpenAI models that power ChatGPT, into the Windows 11 operating system and Office 365. It will do a lot of things, from comparing documents and transcribing calls to analyzing your calendar or summarizing a Word document into a PowerPoint. That's great from a productivity standpoint, but it also raises a host of new privacy questions about patient data being handled in emails or chats. How will all that data be protected or segmented between organizations?
Those are very valid questions and until there are clear answers, it’s up to providers to take preventative measures to mitigate the risks.
Prepare for the Coming Tsunami
As I noted earlier, blocking won't work. It's akin to whack-a-mole because employees will find a way to incorporate the new technology. In addition to Microsoft, Google is adding AI to Google Cloud, MakerSuite, and Workspace, while LinkedIn, Slack, Teams, and Box have all integrated ChatGPT. To the degree possible, healthcare providers should proactively try to figure out how to handle the new data questions surfacing with the spread of Generative AI deployments.
Here are three things to consider when preparing for the coming tsunami:
- Many vendor, partner, customer, and supplier agreements include provisions for third-party software and open-source licensing. Make sure you check the terms before processing anyone else's data. (OpenAI has said that it will enter into business associate agreements for healthcare customers.)
- If you are going to opt into data sharing with an API agreement, carefully review the data access rules. Understand who will have access to PHI and for what purposes. That's especially important for providers with customers who reside in the European Union, where the Artificial Intelligence Act, now moving through the legislative process, will impose binding rules on uses deemed high risk, including healthcare.
- Also, become fast friends with your chief privacy officer. Understand the concept of privacy by design. In practice, it requires organizations to consider privacy and data protection early in the design stages of building products and services, not after those products and services are already in use. Privacy-by-design concepts map neatly onto an AI-by-design environment. So, learn how data is being collected and used. Where is it coming from? Where is it being stored, and where is it going? Who's going to have access to it, and how do you plan to control it?
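To make the privacy-by-design point above concrete, one common data-minimization step is to strip obvious identifiers from free text before it ever leaves your organization for an external Generative AI service. The sketch below is purely illustrative: the patterns, placeholder tokens, and `redact` function are my own assumptions, and a handful of regexes is nowhere near a full HIPAA Safe Harbor de-identification, which covers 18 categories of identifiers.

```python
import re

# Illustrative patterns only -- real de-identification requires far more
# than regexes (names, addresses, dates, and free-text mentions of PHI
# all need dedicated handling).
PATTERNS = {
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[MRN]": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens before the
    text crosses the organization's boundary."""
    for token, pattern in PATTERNS.items():
        text = pattern.sub(token, text)
    return text

note = "Pt MRN: 483921, reachable at 555-867-5309, SSN 123-45-6789."
print(redact(note))
# -> Pt [MRN], reachable at [PHONE], SSN [SSN].
```

A gateway like this, sitting between clinicians and any Gen AI endpoint, is one way to enforce data-minimization policy centrally rather than relying on each user to remember it.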
I recently conducted a webinar (CLICK HERE) where I delve into the more technical tactics you can adopt to limit your risks. However you plan to meet the challenge, understand that no health provider has the luxury of postponing this any longer. Ready or not, Generative AI is heading your way, and when it arrives, it is going to hit like a tsunami.
We encourage you to share your thoughts on your favorite social platform.