
RSA 2023: Hype and Reality: How to Evaluate AI/ML in Cybersecurity

Generative AI - Risks and Benefits

At Symantec, we often get questions from our customers about ChatGPT. What are the risks? What are the benefits? Not surprisingly, AI is a hot topic at the RSA Conference this week. In her session, "Hype and Reality: How to Evaluate AI/ML in Cybersecurity," Diana Kelley, the CSO2 (Chief Strategy Officer/Chief Security Officer) and co-founder of Cybrize, explored the hype around Generative AI to better understand whether it is ready for "prime time" and what organizations should consider when evaluating AI-based cybersecurity technologies.

In her introduction, Kelley reminded the audience about the introduction of Eliza — a natural language processing program that could convincingly mimic short human conversations — in the 1960s. When Eliza was introduced, “people thought computers were going to take over the world. It didn’t happen.” She pointed to self-driving cars as another AI-hype example. In 2015, Elon Musk said Tesla vehicles would drive themselves in two years, which has not happened. “The challenge was tougher than we realized. It doesn’t mean we won’t have it, but not as fast as we would like,” she said.

Generative AI: Ready for Prime Time?

Kelley went on to say that Generative AI is a descendant of Eliza and of HAL 9000, the fictional artificial intelligence in the film "2001: A Space Odyssey." "It is just sci-fi for now," she added. Generative AI systems like ChatGPT and DALL-E are trained on large amounts of text and image data. Generative AI is great for brainstorming — doing research against a trusted corpus of knowledge — but it is not a fully trusted system yet. "Not all the results are accurate. How many of us doubt what the computer tells us? We are going to trust these systems so they have to be outlets we can trust," Kelley said.

At Symantec, we take a similar position: Generative AI is not perfect today, but it is not vaporware either — there is real promise here, and it is going to mature quickly. We have been using it internally and exploring use cases, including how to create more value from our petabyte-scale security intelligence.

That said, while these models continue to be tuned to improve accuracy, there are three areas where Generative AI creates potential cybersecurity and privacy risks:

  • Data leakage: Without thinking twice, users can input sensitive or confidential company information into Generative AI systems such as ChatGPT and, intentionally or unintentionally, expose PII and put their company's reputation at risk. Not only could they be uploading sensitive documents ("please summarize this document") or asking queries that leak sensitive corporate information, but the information and queries may be incorporated back into the system's model and served as answers to other users. Symantec Enterprise Cloud has unveiled a solution that tackles this problem by giving our customers the guardrails to gain visibility into these conversations and apply data security controls to them (a minimal illustrative sketch of this kind of prompt guardrail follows this list).
  • Copyright issues: Generative AI is also being used to generate content such as code, images, and documents. However, you do not know the source of that content. Anyone using systems like ChatGPT, Midjourney, or GitHub Copilot to create content needs to understand that the origins of that content may not be copyright free. You could end up integrating improperly licensed code into the applications you produce, or publishing improperly licensed documents and images, resulting in copyright infringement.
  • Abuse by attackers: Generative AI systems can help construct better phishing emails and, yes, they can even write some code. But malware code or a phishing email message body is really only 1% of the entire effort required for attackers to breach a network. These systems are information content development tools, not robots — you can ask one to "tell me all the common ways to infect a machine," but you cannot ask it to "infect these machines." Further, Generative AI systems are not inventing novel attacks; they are recapitulating existing techniques. At best, they can make a mediocre attacker more efficient, but security solutions already have to protect against the most sophisticated techniques, so Generative AI will not give attackers the upper hand. If anything, defenders will benefit from Generative AI solutions more than attackers.
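
To make the data leakage risk concrete, here is a minimal sketch of the kind of prompt-inspection guardrail a proxy or endpoint agent might apply before a user's text leaves the organization for an external Generative AI service. This is illustrative only and is not Symantec's implementation; the names (PII_PATTERNS, scan_prompt, guarded_submit, call_generative_ai) are hypothetical, and real data security controls use far richer detection than these toy regexes.

```python
import re

# Hypothetical, minimal PII patterns for illustration only -- a production
# DLP control would use much more robust detection (validators, ML
# classifiers, dictionaries of internal project code names, etc.).
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any PII patterns found in the prompt."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]

def guarded_submit(prompt: str) -> str:
    """Block prompts containing likely PII before they leave the
    organization; otherwise forward them to the AI service."""
    findings = scan_prompt(prompt)
    if findings:
        # Policy decision: block outright, redact, or log and warn.
        raise ValueError(f"Prompt blocked, possible PII detected: {findings}")
    return call_generative_ai(prompt)

def call_generative_ai(prompt: str) -> str:
    # Placeholder for the real vendor API call (e.g., an HTTPS request).
    return f"[model response to {len(prompt)}-char prompt]"

if __name__ == "__main__":
    print(guarded_submit("Summarize the benefits of zero trust."))
    try:
        guarded_submit("Email jane.doe@example.com about SSN 123-45-6789.")
    except ValueError as err:
        print(err)
```

In practice a control like this would sit in-line (for example, in a cloud access security broker), and the policy might be to redact matches or warn the user rather than block the whole prompt.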

Returning to Kelley’s talk, we agree that “AI is about trust — do you trust what is coming out of it? There will be a learning curve for how these tools — and the results that they generate — will be used.” With Generative AI solutions still producing ‘hallucinations’ and inaccurate results, some of which can be very subtle, early adopters will need to ensure subject matter experts review results before trusting these systems.

Looking Ahead 

Overall, AI will provide benefits to defenders. In addition to providing solutions so customers can safely use Generative AI systems today, we are tuning models for use throughout Symantec products and services. Just as the internet is intertwined with everything today, AI too will become a utility. To learn more, we invite you to talk to Symantec about the risks — and benefits — that our AI and ML technologies can provide.

About the Author

Eric Chien

Director, Symantec Enterprise Division, Broadcom

Chien leads a team of engineers and threat hunters who investigate and reverse-engineer the latest high-impact Internet security attacks. Drawing on these attack techniques and trends, he develops and drives threat intelligence and novel security solutions to prevent and mitigate the next big attack.
