GenAI: A Double-Edged Sword for Cybersecurity & Data Protection

Posted April 24, 2024 | Technology |

When it comes to cybersecurity and data protection, generative AI (GenAI) is a double-edged sword: cybercriminals and state-sponsored actors can use the technology to conduct sophisticated attacks, while cybersecurity and data-protection vendors can employ large language models (LLMs) to enhance the capabilities of their solutions to counter evolving threats. This Advisor takes a closer look at GenAI threats and the LLM solutions being used to fight them.

Evolving Threats

Most of us have encountered poorly written and formatted email scams at one time or another — some so bad that they are almost laughable. Recently, I have noticed an increase in scams that are well written and quite authentic looking. Some have attributed this trend to the availability of easy-to-use GenAI tools that allow even relatively inexperienced users to produce convincing, human-like text for malicious purposes.

Typical scams include email phishing and social media lures personalized to appeal to specific audiences. They often include images of company brands and logos. Together, these qualities can make such scams appear quite convincing even to the computer-savvy. They also make it difficult for cybersecurity tools and the security professionals using them to identify actual threats among legitimate communications. The goals of these scams include tricking recipients into clicking fake links and revealing personally identifying information, stealing financial passwords and corporate log-in credentials, and otherwise compromising online security.

Hackers can also utilize GenAI tools to create code that carries out sophisticated attacks, such as breaking into enterprise databases and applications or installing ransomware and other malware on corporate networks, resulting in extortion schemes or data breaches. More recent developments involve voice-replication technology and GenAI video tools that allow cybercriminals to create deep fakes — both videos and life-like voice recordings — used to impersonate people to commit fraud or disseminate disinformation.

Evolving Solutions

Cybersecurity and data-protection vendors are responding to evolving threats by implementing LLMs to enhance their solutions. Key application areas include threat intelligence and risk assessment, data classification and loss prevention, and incident response.

Threat Intelligence & Risk Assessment

LLMs are being widely employed to improve threat intelligence and risk assessment. These models are trained on huge volumes of historical security data and also draw on current, near-real-time security incident data to identify and prioritize threats, giving organizations insight into potential threats and vulnerabilities. Typical implementations employ a commercial LLM in conjunction with a custom-built/proprietary (i.e., domain-specific) security model.
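To make the triage idea concrete, here is a minimal sketch of how alert data might be rendered as a natural-language query for a security-tuned LLM and ranked by priority. Everything here is hypothetical: the `Alert` fields, the prompt wording, and the `heuristic_priority` function (a simple stand-in for the score a real model would return).

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str       # e.g., "email-gateway", "endpoint" (illustrative)
    severity: int     # vendor-assigned severity, 1 (low) to 5 (critical)
    asset_value: int  # business value of the affected asset, 1 to 5

def build_triage_prompt(alert: Alert) -> str:
    """Render an alert as a natural-language query. A production system
    would send this to a security-tuned LLM; here we only build the text."""
    return (
        f"Assess the risk of this alert: source={alert.source}, "
        f"severity={alert.severity}, asset value={alert.asset_value}. "
        "Reply with a priority from 1 (ignore) to 10 (urgent)."
    )

def heuristic_priority(alert: Alert) -> int:
    """Stand-in for the model's answer: weight severity by asset value,
    scaled and clamped to a 1-10 priority."""
    return max(1, min(10, round(alert.severity * alert.asset_value / 2.5)))

alerts = [
    Alert("email-gateway", severity=2, asset_value=1),
    Alert("endpoint", severity=5, asset_value=5),
    Alert("firewall", severity=3, asset_value=4),
]
# Triage: highest-priority alerts first, as an analyst console would show them.
ranked = sorted(alerts, key=heuristic_priority, reverse=True)
for a in ranked:
    print(a.source, heuristic_priority(a))
```

The pattern to note is the division of labor the vendors describe: natural-language interaction handled by a general-purpose LLM, scoring informed by a domain-specific model.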

For example, Microsoft Copilot for Security combines OpenAI’s GPT-4 with Microsoft’s own proprietary security model. GPT-4 handles understanding natural language inputs and generating natural language output, including interpreting complex queries and providing insights into possible threats. Microsoft’s proprietary model, by contrast, is tailored specifically for security-related tasks: it was trained to incorporate a growing set of security-specific skills and is supported by Microsoft’s global threat intelligence and more than 65 trillion daily signals.

Threat intelligence specialist Recorded Future has also incorporated OpenAI’s GPT model into its Intelligence Cloud platform. According to the company, the model was trained on over 10 years of threat analysis data amassed by its threat research division. The platform automatically collects and structures data related to both adversaries and victims from text, imagery, and technical sources, applying natural language processing and machine learning to analyze and map threat insights across billions of entities in real time.

Data Classification & Loss Prevention

Data-protection vendors are employing GenAI to automate the identification and classification of data based on its context and according to its degree of sensitivity. GenAI is also used to apply access controls based on employees’ “right to know” to prevent corporate data from leaking (accidentally or nefariously) outside the organization.
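The combination of sensitivity classification and "right to know" access control can be sketched as follows. The sensitivity tiers, regex patterns, and role map below are all invented for illustration; a GenAI classifier would infer sensitivity from context rather than from fixed patterns, which is precisely its advantage over this kind of rule.

```python
import re

# Hypothetical sensitivity tiers, highest first. A rule-based stand-in for
# a context-aware GenAI classifier.
PATTERNS = {
    "restricted": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # SSN-like IDs
    "confidential": re.compile(r"(?i)\b(salary|contract)\b"),  # HR terms
}

def classify(text: str) -> str:
    """Return the highest-sensitivity label matched, else 'public'."""
    for label in ("restricted", "confidential"):
        if PATTERNS[label].search(text):
            return label
    return "public"

# Illustrative "right to know" map: which roles may handle each tier.
ALLOWED = {
    "public": {"everyone"},
    "confidential": {"hr", "finance"},
    "restricted": {"hr"},
}

def may_share(text: str, role: str) -> bool:
    """Gate an outbound share on the classification of its content."""
    allowed = ALLOWED[classify(text)]
    return "everyone" in allowed or role in allowed
```

In a deployed tool, `may_share` would sit in the egress path (email, cloud upload, clipboard), blocking or flagging transfers that fall outside the role's entitlement.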

For example, Cyberhaven has developed an autonomous AI agent (Linea AI) designed to prevent corporate data leaks. Linea AI differs from earlier AI technologies like user and entity behavior analytics (UEBA), which examine single events or simple correlations between events; instead, it examines the entire workflow across time for each piece of data in an organization.

Linea AI uses what the company calls a “Large Lineage Model,” designed to analyze corporate workflows and predict the next likely action or behavior. In this manner, it can detect employees mishandling sensitive data when an observed action deviates from what the model predicts. The model has been trained to identify sensitive data even if the company does not know it exists (or has not defined it within a policy). It also understands risks to data and surfaces risky incidents.
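The predict-then-flag-deviation idea can be illustrated with a toy next-action model. This is not Cyberhaven's method (the vendor does not publicly document its model); it is a minimal sketch, assuming workflows are sequences of named actions and that an action is suspicious when it rarely or never follows the preceding one in observed data.

```python
from collections import Counter, defaultdict

def train(workflows):
    """Count which action follows each action across observed workflows."""
    transitions = defaultdict(Counter)
    for wf in workflows:
        for prev, nxt in zip(wf, wf[1:]):
            transitions[prev][nxt] += 1
    return transitions

def is_anomalous(transitions, prev, action, min_prob=0.05):
    """Flag an action whose observed probability after `prev` falls below
    a threshold (or that was never seen at all)."""
    counts = transitions[prev]
    total = sum(counts.values())
    if total == 0:
        return True
    return counts[action] / total < min_prob

# Hypothetical normal workflows for a sensitive document.
normal = [
    ["open_doc", "edit", "save", "upload_sharepoint"],
    ["open_doc", "edit", "save", "email_internal"],
    ["open_doc", "save", "upload_sharepoint"],
]
model = train(normal)
print(is_anomalous(model, "save", "upload_sharepoint"))  # prints False
print(is_anomalous(model, "save", "copy_to_usb"))        # prints True
```

A copy to a USB drive after a save has never been observed, so it is surfaced as a risky incident, even though no per-event rule was ever written for USB devices.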

Incident Response

Cybersecurity vendors are increasingly incorporating GenAI into their platforms to enhance their incident-response capabilities. One of the most important uses is filtering and prioritizing alerts, allowing security experts to focus on the most pressing threats. Another use is automating routine tasks. This includes correlating and analyzing attack data, recommending the best course of action to remediate an incident, and generating reports, including plain-language summaries that give analysts immediate context on the security incidents confronting them. In effect, automating such time-consuming tasks frees up security experts to address higher-level strategic activities.
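The plain-language summary step can be sketched with a simple template over structured incident data. The field names and the incident record below are invented for illustration; a GenAI-backed tool would generate this text with an LLM rather than a fixed template, adapting tone and detail to the audience.

```python
def summarize_incident(incident: dict) -> str:
    """Produce a plain-language summary an analyst can read at a glance.
    Template stand-in for what an LLM would generate from the same fields."""
    return (
        f"{incident['count']} {incident['type']} alerts hit "
        f"{incident['target']} between {incident['start']} and "
        f"{incident['end']}; recommended action: {incident['action']}."
    )

print(summarize_incident({
    "count": 14, "type": "failed-login", "target": "vpn-gateway-2",
    "start": "02:10", "end": "02:25", "action": "lock affected accounts",
}))
```

The value is in the framing: the raw alerts are correlated into one sentence stating what happened, where, when, and what to do next.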


GenAI appears to be lowering the barrier to entry for developing sophisticated cyberattacks and scams. However, the technology also offers solutions to help meet the evolving threats posed by hackers, cybercriminals, and state actors. In short, GenAI brings significant benefits to cybersecurity, particularly in the areas of threat intelligence and risk assessment, data classification and loss prevention, and incident response. Going forward, we should expect to see cybersecurity and data-protection vendors incorporate multiple LLMs — trained to support specific security needs — within their platforms and tools.

Finally, I’d like to get your opinion on the use of GenAI in cybersecurity. As always, your comments will be held in strict confidence. You can email me at or call +1 510 356 7299 with your comments.

About The Author
Curt Hall
Curt Hall is a Cutter Expert and a member of Arthur D. Little’s AMP open consulting network. He has extensive experience as an IT analyst covering technology and application development trends, markets, software, and services. Mr. Hall's expertise includes artificial intelligence (AI), machine learning (ML), intelligent process automation (IPA), natural language processing (NLP) and conversational computing, blockchain for business, and customer…