How Generative AI is Creating New Cybersecurity Threats at Scale

Avatar for Trevor ParksBy Trevor Parks|May 25, 2023|9:48 am CDT

With the release of ChatGPT in November of 2023, generative AI — a branch of artificial intelligence that creates new text, images, video, and other content — went mainstream. Tech journalists touted the transformative potential of the technology, and everyday people started wondering how it might make their lives better. Generative AI’s newfound position in the spotlight also brought with it discussions of its ethical use and how to prevent malicious and inappropriate applications, dangers that become increasingly apparent when viewed through the lens of cybersecurity.

Not only can generative AI tools be used to take social engineering to new levels by creating more sophisticated attacks with more convincing text, images, video, and voice to target victims, they also lower the barriers to entry for cybercriminals. Previously, hackers needed some combination of coding, writing, and/or graphic design skills to launch a convincing attack. Now almost anyone with the desire and access to the internet can do so.

This means enterprise technology and cybersecurity leaders face new threats at a potentially unprecedented scale. Let’s take a look at how organizations can identify potential dangers and begin the strategic review and planning process needed to prepare for these challenges quickly.

How Generative AI Is Being Used to Create More Convincing Scams

  • Fake written content: Generative AI can create fraudulent content and digital interactions, including real-time conversations, to impersonate users and elevate social engineering and phishing attacks. It also enables non-native English speakers to refine messages and avoid common linguistic pitfalls.
  • Fake digital content: AI can be used to generate fake content at scale, including avatars, social media profiles, and malicious websites that can be used to collect credentials and user information.
  • Video deep fakes: Believable deep fake videos can be used to deceive users into taking action and divulging credentials, potentially undermining the effectiveness of employee cybersecurity training.
  • Voice deep fakes: Audio AI tools are available that can be used to simulate the voices of managers and senior executives, leaving fraudulent voice memos or other communications with instructions for staff.
  • Fake documents: Combining written copy and image/video generation capabilities, generative AI can create authentic-looking documents and authorizations that can be used to breach defenses.

How Generative AI is Creating New Threat Vectors

  • AI-written code: Companies and developers who leverage generative AI, in search of efficiency, to develop code for applications and plug-ins that connect to existing infrastructure and applications may inadvertently create significant cybersecurity gaps. It is increasingly important that all organizations insist on code provenance or a Software Bill of Materials (BoM) to know if AI was involved in the process.
  • Multiplication of threat vectors: Generative AI can be used to amplify threats, once an initial breach has occurred. AI tools can be used to modify code at scale, quickly giving control to attackers. These tools can also be trained on a dataset of known vulnerabilities and used to automatically generate new exploit code to target multiple vulnerabilities in rapid succession.
  • Reconnaissance at scale: Cybercriminals can use generative AI to scan massive amounts of company data, summarizing it to identify employees, relationships, and assets, potentially leading to user impersonation, blackmail, or coercion.
  • Exposure of proprietary information: Well-intentioned employees can inadvertently expose internal information to large language models, like ChatGPT, which assimilate the information and use it to train the AI. While individual users can opt out of data collection, it is not foolproof and is leading some companies to ban their employees from using generative AI for business purposes.
  • Prompt injection: This new type of threat involves hijacking a language model’s output to allow an attacker to bypass user prompts and return any text the attacker desires. This can be used to covertly inject malicious code into responses generated by the AI.
  • Integrations & APIs: As the development of apps and features built on top of leading generative AI models continues to grow, integrations and APIs will create new doorways into corporate networks and potential gaps in security.

What Can Businesses Do to Help Protect Themselves?

Generative AI is enabling cybercriminals to scale attacks in terms of speed, volume, and variety. To counter this, companies should start by thoroughly reviewing their security postures, including assessing current systems, identifying vulnerabilities, and making the necessary adjustments to enhance protection. 

Employee training initiatives should also be re-evaluated, as cybersecurity is a shared responsibility across the organization. By educating employees about the potential dangers of generative AI and teaching them how to identify and respond to threats, companies can help mitigate the risk of attacks.

Organizations looking to stay ahead of emerging AI-based threats should also consider the following:

  • Look at adopting AI security and automation tools to enhance your defenses and help level the playing field. AI can help take noise out of the system by distinguishing real threats from false alarms and freeing your security staff to focus only where they are needed most.
  • Focus on addressing threats faster by using security services like Endpoint Detection and Response (EDR) to provide real-time feedback on unfolding threats at the network edge.
  • Apply Zero Trust Network Access (ZTNA) and Secure Access Service Edge (SASE) approaches to move trust away from the network perimeter and instead continuously monitor users, devices, and activities within your network.

You Don’t Have to Go It Alone

The challenge of keeping up with the rapid pace of change in cybersecurity threats can be daunting, even for the most experienced IT teams. This is where a managed security service provider (MSSP) can help by bringing extensive expertise, resources, and up-to-date knowledge to the table. By partnering with an MSSP, organizations can effectively offload some of the burden of monitoring and responding to threats, allowing them to focus on their core business functions.