The Dangers of Generative AI: Privacy and Security Risks

Shedding light on the cybersecurity and privacy pitfalls that large language models and other emerging artificial intelligence technologies pose for businesses globally


In recent years, generative artificial intelligence (AI), especially the impressive capabilities demonstrated by generative pre-trained transformer (GPT) models, has emerged as a groundbreaking development in the technology landscape. From creative content generation and image creation and manipulation to enhancing software development and natural language processing tasks, generative AI has rapidly found extensive applications. Large language models (LLMs), as well as diffusion models for image generation, have made it far easier for users to converse with these AI tools on a wide range of topics, from software development to research and even creative writing.

However, beneath the surface of these technological marvels lies a set of risks that pose a serious threat to individuals and organizations alike. The dangers of generative AI are beginning to become clear. From the disclosure of proprietary and personal information, including intellectual property, to deepfakes and the malicious use of LLMs to craft well-formed spam and phishing emails, these tools have created a new risk surface that businesses need to be prepared to address.


What is Generative AI?

Generative AI refers to a class of AI models that can create new content, whether text, images, or even audio, based on patterns learned from vast datasets drawn mostly from the open Internet and user inputs. GPT, a prime example of generative AI, has been trained on an extensive corpus of diverse texts, enabling it to generate human-like text with astounding coherence and creativity from user prompts. This technology has the potential to revolutionize the way we create and consume content, but businesses must consider its impact on their operations.

While this technology holds immense promise across industries, it also poses serious risks to privacy and data security. Despite its numerous benefits, the ability of generative AIs to produce content from vast datasets, including user inputs, raises the alarm about malicious use and accidental exposure. As these AI technologies grow more sophisticated, it is important to consider the potential for misuse. It is becoming evident that the line between creative expression and threats to organizations is easily blurred.

Privacy Incidents: Inadvertent Disclosure

Most people understand the nature of confidential and proprietary information and do their best to protect it. Unfortunately, accidents happen, and an entire class of incidents, known as inadvertent disclosures, describes the unintentional release or exposure of sensitive information or data to unauthorized individuals or the public. These incidents often occur due to human error, technical glitches, or misconfigurations, leading to potential privacy breaches and data exposure. While many companies, particularly in financial services and health care, face inadvertent disclosures frequently, the accidental release of information typically reaches only a single individual or a small group. With generative AI, however, that is changing.

Large numbers of employees are attempting to use GPT models to aid in their day-to-day work. Examples include executives pasting full strategy documents into their prompts, developers copying company code, and medical practitioners using protected health information to diagnose patients. Because many AI engines train on user input, sensitive data and personally identifiable information have been added to the training sets of many AIs, exposing that data to other users. Researchers have shown that confidential information appearing in even a single document can be drawn back out of AI models with relative ease.
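
As a simple illustration of one kind of guardrail an organization might place between employees and an external AI service, the sketch below screens outbound prompts for obviously sensitive patterns before they are ever sent. The function name, the patterns shown, and the blocking behavior are assumptions made for illustration only; a production control would rely on dedicated data loss prevention tooling and patterns tuned to the organization's own data.

```python
import re

# Illustrative patterns only; real deployments use DLP tooling and
# organization-specific rules, not a short hard-coded list like this.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US SSN-like number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible secret or API key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in an outbound prompt."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    findings = screen_prompt("Summarize the case of jane.doe@example.com, SSN 123-45-6789")
    if findings:
        print("Blocked before reaching the AI service:", ", ".join(findings))
    else:
        print("Prompt passed screening")
```

Even a lightweight check like this makes the point that prompts are outbound data transfers and should be governed like any other channel that can carry confidential information out of the business.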

Businesses need to plan for what to do if users of an AI engine discover a disclosure of their confidential data or customer information. Even if inadvertent, the disclosure of personally identifiable information or confidential data would violate most global privacy regulations and could lead to severe legal consequences for the organization responsible.

Privacy incidents like these can be devastating to organizations, particularly those that are not proactively prepared, yet such incidents are just the tip of the iceberg.

Deepfakes: Disinformation and Information Warfare

Deepfakes have emerged as a potent weapon in the realm of disinformation and information warfare. Powered by generative AI, deepfakes are hyper-realistic manipulated media, including video and audio, that convincingly portray individuals saying or doing things they never did. The technology is a double-edged sword, offering entertainment value while posing significant risks. On one hand, it enables filmmakers and content creators to produce stunning visual effects and bring fictional characters to life like never before. On the other, malicious actors can create and circulate fabricated videos of politicians, celebrities, or other influential figures that mislead the public, fuel political unrest, and manipulate public opinion.

This revolutionary technology, while impressive in its capabilities, has raised grave concerns about its potential for spreading false information, destabilizing societies, and eroding trust in the authenticity of online content. Examples abound. The US Federal Trade Commission has warned that malicious actors are integrating AI into the capabilities they offer for sale. One reporter used a combination of video and audio AI deepfakes to fool her bank, interviewees, and even acquaintances. A fake, AI-generated image of an explosion near the Pentagon caused a stock market dive. In fact, the chair of the US Securities & Exchange Commission anticipates a future in which AI leads to the next financial crisis.

The domains of psychological and information warfare go back decades, as nations have learned to use communications technologies to influence the populations of their adversaries, targeting everything from their cultures to their elections and the foundations of their governments. AI is the latest technology to enable further attacks along those lines, and understanding the potential consequences of deepfakes is crucial in navigating the evolving landscape of information warfare and safeguarding the integrity of truth and accuracy.

Given this, as AI image manipulation becomes increasingly sophisticated and more accessible, incident responders need to plan for the impact of a disinformation attack on their business. Responding to such an attack will rely far more heavily on communications leaders than responses to other types of incidents do.

Combating the spread of false information demands a concerted effort from business leaders. Even that, however, does not cover the full spectrum of emerging artificial intelligence threats they need to consider.

Cyber Attacks & Fraud

The online environment is brutal for companies around the world. Billions have been invested in cybersecurity defenses, and yet data breaches and incidents continue. The average incident globally costs $4.35M even when as few as 2,000 consumer records are involved, and costs run much higher in certain geographies and industries. Defenders are locked in a never-ending arms race with attackers to protect their organizations.

Unfortunately, GPTs and large language models offer threat actors a new means to generate new forms of attack, improve their existing malware, and reduce the time it takes to develop and deploy new malicious capabilities. Fortunately, at least for the moment, direct malicious code generation remains nascent, with only relatively simple examples of malicious code demonstrated online. That hasn't stopped malicious actors from purportedly stealing over 100,000 ChatGPT credentials, likely at least in part for their own use.

More consequentially, cybercriminals have begun using generative AI to craft sophisticated spam and phishing emails that imitate legitimate sources, making them harder to distinguish from authentic messages. The language capabilities of GPTs make them far better at producing believable messages than the attacks commonly seen in the past. In fact, GPT capabilities are being developed specifically for this purpose: in one recent instance, a tool known as WormGPT was advertised in underground forums for exactly this use.

By capitalizing on human psychology and social engineering, these AI-generated phishing emails can trick individuals into divulging sensitive information or clicking on malicious links, leading to devastating data breaches. Organizations can expect increased employee and customer compromise and greater fraud losses from these more sophisticated attacks. Responding to such incidents requires businesses to be proactively prepared: mitigating these emerging threats will demand not only advanced techniques and automation, but also new processes, procedures, and training so security teams are ready to deal with them.

Given These Threats, Incident Response Must Evolve 

Generative AI presents unique challenges for incident response teams, and these examples only scratch the surface of this new threat. Traditional cybersecurity methods typically focus on identifying known threats and malicious patterns, but AI is expected to generate novel and sophisticated attacks that may bypass conventional defense mechanisms and training. In addition, incidents caused by AI aren't necessarily cybersecurity matters at all and may fall more to privacy and legal teams, putting additional pressure on businesses to integrate and cross-train their teams.

While generative AI undoubtedly promises transformative benefits, its dark side casts a looming threat over data privacy and security. Incident response teams must adapt their strategies to tackle the challenges posed by novel AI-generated threats. Simultaneously, businesses must expect privacy and cybersecurity laws and regulations to evolve to keep pace with technological advancements, introducing new challenges for their legal and compliance teams. Integrating these requirements into your incident response program is already crucial, and will only become more so moving forward.

To harness the full potential of generative AI responsibly and stay vigilant against the rising tide of incidents stemming from misuse of this powerful technology, organizations must develop AI policies and guidelines that prioritize data protection, invest in robust cybersecurity measures and technology automation, and proactively train and prepare teams across the business for the breadth of implications of these new technologies. Only by striking a delicate balance between innovation and responsibility can we navigate this new era of AI-driven advancement while safeguarding our security and privacy.

Need help improving your security posture?

Use BreachRx to build tailored incident response playbooks and exercise your team today!
