top of page

A new attack vector called prompt injection

Apr 12, 2023

Side-channel attack vector on prompts

According to Jen Easterly, the director of the Cybersecurity and Infrastructure Security Agency (CISA), Large Language Models (LLM) and Generative Pre-trained Transformer (GPT) are the biggest threats we'll face this century.

This is Katy Craig in San Diego, California

Basically, these advanced AI technologies can be used for good, but they can also be used for evil. For example, LLM and GPT can be used to create fake news stories or social media posts that look totally legit. And let's be honest, we're all susceptible to falling for clickbait every now and then. But this is much worse. Plus, these technologies can automate cyber attacks, making them faster, more efficient, and more difficult to detect.

But the real kicker is a new attack vector in LLM called Prompt Injection (PI). Basically, an attacker can inject malicious instructions into a prompt that a language model is trained on, causing it to produce some seriously bad output or ignore previous instructions. What’s worse is we humans may not know or understand why or when it’s happening!

So, what does all of this mean? Well, it means we need to be careful and stay on our toes. And if you're like me, you might want to brush up on your GPT skills and learn how to detect and defend against these threats.

This is Katy Craig. Stay safe out there.

bottom of page