There's no doubt that you've heard about OpenAI's November 6th release of their custom GPTs marketplace. This was a HUGE development as it furthered what they've done with ChatGPT to democratize access to AI. What is it? What are its capabilities? What risks are associated? What examples already exist of risks becoming reality? And how can you learn more?
Are you one of these people? Then you should join us on Tuesday, December 12!
CIOs or IT Leaders
Learning & Development Leaders
ChatGPT's Custom GPT Marketplace
The ChatGPT custom GPT Marketplace is like a digital tailor shop for AI language models. Just as a tailor customizes clothing to fit your specific style and needs, this marketplace allows businesses and individuals to tailor language models to their unique requirements.
In this marketplace, you can find specialized GPTs, layered on top of ChatGPT's model, that have been fine-tuned for specific tasks or industries. For example, imagine a legal firm that needs an AI model to understand and generate legal language accurately. They can go to this marketplace and find a ChatGPT custom GPT that’s already trained or can be further trained in legal terminology and document drafting.
The Exploding Risks of Prompt Injection
These new capabilities are so exciting! Unfortunately, bad actors are also excited because they see tremendous opportunity to behave badly. Their go-to tool is what's known as prompt injection.
Imagine you’re having a conversation with ChatGPT (a robot). It's designed to respond based on the words or ‘prompts’ that you give it. Prompt injection is like someone sneakily slipping in a confusing or misleading sentence into your conversation with the robot. Because ChatGPT, or whatever model you're using, trusts and follows the instructions it’s given, it can be tricked into saying or doing something that it shouldn’t. This could mean giving out wrong information, or in a worst-case scenario, leaking sensitive data or performing actions that could be harmful.
The risk presents here when someone who understands how the AI works decides to exploit this, manipulating the AI into behaving in unexpected or harmful ways. It’s like having a really smart parrot that repeats everything it hears — if someone says the wrong things to it, it could repeat things that cause problems. Bottom line, these prompt injections can lead to misinformation, security breaches, and other negative outcomes. Next, I describe a recent attack.
Real-World Prompt Injection Attack
In a recent cybersecurity incident, the collaborative workspace platform Notion became the stage for a clever prompt injection attack. Here’s how the scenario unfolded:
An attacker, aiming to exploit the platform’s reliance on AI for content creation, crafted a Notion page that seemed innocuous at first glance. However, the page title was laced with a hidden command. When an unsuspecting user, lured by the attacker, visited the page and used the “Create a template” feature, the underlying Large Language Model (LLM) misinterpreted the title as a direct instruction.
Acting upon the malicious title, the AI obediently generated a template. But this was no ordinary template—it contained a phishing link ingeniously disguised within what appeared to be legitimate content. The trap was set.
The user, trusting the familiar environment of Notion, was now at risk of clicking the link, which could lead to data leakage, financial loss, or worse, compromising personal or company security.
This incident highlights the ingenuity of cyber attackers and underscores the importance of vigilance in even the most routine online activities. It serves as a cautionary tale for users and platforms alike to stay ahead in the cybersecurity game, where AI’s capabilities can be a double-edged sword.
Empowering YOU with Understanding
Although I'm WILDLY excited about this new marketplace and the capabilities that are being built and released by people all over the world, there are associated risks that accompany it such as prompt injections. I want to ensure that you understand both the exciting capabilities and the ways to mitigate any cybersecurity downsides.
If this sounds like a scary techie discussion to you, rest assured that I make topics like this accessible to everyone. It's my personal super power 💪🏻 Knowledge is power and everyone deserves to be empowered with their own understanding!
Online (Free) Event Details
Join me on December 12th at the Cybersecurity CIO Forum hosted by Premier Connects.
Title: “Navigating the risks of prompt injection in AI integration."
I’ll be sharing my thoughts about the exploding risks created by custom GPTs on LLMs, showcasing examples from ChatGPTs newly launched marketplace and an exploitation of Notion.
I’ll cover the following:
➡️ The emergence and risks of prompt injection in AI systems
➡️ Real-world examples and business implications of prompt injections
➡️ Strategies to mitigate prompt injection risks
⚡️I have a free Education / Communication content plan for anyone who attends!
This keynote is designed to elevate understanding across all professionals. It’s not a deep technical dive, rather, ensuring leaders know the risks, have real-world stories they can tell to raise awareness, and know where/how to assign resources.
In conclusion, the release of OpenAI’s custom GPTs Marketplace marks a significant advancement in AI accessibility and capabilities, tailored to specific industry and use case needs. However, this innovation brings with it the challenge of prompt injection, a cybersecurity threat that can manipulate AI into harmful actions, as exemplified by the recent Notion incident. It’s crucial to understand these risks while embracing the potential of AI.
Join us on December 12th at the Cybersecurity CIO Forum to gain insights into navigating these challenges, understand real-world implications, and learn strategies to mitigate risks. This event is an opportunity for you to empower yourself with knowledge and strategies for safe AI integration in your field.
See you there!