Category
Explore by categories

Prompt injection explained
A comprehensive overview of prompt injection vulnerabilities and potential solutions in AI applications.

Prompt Injection Cheat Sheet
Explore Prompt Injection Attacks on AI Tools such as ChatGPT with techniques and mitigation strategies.

Simon Willison’s Weblog
A blog discussing prompt injection vulnerabilities in large language models (LLMs) and their implications.

Embrace The Red
Explores security vulnerabilities in ChatGPT plugins, focusing on data exfiltration through markdown injections.

Learn Prompting
A resource for understanding prompt injection vulnerabilities in AI, including techniques and real-world examples.

AIPromptJailbreakPractice
AIPromptJailbreakPractice is a GitHub repository documenting AI prompt jailbreak practices.

llm-security-prompt-injection
This project investigates the security of large language models by classifying input prompts to discover malicious ones.

PromptSafe
Save your precious prompt from leaking with minimal cost.

securityGPT
Protect your GPTs through secure prompts to prevent malicious data leaks.

promptmap
A prompt injection scanner for custom LLM applications.

vigil-jailbreak-ada-002
A dataset containing embeddings for jailbreak prompts used to assess LLM vulnerabilities.

