Curated + custom prompt injections for AI models, focusing on security and exploit development.
Explore prompt injection techniques in large language models (LLMs), providing examples to improve LLM security and robustness.
An introductory class on understanding AI security risks and mitigation strategies.
A benchmark for prompt injection detection systems, providing a neutral way to evaluate their performance.
This repository provides a benchmark for prompt Injection attacks and defenses.
Every practical and proposed defense against prompt injection.
An open-source toolkit for monitoring Large Language Models (LLMs) with features like text quality and sentiment analysis.
The Security Toolkit for LLM Interactions, ensuring safe and secure interactions with Large Language Models.
Open-source LLM Prompt-Injection and Jailbreaking Playground for testing LLM security vulnerabilities.
A GitHub repository focused on ChatGPT jailbreaks, prompt leaks, and prompt security techniques.
A prompt injection scanner for custom LLM applications.
A guide for understanding and mitigating prompt attacks on large language models.