
The official implementation of a pre-print paper on prompt injection attacks against large language models.

A steganography tool for encoding images as prompt injections for AIs with vision capabilities.

A benchmark for evaluating prompt injection detection systems.

Ultra-fast, low latency LLM security solution for prompt injection and jailbreak detection.

A GitHub repository showcasing various prompt injection techniques and defenses.

A practical guide to LLM hacking covering fundamentals, prompt injection, offense, and defense.

The automated prompt injection framework for LLM-integrated applications.

Learn about a type of vulnerability that specifically targets machine learning models.

A collection of examples for exploiting chatbot vulnerabilities using injections and encoding techniques.

Every practical and proposed defense against prompt injection.

A prompt injection scanner for custom LLM applications that tests vulnerabilities in LLM systems.

LLM Prompt Injection Detector designed to protect AI applications from prompt injection attacks.