
Open-source LLM Prompt-Injection and Jailbreaking Playground for evaluating LLM security vulnerabilities.

A GitHub repository for testing prompt injection techniques and developing defenses against them.

GitHub repository for optimization-based prompt injection attacks on LLMs as judges.

A repository for benchmarking prompt injection attacks against AI models like GPT-4 and Gemini.

Easy to use LLM Prompt Injection Detection / Detector Python Package.

Official implementation of StruQ, which defends against prompt injection attacks using structured queries.

Uses the ChatGPT model to filter out potentially dangerous user-supplied questions.

A prompt injection game to collect data for robust ML research.

Ultra-fast, low latency LLM security solution for prompt injection and jailbreak detection.

This repository provides a benchmark for prompt Injection attacks and defenses.

Vigil is a security scanner for detecting prompt injections and other risks in Large Language Model inputs.