Category
Explore by categories

Folly
Open-source LLM Prompt-Injection and Jailbreaking Playground for evaluating LLM security vulnerabilities.

Prompt Injection Playground
A GitHub repository for testing prompt injection techniques and developing defenses against them.

JudgeDeceiver
GitHub repository for optimization-based prompt injection attacks on LLMs as judges.

PromptInjectionBench
A repository for benchmarking prompt injection attacks against AI models like GPT-4 and Gemini.

Pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package.

StruQ
Official implementation of StruQ, which defends against prompt injection attacks using structured queries.

llm-prompt-injection-filtering
Uses the ChatGPT model to filter out potentially dangerous user-supplied questions.

Tensor Trust
A prompt injection game to collect data for robust ML research.

last_layer
Ultra-fast, low latency LLM security solution for prompt injection and jailbreak detection.

Open-Prompt-Injection
This repository provides a benchmark for prompt Injection attacks and defenses.

Vigil
Vigil is a security scanner for detecting prompt injections and other risks in Large Language Model inputs.