
JailBench is a comprehensive Chinese dataset for assessing jailbreak attack risks on large language models.

Breaker AI is an open-source CLI tool for security checks on LLM prompts.

Breaker AI is a CLI tool that detects prompt injection risks and vulnerabilities in AI prompts.

A repository for benchmarking prompt injection attacks against AI models like GPT-4 and Gemini.

Short list of indirect prompt injection attacks for OpenAI-based models.

Ultra-fast, low latency LLM security solution for prompt injection and jailbreak detection.