Tag
Explore by tags

AI ModelsSecurity ResearchAI Security Monitoring
JailBench
Details
JailBench is a comprehensive Chinese dataset for assessing jailbreak attack risks on large language models.

Vulnerability ScannersAI Security MonitoringPrompt Injection Defense
Breaker AI
Details
Breaker AI is an open-source CLI tool for security checks on LLM prompts.

DevSecOps ToolsAI Security MonitoringPrompt Injection Defense
Breaker AI
Details
Breaker AI is a CLI tool that detects prompt injection risks and vulnerabilities in AI prompts.

AI ModelsInput Validation & FilteringPrompt Injection Defense
PromptInjectionBench
Details
A repository for benchmarking prompt injection attacks against AI models like GPT-4 and Gemini.

AI Security MonitoringPrompt Injection Defense
WideOpenAI
Details
Short list of indirect prompt injection attacks for OpenAI-based models.

Input Validation & FilteringAI Security MonitoringPrompt Injection Defense
last_layer
Details
Ultra-fast, low latency LLM security solution for prompt injection and jailbreak detection.