Category
Explore by categories

Vulnerability ScannersPrompt Injection Defense
promptmap
Details
A prompt injection scanner for custom LLM applications.

Security ResearchAI Security MonitoringPrompt Injection Defense
LLMPromptAttackGuide
Details
A guide for understanding and mitigating prompt attacks on large language models.

Security ResearchAI Security MonitoringPrompt Injection Defense
PFI
Details
PFI is a system designed to prevent privilege escalation in LLM agents by enforcing trust and tracking data flow.

Vulnerability ScannersAI Security MonitoringPrompt Injection Defense
Breaker AI
Details
Breaker AI is an open-source CLI tool for security checks on LLM prompts.

DevSecOps ToolsAI Security MonitoringPrompt Injection Defense
Breaker AI
Details
Breaker AI is a CLI tool that detects prompt injection risks and vulnerabilities in AI prompts.

Vulnerability DisclosureAI Security MonitoringPrompt Injection Defense
ASCII Smuggling Hidden Prompt Injection
Details
A novel approach to hacking AI assistants using Unicode Tags to bypass security measures in large language models.
