
A prompt injection scanner for custom LLM applications.

A guide for understanding and mitigating prompt attacks on large language models.

PFI is a system designed to prevent privilege escalation in LLM agents by enforcing trust and tracking data flow.

Breaker AI is an open-source CLI tool for security checks on LLM prompts.

Breaker AI is a CLI tool that detects prompt injection risks and vulnerabilities in AI prompts.

A novel approach to hacking AI assistants using Unicode Tags to bypass security measures in large language models.