A prompt injection scanner for custom LLM applications.
A guide for understanding and mitigating prompt attacks on large language models.
A LLM CTF Challenge designed to teach prompt injection in multi-chain LLM applications.
PFI is a system designed to prevent privilege escalation in LLM agents by enforcing trust and tracking data flow.
Prompt Injections Everywhere: A GitHub repository providing techniques for prompt injection attacks.
A curated list of useful resources that cover Offensive AI.
Breaker AI is an open-source CLI tool for security checks on LLM prompts.
Breaker AI is a CLI tool that detects prompt injection risks and vulnerabilities in AI prompts.
Framework for testing vulnerabilities of large language models (LLM).