A LLM CTF Challenge designed to teach prompt injection in multi-chain LLM applications.
PFI is a system designed to prevent privilege escalation in LLM agents by enforcing trust and tracking data flow.
Prompt Injections Everywhere: A GitHub repository providing techniques for prompt injection attacks.
A curated list of useful resources that cover Offensive AI.
Breaker AI is an open-source CLI tool for security checks on LLM prompts.
Breaker AI is a CLI tool that detects prompt injection risks and vulnerabilities in AI prompts.
Framework for testing vulnerabilities of large language models (LLM).
AgentFence is an open-source platform for automatically testing AI agent security, identifying vulnerabilities like prompt injection and secret leakage.
A novel approach to hacking AI assistants using Unicode Tags to bypass security measures in large language models.
This research proposes defense strategies against prompt injection in large language models to improve their robustness and security against unwanted outputs.