Official implementation of StruQ, which defends against prompt injection attacks using structured queries.
This project investigates the security of large language models by classifying prompts to discover malicious injections.
The official implementation of a pre-print paper on prompt injection attacks against large language models.
A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks.
A practical guide to LLM hacking covering fundamentals, prompt injection, offense, and defense.
A prompt injection scanner for custom LLM applications that tests vulnerabilities in LLM systems.
LLM Prompt Injection Detector designed to protect AI applications from prompt injection attacks.
This paper discusses new methods for generating transferable adversarial attacks on aligned language models, improving LLM security.
Context7 provides up-to-date documentation and resources for LLMs and AI code editors.
Discover, download, and run local LLMs like Llama and DeepSeek on your computer easily.