
Open-source LLM Prompt-Injection and Jailbreaking Playground for evaluating LLM security vulnerabilities.

GitHub repository for techniques to prevent prompt injection in AI chatbots using LLMs.

A Agentic LLM CTF to test prompt injection attacks and preventions.

Automatic Prompt Injection testing tool that automates the detection of prompt injection vulnerabilities in AI agents.

A GitHub repository for testing prompt injection techniques and developing defenses against them.

A multi-layer defence to protect applications against prompt injection attacks.

Unofficial implementation of backdooring instruction-tuned LLMs using virtual prompt injection.

GitHub repository for optimization-based prompt injection attacks on LLMs as judges.

Code to generate NeuralExecs for prompt injection attacks tailored for LLMs.

A repository for benchmarking prompt injection attacks against AI models like GPT-4 and Gemini.