Tag
Explore by tags

Folly
Open-source LLM Prompt-Injection and Jailbreaking Playground for evaluating LLM security vulnerabilities.

AIAnytime/Prompt-Injection-Prevention
GitHub repository for techniques to prevent prompt injection in AI chatbots using LLMs.

ai-prompt-ctf
A Agentic LLM CTF to test prompt injection attacks and preventions.

aiapwn
Automatic Prompt Injection testing tool that automates the detection of prompt injection vulnerabilities in AI agents.

Prompt Injection Playground
A GitHub repository for testing prompt injection techniques and developing defenses against them.

PromptDefender
A multi-layer defence to protect applications against prompt injection attacks.

Virtual Prompt Injection
Unofficial implementation of backdooring instruction-tuned LLMs using virtual prompt injection.

JudgeDeceiver
GitHub repository for optimization-based prompt injection attacks on LLMs as judges.

LLM NeuralExec
Code to generate NeuralExecs for prompt injection attacks tailored for LLMs.

PromptInjectionBench
A repository for benchmarking prompt injection attacks against AI models like GPT-4 and Gemini.

