Category
Explore by categories

ASCII Smuggling Hidden Prompt Injection
A novel approach to hacking AI assistants using Unicode Tags to bypass security measures in large language models.

AIAnytime/Prompt-Injection-Prevention
GitHub repository for techniques to prevent prompt injection in AI chatbots using LLMs.

aiapwn
Automatic Prompt Injection testing tool that automates the detection of prompt injection vulnerabilities in AI agents.

Prompt Injection Playground
A GitHub repository for testing prompt injection techniques and developing defenses against them.

prompt-injection
Official GitHub repository assessing prompt injection risks in user-designed GPTs.

Pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package.

SpyLogic
Application which investigates defensive measures against prompt injection attacks on LLMs, focusing on external tool exposure.

WideOpenAI
Short list of indirect prompt injection attacks for OpenAI-based models.

deck-of-many-prompts
Manual Prompt Injection / Red Teaming Tool for large language models.

PromptCARE
Implementation of the PromptCARE framework for watermark injection and verification for copyright protection.

llm-security-prompt-injection
This project investigates the security of large language models by classifying prompts to discover malicious injections.
