Category
Explore by categories

promptmap
A prompt injection scanner for custom LLM applications.

LLMPromptAttackGuide
A guide for understanding and mitigating prompt attacks on large language models.

PFI
PFI is a system designed to prevent privilege escalation in LLM agents by enforcing trust and tracking data flow.

Breaker AI
Breaker AI is an open-source CLI tool for security checks on LLM prompts.

Breaker AI
Breaker AI is a CLI tool that detects prompt injection risks and vulnerabilities in AI prompts.

ASCII Smuggling Hidden Prompt Injection
A novel approach to hacking AI assistants using Unicode Tags to bypass security measures in large language models.

prompt_injection_research
This research proposes defense strategies against prompt injection in large language models to improve their robustness and security against unwanted outputs.

AIAnytime/Prompt-Injection-Prevention
GitHub repository for techniques to prevent prompt injection in AI chatbots using LLMs.

ai-prompt-ctf
A Agentic LLM CTF to test prompt injection attacks and preventions.

Prompt Injection Playground
A GitHub repository for testing prompt injection techniques and developing defenses against them.

PromptDefender
A multi-layer defence to protect applications against prompt injection attacks.