Tag
Explore by tags

Lakera ChainGuard
Guard your LangChain applications against prompt injection with Lakera ChainGuard.

prompt-injection-mitigations
A collection of prompt injection mitigation techniques.

prompt-injection
Official GitHub repository assessing prompt injection risks in user-designed GPTs.

Pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package.

SpyLogic
Application which investigates defensive measures against prompt injection attacks on LLMs, focusing on external tool exposure.

WideOpenAI
Short list of indirect prompt injection attacks for OpenAI-based models.

deck-of-many-prompts
Manual Prompt Injection / Red Teaming Tool for large language models.

prompt-injection-defense
Fine-tuning base models to create robust task-specific models for better performance.

Fixed Input Parameterization
This repository contains the official code for the paper on prompt injection and parameterization.

StruQ
Official implementation of StruQ, which defends against prompt injection attacks using structured queries.

InjecGuard
The official implementation of InjecGuard, a tool for benchmarking and mitigating over-defense in prompt injection guardrail models.

gandalf-prompt-injection-writeup
A writeup for the Gandalf prompt injection game.