This repository contains the official code for the paper on prompt injection and parameterization.
Implementation of the PromptCARE framework for watermark injection and verification for copyright protection.
Official implementation of StruQ, which defends against prompt injection attacks using structured queries.
The official implementation of InjecGuard, a tool for benchmarking and mitigating over-defense in prompt injection guardrail models.
Project Mantis is a tool designed to counter LLM-driven cyberattacks using prompt injection techniques.
A steganography tool for encoding images as prompt injections for AIs with vision capabilities.
A practical guide to LLM hacking covering fundamentals, prompt injection, offense, and defense.
The automated prompt injection framework for LLM-integrated applications.
LLM Prompt Injection Detector designed to protect AI applications from prompt injection attacks.
A collection of GPT system prompts and various prompt injection/leaking knowledge.
Mush Audit is an AI-powered smart contract security analysis platform utilizing multiple AI models for thorough blockchain audits.
Open-source tool by AIShield for AI model insights and vulnerability scans, securing the AI supply chain.