The Best Your Ultimate AI Security Toolkit
Curated AI security tools & LLM safety resources for cybersecurity professionals
Curated AI security tools & LLM safety resources for cybersecurity professionals

Discover the leaked system instructions and prompts for ChatGPT's custom GPT plugins.

A repository for benchmarking prompt injection attacks against AI models like GPT-4 and Gemini.

A security advisory on Fermax Intercom DTML Injection vulnerability that allows unauthorized access through DTMF tones.

Guard your LangChain applications against prompt injection with Lakera ChainGuard.

A collection of prompt injection mitigation techniques.

Official GitHub repository assessing prompt injection risks in user-designed GPTs.

Easy to use LLM Prompt Injection Detection / Detector Python Package.

Application which investigates defensive measures against prompt injection attacks on LLMs, focusing on external tool exposure.

Short list of indirect prompt injection attacks for OpenAI-based models.

Manual Prompt Injection / Red Teaming Tool for large language models.

Fine-tuning base models to create robust task-specific models for better performance.

This repository contains the official code for the paper on prompt injection and parameterization.