
A Python library designed to enhance machine learning security against adversarial threats.

An adversarial example library for constructing attacks, building defenses, and benchmarking both.

AgentFence is an open-source platform for automatically testing AI agent security, identifying vulnerabilities like prompt injection and secret leakage.

sqlmap is a powerful tool for detecting and exploiting SQL injection flaws in web applications.

GitHub repository for techniques to prevent prompt injection in AI chatbots using LLMs.

A Agentic LLM CTF to test prompt injection attacks and preventions.

Automatic Prompt Injection testing tool that automates the detection of prompt injection vulnerabilities in AI agents.

Unofficial implementation of backdooring instruction-tuned LLMs using virtual prompt injection.

GitHub repository for optimization-based prompt injection attacks on LLMs as judges.

Discover the leaked system instructions and prompts for ChatGPT's custom GPT plugins.

Easy to use LLM Prompt Injection Detection / Detector Python Package.