A prompt injection scanner for custom LLM applications.
Chinese safety prompts for evaluating and improving the safety of LLMs.
SecGPT is an execution-isolation architecture for securing LLM applications against various types of attacks.
Learn AI security through a series of vulnerable LLM CTF challenges. No sign-ups, no cloud fees; run everything locally on your system.
Open-source LLM prompt-injection and jailbreaking playground for evaluating LLM security vulnerabilities.
A multi-layer defense to protect applications against prompt injection attacks.
An application that investigates defensive measures against prompt injection attacks on LLMs, focusing on external tool exposure.
Repository for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization".
This repository provides a benchmark for prompt injection attacks and defenses.
An automated prompt-injection framework for LLM-integrated applications.
Explores security vulnerabilities in ChatGPT plugins, focusing on data exfiltration through markdown injections.