Category
Explore by categories

Awesome_GPT_Super_Prompting
Explore ChatGPT jailbreaks, prompt leaks, injection techniques, and tools focused on LLM security and prompt engineering.

chatgpt_system_prompt
A collection of GPT system prompts and various prompt injection/leaking knowledge.

Universal and Transferable Adversarial Attacks on Aligned Language Models
This paper discusses new methods for generating transferable adversarial attacks on aligned language models, improving LLM security.

Prompt Injection Cheat Sheet
Explore Prompt Injection Attacks on AI Tools such as ChatGPT with techniques and mitigation strategies.

Simon Willison’s Weblog
A blog discussing prompt injection vulnerabilities in large language models (LLMs) and their implications.

Embrace The Red
Explores security vulnerabilities in ChatGPT plugins, focusing on data exfiltration through markdown injections.

Learn Prompting
A resource for understanding prompt injection vulnerabilities in AI, including techniques and real-world examples.

AIShield Watchtower
Open-source tool by AIShield for AI model insights and vulnerability scans, securing the AI supply chain.

AI Security Toolkit
A plug-and-play AI red teaming toolkit to simulate adversarial attacks on machine learning models.

MCP-Security-Checklist
A comprehensive security checklist for MCP-based AI tools to safeguard LLM plugin ecosystems.

JailBench
JailBench is a comprehensive Chinese dataset for assessing jailbreak attack risks in large language models.

AIPromptJailbreakPractice
AIPromptJailbreakPractice is a GitHub repository documenting AI prompt jailbreak practices.
