The Best Your Ultimate AI Security Toolkit
Curated AI security tools & LLM safety resources for cybersecurity professionals
Curated AI security tools & LLM safety resources for cybersecurity professionals
This repository provides updates on the status of jailbreaking the OpenAI GPT language model.
ChatGPT DAN is a GitHub repository for jailbreak prompts that allow ChatGPT to bypass restrictions.
A collection of state-of-the-art jailbreak methods for LLMs, including papers, codes, datasets, and analyses.
A powerful tool for automated LLM fuzzing to identify and mitigate potential jailbreaks in LLM APIs.
A dataset of 15,140 ChatGPT prompts, including 1,405 jailbreak prompts, collected from various platforms for research purposes.
A collection of harmless liberation prompts designed for AI models.
A Reddit community focused on sharing and discussing jailbreak techniques for ChatGPT models.
A comprehensive guide on prompt engineering and jailbreak techniques for AI models.
A GitHub repository for contributing to the development of jailbreaks for AI models.
A Reddit community focused on sharing and discussing jailbreak techniques for AI models like ChatGPT.