
A dataset of jailbreak-related prompts for ChatGPT, aiding in understanding and generating text in this context.

Dataset for classifying prompts as jailbreak or benign to enhance LLM safety.

A collection of state-of-the-art jailbreak methods for LLMs, including papers, codes, datasets, and analyses.

A powerful tool for automated LLM fuzzing to identify and mitigate potential jailbreaks in LLM APIs.

A dataset of 15,140 ChatGPT prompts, including 1,405 jailbreak prompts, collected from various platforms for research purposes.