
An automated fuzzing tool for LLMs, designed to identify and help mitigate potential jailbreaks in LLM APIs.

A dataset of 15,140 ChatGPT prompts, including 1,405 jailbreak prompts, collected from various platforms for research purposes.

A Reddit community dedicated to sharing and discussing jailbreak techniques for ChatGPT models.

A comprehensive guide on prompt engineering and jailbreak techniques for AI models.

A GitHub repository for collaborative development of jailbreaks for AI models.