Category
Explore by categories

Vulnerability ScannersAI Security MonitoringPrompt Injection Defense
promptmap
Details
A prompt injection scanner for custom LLM applications.

AI ModelsAI Security MonitoringPrompt Injection Defense
vigil-jailbreak-ada-002
Details
A dataset containing embeddings for jailbreak prompts used to assess LLM vulnerabilities.

AI ModelsAI Application PlatformsAI Security Monitoring
rubend18/ChatGPT-Jailbreak-Prompts
Details
A dataset of jailbreak-related prompts for ChatGPT, aiding in understanding and generating text in this context.

Model Backdoor DefenseAI Security MonitoringJailbreak Prevention
Awesome-Jailbreak-on-LLMs
Details
A collection of state-of-the-art jailbreak methods for LLMs, including papers, codes, datasets, and analyses.

Penetration TestingVulnerability ScannersAI Security Monitoring
FuzzyAI
Details
A powerful tool for automated LLM fuzzing to identify and mitigate potential jailbreaks in LLM APIs.
