Breaker AI
Breaker AI is an open-source CLI tool designed to proactively stress-test AI prompts for security vulnerabilities. With the rise of AI, ensuring the security of language models (LLMs) is paramount. Breaker AI helps security teams, developers, and AI researchers validate and harden their prompts against possible exploitation.
Key Features
- Jailbreak Resistance Testing: Automatically runs your system prompts against real-world jailbreak attempts to evaluate vulnerabilities.
- Prompt Injection Detection: Scans prompts for risky patterns, unsafe structures, and potential manipulation of user input.
- Customizable Rules & Thresholds: Users can set minimum expected scores and define their own security criteria.
- Clear Reports: Outputs results in human-readable tables or machine-readable JSON, making it easy to understand and analyze.
- Utility Functions: Includes masking sensitive words in text or files to enhance security.
Benefits
- Proactive Security: Identify and mitigate potential threats before they can be exploited.
- Ease of Use: Install globally via npm or use instantly without installation.
- Community-Driven Development: Open for contributions and feature requests, ensuring continuous improvement based on real-world needs.
Highlights
- Supports testing against various AI models and configurations (OpenAI/Claude/Mistral).
- Enables CI/CD integration for automated security checks in workflows.
- Committed to open-source principles with a focus on continuous development and user feedback.