Breaker AI is an open-source CLI tool for security checks on LLM prompts.
AIHTTPAnalyzer enhances web application security testing by integrating AI capabilities into Burp Suite.
Breaker AI is a CLI tool that detects prompt injection risks and vulnerabilities in AI prompts.
Targeted Adversarial Examples on Speech-to-Text systems.
A CLI that provides a generic automation layer for assessing the security of ML models.
A PyTorch adversarial library for attack and defense methods on images and graphs.
Advbox is a toolbox for generating adversarial examples to test the robustness of neural networks across various frameworks.
A Python toolbox for adversarial robustness research, implemented in PyTorch.
TextAttack is a Python framework for adversarial attacks, data augmentation, and model training in NLP.
A Python library designed to enhance machine learning security against adversarial threats.
An adversarial example library for constructing attacks, building defenses, and benchmarking both.
AgentFence is an open-source platform for automatically testing AI agent security, identifying vulnerabilities like prompt injection and secret leakage.