Your Ultimate AI Security Toolkit
Curated AI security tools & LLM safety resources for cybersecurity professionals
A toolkit demonstrating security vulnerabilities in MCP frameworks through various attack vectors, for educational purposes.
A toolkit for generating targeted adversarial examples against speech-to-text systems.
A CLI that provides a generic automation layer for assessing the security of ML models.
A PyTorch adversarial library for attack and defense methods on images and graphs.
AdvBox is a toolbox for generating adversarial examples to test the robustness of neural networks across various frameworks.
A Python toolbox for adversarial robustness research, implemented in PyTorch.
TextAttack is a Python framework for adversarial attacks, data augmentation, and model training in NLP.
A Python library designed to enhance machine learning security against adversarial threats.
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX.
An adversarial example library for constructing attacks, building defenses, and benchmarking both.
AgentFence is an open-source platform for automatically testing AI agent security, identifying vulnerabilities like prompt injection and secret leakage.
A novel approach to attacking AI assistants that uses Unicode Tag characters to smuggle hidden instructions past security measures in large language models.