
Foolbox

A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX.

Introduction

Foolbox: A Python Toolbox for Adversarial Attacks

Foolbox is a library for creating adversarial examples that test the robustness of neural networks. Built on top of EagerPy, it provides a single, framework-agnostic implementation of each attack that works natively with models from PyTorch, TensorFlow, and JAX.
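To make the workflow concrete, the sketch below wraps a PyTorch model and runs a single attack against it. The torchvision ResNet-18, the ImageNet preprocessing values, and the 0.03 perturbation budget are illustrative assumptions rather than Foolbox defaults; TensorFlow and JAX models follow the same pattern through their respective model wrappers.

  # Minimal sketch of the Foolbox 3 workflow with a PyTorch model
  # (install with: pip install foolbox -- PyTorch itself is installed separately).
  import foolbox as fb
  import torchvision.models as models

  # Wrap an eval-mode PyTorch model; preprocessing moves ImageNet normalization
  # inside the wrapper, so attacks operate on raw images in [0, 1].
  model = models.resnet18(pretrained=True).eval()
  preprocessing = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], axis=-3)
  fmodel = fb.PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)

  # A small batch of sample images shipped with Foolbox, plus the clean accuracy as a baseline.
  images, labels = fb.utils.samples(fmodel, dataset="imagenet", batchsize=8)
  print("clean accuracy:", fb.utils.accuracy(fmodel, images, labels))

  # Run an L-infinity PGD attack at a single perturbation budget and report robust accuracy.
  attack = fb.attacks.LinfPGD()
  raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=0.03)
  print("robust accuracy:", 1 - is_adv.float().mean().item())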

Key Features:
  • Robustness Benchmarking: Run adversarial attacks effortlessly to evaluate the resilience of machine learning models.
  • State-of-the-Art Attacks: Includes numerous gradient-based and decision-based attacks to test various models effectively.
  • Compatibility: Works with multiple frameworks (PyTorch, TensorFlow, JAX) ensuring flexibility and easy integration.
  • High Performance: Rewritten natively on top of EagerPy, making it substantially faster than Foolbox 1 and 2.
  • Easy Installation: Install via pip; Foolbox does not pull in PyTorch, TensorFlow, or JAX itself, so you only install the framework you actually use.
Benefits:
  • Quickly benchmark the strength of your models against adversarial attacks (see the epsilon-sweep sketch after this list).
  • Explore a well-documented API and numerous examples for guidance.
  • Contribute to advancing adversarial methods with an active community.
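As a rough sketch of such a benchmark, a single attack call can evaluate several perturbation budgets at once and produce a robust-accuracy curve. It reuses the fmodel, images, and labels from the earlier PyTorch example, which is an assumption of this sketch.

  # Sketch of a robustness benchmark: sweep several L-infinity budgets in one attack call.
  import foolbox as fb

  epsilons = [0.0, 0.001, 0.01, 0.03, 0.1, 0.3]
  attack = fb.attacks.LinfPGD()
  _, _, success = attack(fmodel, images, labels, epsilons=epsilons)

  # success has shape (len(epsilons), batch); robust accuracy is the fraction of inputs not fooled.
  robust_accuracy = 1 - success.float().mean(dim=-1)
  for eps, acc in zip(epsilons, robust_accuracy):
      print(f"eps={eps:<6} robust accuracy={acc.item():.3f}")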

Use Foolbox for research and development in model robustness.
