Overview
The audio_adversarial_examples repository provides code for generating targeted adversarial examples designed to deceive speech-to-text systems. It implements the attack described in the paper "Audio Adversarial Examples: Targeted Attacks on Speech-to-Text" by Nicholas Carlini and David Wagner.
Key Features
- Targeted Attacks: Generate audio that sounds nearly identical to the original but is transcribed by the target speech recognition system as an attacker-chosen phrase (see the sketch after this list).
- Compatibility: Works with TensorFlow 1.15.4 and DeepSpeech 0.9.3, with options for earlier versions.
- Docker Support: Easily deployable using Docker, with GPU support for enhanced performance.
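As a rough illustration of the attack loop, the sketch below optimizes an additive perturbation `delta` so that a model's CTC loss decreases toward a chosen target transcript, while an L2 penalty keeps the perturbation small. This is a minimal sketch only: `toy_logits` is a hypothetical stand-in for DeepSpeech's acoustic model, and the constants (frame size, loss weight, iteration count) are illustrative rather than the repository's actual settings.

```python
import numpy as np
import tensorflow.compat.v1 as tf  # TensorFlow 1.15.x API

tf.disable_eager_execution()

NUM_CLASSES = 29  # 26 letters + space + apostrophe + CTC blank

def toy_logits(audio):
    """Hypothetical stand-in for DeepSpeech: maps 10 ms frames to char logits."""
    frames = tf.reshape(audio, [1, -1, 160])       # 160 samples/frame at 16 kHz
    return tf.layers.dense(frames, NUM_CLASSES)    # [batch, time, classes]

original = np.zeros(16000, dtype=np.float32)       # 1 s of placeholder audio
target = [8, 9]                                    # encoded target transcript, e.g. "hi"

delta = tf.Variable(tf.zeros_like(original))       # the adversarial perturbation
adv = tf.clip_by_value(tf.constant(original) + delta, -1.0, 1.0)

logits = toy_logits(adv)
labels = tf.SparseTensor(
    indices=[[0, i] for i in range(len(target))],
    values=tf.constant(target, dtype=tf.int32),
    dense_shape=[1, len(target)])

# Core objective from the paper: drive the CTC loss toward the target phrase
# while keeping the perturbation small (here via a simple L2 penalty).
ctc = tf.nn.ctc_loss(labels, tf.transpose(logits, [1, 0, 2]),
                     sequence_length=[logits.shape[1].value])
loss = tf.reduce_mean(ctc) + 0.01 * tf.nn.l2_loss(delta)

# Only delta is optimized; the model's weights stay frozen during the attack.
step = tf.train.AdamOptimizer(0.01).minimize(loss, var_list=[delta])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(200):
        _, cur = sess.run([step, loss])
        if i % 50 == 0:
            print(f"iteration {i}: loss {cur:.3f}")
```

In the actual repository the model is DeepSpeech's frozen graph, and, as described in the paper, the perturbation is additionally held under an L-infinity bound that is tightened as the attack succeeds.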
Benefits
- Research Utility: Useful for researchers studying the robustness of speech-to-text systems against adversarial attacks.
- Practical Application: Provides a framework for testing the security of audio processing systems and informing defenses against adversarial inputs.
Highlights
- Instructions for generating adversarial examples are included, along with verification steps to confirm the attack succeeded (see the verification sketch below).
- The codebase has been updated over time (most recently for DeepSpeech 0.9.3) and includes contributions from multiple developers.
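To verify an attack locally, one can transcribe the generated file with the DeepSpeech 0.9.3 Python package and compare the output to the target phrase. A minimal sketch, assuming the standard 0.9.3 release model files and a hypothetical output file name `adversarial.wav`:

```python
import wave
import numpy as np
from deepspeech import Model  # pip install deepspeech==0.9.3

# Standard DeepSpeech 0.9.3 release artifacts.
ds = Model("deepspeech-0.9.3-models.pbmm")
ds.enableExternalScorer("deepspeech-0.9.3-models.scorer")

# "adversarial.wav" is a placeholder name for the attack's 16 kHz mono output.
with wave.open("adversarial.wav", "rb") as w:
    audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

print("Transcription:", ds.stt(audio))  # should match the chosen target phrase
```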