Adversarial Reinforcement Learning
This repository hosts a curated reading list on adversarial perspectives in deep reinforcement learning (DRL). The readings cover key topics such as adversarial attacks on DRL policies, adversarial training techniques, and methods for robust decision-making under adversarial conditions.
Key Features
- A collection of important research papers and resources on adversarial reinforcement learning.
- Exploration of adversarial attacks and their implications for DRL policies (a minimal attack sketch follows this list).
- Insights into robust decision-making through adversarial training and state detection algorithms.
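As a minimal illustration of the kind of observation-space attack studied in these papers, the sketch below applies an FGSM-style perturbation to a policy's input. The policy architecture, observation size, and epsilon are hypothetical placeholders for illustration only, not taken from any specific paper in this list.

```python
# Minimal sketch: FGSM-style perturbation of a DRL policy's observation.
# The policy network, observation dimension, and epsilon are illustrative
# assumptions, not drawn from any particular paper in this reading list.
import torch
import torch.nn as nn


class TinyPolicy(nn.Module):
    """Toy policy mapping a flat observation to action logits."""

    def __init__(self, obs_dim: int = 8, n_actions: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def fgsm_observation_attack(
    policy: nn.Module, obs: torch.Tensor, epsilon: float = 0.05
) -> torch.Tensor:
    """Perturb `obs` to reduce the policy's confidence in its chosen action."""
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy(obs)
    action = logits.argmax(dim=-1)
    # Loss: log-probability of the action the policy currently prefers.
    loss = torch.log_softmax(logits, dim=-1).gather(-1, action.unsqueeze(-1)).sum()
    loss.backward()
    # Step against the gradient to push the observation away from that action.
    adv_obs = obs - epsilon * obs.grad.sign()
    return adv_obs.detach()


if __name__ == "__main__":
    policy = TinyPolicy()
    obs = torch.randn(1, 8)
    adv_obs = fgsm_observation_attack(policy, obs)
    print("clean action:", policy(obs).argmax(-1).item())
    print("adversarial action:", policy(adv_obs).argmax(-1).item())
```

With a larger epsilon or a trained policy, the perturbed observation can flip the selected action; many of the listed papers study exactly this failure mode and corresponding defenses such as adversarial training.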
Benefits
- Provides AI researchers and practitioners with a comprehensive set of references for deepening their understanding of adversarial threats and defenses in DRL.
- Helps identify and study vulnerabilities in existing DRL frameworks.
- Aids the development of more robust and efficient DRL systems by guiding readers through key research findings.
Highlights
- Includes key papers from leading conferences such as ICLR, ICML, and AAAI, spanning 2017 to 2025.
- Covers both foundational work and cutting-edge advances in adversarial reinforcement learning.



