AI/ML & LLM Security & Vulnerability List (2023-2024)
This project serves as a comprehensive resource for understanding the security risks and vulnerabilities associated with Artificial Intelligence (AI), Machine Learning (ML), and Large Language Models (LLMs). It highlights the differences between the security threats posed by traditional ML systems and those arising in the more complex environments of LLMs.
Key Features:
- Vulnerability List: Documentation of AI security and Vulnerability Assessment and Penetration Testing (VAPT) findings, specifically focused on ML and LLM risks for 2023 and 2024.
- Structured Overview: Organized information intended for security analysts, making it easier to navigate the various risks and vulnerabilities.
- Community Contributions: Open for contributions, allowing experts to add new attack types, documentation, and defense strategies.
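To make the documented attack types concrete, the sketch below shows one of the simplest LLM-specific risks in this category: prompt injection. This is a minimal, hypothetical heuristic for flagging suspicious user input, not part of this project and not an authoritative detection rule; the marker phrases are illustrative assumptions only.

```python
# Hypothetical sketch: a naive keyword heuristic for spotting
# prompt-injection attempts, one LLM attack type this list documents.
# Real defenses require far more than string matching; these marker
# phrases are illustrative assumptions, not a vetted signature set.

INJECTION_MARKERS = [
    "ignore previous instructions",
    "disregard the system prompt",
    "you are now",
]

def flag_prompt_injection(user_input: str) -> bool:
    """Return True if the input contains a known injection marker phrase."""
    lowered = user_input.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)
```

A check like this illustrates why LLM attack surfaces differ from traditional ML: the attack payload is natural language, so simple filters are easy to evade and serve only as a first, coarse signal.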
Benefits:
- Stay Updated: Regular updates to ensure that you understand the latest risks and defenses associated with AI technologies.
- Enhanced Understanding: Provides foundational resources to comprehend security concepts and attack surfaces in AI systems.
- Real-World Applications: Draws connections between theoretical risks and practical implications in security analysis.
By using this resource, security professionals and researchers can gain a deeper understanding of the evolving landscape of AI security.