
Open-source tool by AIShield for AI model insights and vulnerability scans, securing the AI supply chain.

A comprehensive security checklist for MCP-based AI tools to safeguard LLM plugin ecosystems.

JailBench is a comprehensive Chinese dataset for assessing jailbreak attack risks in large language models.

The Python Risk Identification Tool for generative AI (PyRIT) helps identify risks in generative AI systems.

A resource page for OWASP's Top 10 for LLM & Generative AI Security.

An overview of the top 10 security issues in machine learning systems by OWASP.

Discover the OWASP Top 10 security risks for Large Language Models and Generative AI, with expert guidance and best practices.

A dataset containing embeddings for jailbreak prompts used to assess LLM vulnerabilities.