The Best Your Ultimate AI Security Toolkit
Curated AI security tools & LLM safety resources for cybersecurity professionals
Curated AI security tools & LLM safety resources for cybersecurity professionals
A comprehensive overview of prompt injection vulnerabilities and potential solutions in AI applications.
Explore Prompt Injection Attacks on AI Tools such as ChatGPT with techniques and mitigation strategies.
A blog discussing prompt injection vulnerabilities in large language models (LLMs) and their implications.
Explores security vulnerabilities in ChatGPT plugins, focusing on data exfiltration through markdown injections.
A resource for understanding prompt injection vulnerabilities in AI, including techniques and real-world examples.
Mush Audit is an AI-powered smart contract security analysis platform utilizing multiple AI models for thorough blockchain audits.
Open-source tool by AIShield for AI model insights and vulnerability scans, securing the AI supply chain.
Integration that connects BloodHound with AI through Model Context Protocol for analyzing Active Directory attack paths.
A plug-and-play AI red teaming toolkit to simulate adversarial attacks on machine learning models.
A comprehensive security checklist for MCP-based AI tools to safeguard LLM plugin ecosystems.
JailBench is a comprehensive Chinese dataset for assessing jailbreak attack risks in large language models.
AIPromptJailbreakPractice is a GitHub repository documenting AI prompt jailbreak practices.