
A framework for testing vulnerabilities of large language models (LLMs).

Breaker AI is a CLI tool that detects prompt injection risks and vulnerabilities in AI prompts.

An educational toolkit that demonstrates security vulnerabilities in MCP frameworks through various attack vectors.

Targeted Adversarial Examples on Speech-to-Text systems.

A CLI that provides a generic automation layer for assessing the security of ML models.

A Python library for securing machine learning models against adversarial threats.

AgentFence is an open-source platform for automatically testing AI agent security, identifying vulnerabilities like prompt injection and secret leakage.

A repository for optimization-based prompt injection attacks on LLM-as-a-judge systems.

A repository for benchmarking prompt injection attacks against AI models like GPT-4 and Gemini.

Guard your LangChain applications against prompt injection with Lakera ChainGuard.