
JailBench is a comprehensive Chinese dataset for assessing jailbreak attack risks on large language models.

Secure and local AI on your desktop with a built-in RAG knowledge base and Markdown note support.

A guide for understanding and mitigating prompt attacks on large language models.

A GitHub repository containing system prompts, tools, and AI models for various applications.

SecGPT is an Execution Isolation Architecture for securing LLM applications against various types of attacks.

A security scanner for your LLM agentic workflows.

Open-source framework for evaluating and testing AI and LLM systems for performance, bias, and security issues.

Learn AI security through a series of vulnerable LLM CTF challenges. No sign ups, no cloud fees, run everything locally on your system.

A fun POC that is built to understand AI security agents.

An open-source vulnerability scanner for AI systems, focusing on safeguarding LLMs against various attacks.