Curated AI security tools & LLM safety resources for cybersecurity professionals
Subscribe to our newsletter for the latest news and updates
Curated + custom prompt injections for AI models, focusing on security and exploit development.
Explore prompt injection techniques in large language models (LLMs), providing examples to improve LLM security and robustness.
An introductory class on understanding AI security risks and mitigation strategies.
The most comprehensive prompt hacking course available, focusing on prompt engineering and security.
A benchmark for prompt injection detection systems, providing a neutral way to evaluate their performance.
This repository provides a benchmark for prompt Injection attacks and defenses.
Every practical and proposed defense against prompt injection.
An open-source toolkit for monitoring Large Language Models (LLMs) with features like text quality and sentiment analysis.
The Security Toolkit for LLM Interactions, ensuring safe and secure interactions with Large Language Models.
Benchmark various LLM Structured Output frameworks on tasks like multi-label classification and named entity recognition.
AI Red Teaming playground labs to run AI Red Teaming trainings including infrastructure.
Open-source LLM Prompt-Injection and Jailbreaking Playground for testing LLM security vulnerabilities.