Curated AI security tools & LLM safety resources for cybersecurity professionals
Subscribe to our newsletter for the latest news and updates
An introductory class on understanding AI security risks and mitigation strategies.
The most comprehensive prompt hacking course available, focusing on prompt engineering and security.
A benchmark for prompt injection detection systems, providing a neutral way to evaluate their performance.
This repository provides a benchmark for prompt Injection attacks and defenses.
Every practical and proposed defense against prompt injection.
An open-source toolkit for monitoring Large Language Models (LLMs) with features like text quality and sentiment analysis.
The Security Toolkit for LLM Interactions, ensuring safe and secure interactions with Large Language Models.
Benchmark various LLM Structured Output frameworks on tasks like multi-label classification and named entity recognition.
AI Red Teaming playground labs to run AI Red Teaming trainings including infrastructure.
Open-source LLM Prompt-Injection and Jailbreaking Playground for testing LLM security vulnerabilities.
Code scanner to check for issues in prompts and LLM calls
Open-source tool for decompiling binary code into C using large language models.