This repository provides a benchmark for prompt injection attacks and defenses.
An open-source toolkit for monitoring Large Language Models (LLMs) with features like text quality and sentiment analysis.
The Security Toolkit for LLM Interactions, ensuring safe and secure use of Large Language Models.
Benchmarks various LLM structured-output frameworks on tasks such as multi-label classification and named entity recognition.
An open-source prompt-injection and jailbreaking playground for testing LLM security vulnerabilities.
Open-source tool for decompiling binary code into C using large language models.
Cybersecurity AI (CAI) is an open, Bug Bounty-ready Artificial Intelligence framework for enhancing security operations.
TG-FF is a Telegram resource management tool that allows users to bypass media-saving restrictions and download protected content.
Learn AI and LLMs from scratch using free resources.
An AI red-teaming platform.
A simple and modular tool to evaluate and red-team any LLM application.
A visual Ansible web management panel providing batch host management, command execution, file transfer, and a web terminal.