Newsletter
Join the Community
Subscribe to our newsletter for the latest news and updates
A repository for benchmarking prompt injection attacks against AI models like GPT-4 and Gemini.

Nano Bananary is an AI batch image and video generator with 142 effects.

A topic-centric list of HQ open datasets for various fields and applications.
PromptInjectionBench is a comprehensive repository designed for analyzing prompt injection attacks on various AI models, including OpenAI's GPT-4 and Gemini Pro. The repository automates the benchmarking process using Python, allowing users to send prompts from the Hugging Face Jailbreak dataset to different language models, collecting and tabulating results systematically.
The project actively monitors changes in model behavior over time, providing insights into how different language models handle malicious prompts. Users can easily run their analysis by setting up the environment with required API keys and simply invoking a few commands.