LogoAISecKit
icon of Rebuff

Rebuff

LLM Prompt Injection Detector designed to protect AI applications from prompt injection attacks.

Introduction

Rebuff: LLM Prompt Injection Detector

Rebuff is an LLM Prompt Injection Detector developed to protect AI applications from prompt injection (PI) attacks through a multi-layered defense. It is designed as a prototype and offers several key features:

Key Features:
  • Prompt Injection Detection: Identifies potential prompt injection attacks using heuristics and LLM-based detection.
  • Canary Word Leak Detection: Adds canary tokens to prompts, enabling detection of leakage and enhancing security.
  • Self-hosting: Provides options for users to self-host the detector, offering control and flexibility in deployment.
  • Integration Support: Compatible with cloud services like OpenAI and Pinecone for seamless integration.
Benefits:
  • Multi-layered Defense: Four layers of defense to ensure enhanced security against prompt injection attacks.
  • Open Source: Contributions from the community are welcome, fostering collaboration and improvement.
  • User-defined Strategies: Customize detection strategies to meet specific needs.
Highlights:
  • Written in TypeScript and Python for compatibility across platforms.
  • Supports flexible deployment options, allowing users to tailor their experience based on requirements.
  • Encourages community involvement through contributions and feedback.

Newsletter

Join the Community

Subscribe to our newsletter for the latest news and updates