LogoAISecKit
icon of Vigil

Vigil

Vigil is a security scanner for detecting prompt injections and other risks in Large Language Model inputs.

Introduction

Vigil

⚡ Vigil ⚡ is a Python library and REST API designed to assess Large Language Model (LLM) prompts and responses for potential threats such as prompt injections and jailbreaks. Here are some key features and highlights:

Key Features
  • Modular Scanners: Utilize various scanners like YARA signatures, transformers, and canary tokens for comprehensive analysis.
  • Self-hosting Capabilities: Get started easily with the provided detection signatures and datasets for self-hosting implementation.
  • Integration Options: Run Vigil as a REST API server or integrate it directly into your Python applications.
  • Sample Scan Outputs: Review detailed scan results including matched vulnerabilities and potential prompt injections.
Benefits
  • Enhanced Security: By detecting known vulnerabilities in LLM prompts, Vigil helps in strengthening security against unsolved prompt injection attacks.
  • Research and Development: Currently in its alpha state, it serves as a tool for researchers and developers to build upon prompt security frameworks.
  • Community Insights: Stay informed about ongoing research and implement recommendations from experts in the field.
Highlights
  • Supports both local and OpenAI embeddings for flexibility.
  • Sample workflows and documented APIs make integration straightforward.
  • Continuous updates and community input are encouraged to enhance effectiveness and user support.

Vigil is essential for anyone looking to improve the security posture of LLM applications by actively monitoring and detecting risks associated with prompt inputs.

Newsletter

Join the Community

Subscribe to our newsletter for the latest news and updates