LogoAISecKit
icon of vigil-jailbreak-ada-002

vigil-jailbreak-ada-002

A dataset containing embeddings for jailbreak prompts used to assess LLM vulnerabilities.

Introduction

Vigil Jailbreak ADA 002 Dataset

The Vigil Jailbreak ADA 002 dataset is a collection of text embeddings specifically designed for evaluating Large Language Model (LLM) prompts against various scanners. This dataset is part of the Vigil framework, which aims to detect prompt injections, jailbreaks, and other risky inputs that could compromise the integrity of AI models.

Key Features:
  • Embeddings: Contains text-embedding-ada-002 embeddings for all jailbreak prompts.
  • Utility: Includes a utility script (parquet2vdb.py) for loading embeddings into the Vigil chromadb instance.
  • Open Source: Contributes to the open-source community by providing tools for assessing AI model vulnerabilities.
Benefits:
  • Enhanced Security: Helps in identifying and mitigating risks associated with LLM prompts.
  • Research and Development: A valuable resource for researchers and developers working on AI safety and security.
  • Community Contribution: Supports the advancement of AI through open science and collaboration.
Highlights:
  • Framework Integration: Easily integrates with the Vigil framework for comprehensive LLM assessments.
  • Source of Prompts: Jailbreak prompts are sourced from a reputable GitHub repository, ensuring quality and relevance.

Newsletter

Join the Community

Subscribe to our newsletter for the latest news and updates