LogoAISecKit
icon of WideOpenAI

WideOpenAI

Short list of indirect prompt injection attacks for OpenAI-based models.

Introduction

WideOpenAI

WideOpenAI is a repository that provides a list of indirect prompt injection attacks for OpenAI-based models. It serves as a resource for educational purposes with a focus on demonstrating jailbreak prompts using various query syntaxes like SQL and Splunk.

Key Features:

  • Educational Resource: Primarily designed for educational use to illustrate vulnerabilities in LLMs (Large Language Models).
  • Indirect Prompt Injection Attacks: Includes a varied list of query-based attacks that can induce LLMs to bypass standard ethical guidelines.
  • Testing Insights: Shares effective querying tips and detailed examples based on extensive testing with tools like Azure OpenAI and Microsoft Copilot.

Benefits:

  • Awareness and Understanding: Aims to inform users and developers about potential risks and behaviors of OpenAI-based models under specific querying conditions.
  • Community Contribution: Inspired by initiatives from the AI security community, fostering ongoing development and study.

Highlights:

  • The repository is aptly named "WideOpenAI" to reflect its focus on exposing potential attack vectors on OpenAI APIs.
  • Users can experiment with creating their own prompts and learning from documented examples and configurations.

Information

  • Publisher
    AISecKit
  • Websitegithub.com
  • Published date2025/04/27

Newsletter

Join the Community

Subscribe to our newsletter for the latest news and updates