
All-in-one App that Checks LLM prompts for Injection, Data Leaks and Malicious URLs.

Save your precious prompt from leaking with minimal cost.

A system prompt to prevent prompt leakage and adversarial attacks in GPTs.

A GitHub repository for developing adversarial attack techniques using injection prompts.

A GitHub repository for prompt attack-defense, prompt injection, and reverse engineering notes and examples.

A repository for exploring prompt injection techniques and defenses.

Learn about a type of vulnerability that specifically targets machine learning models.

A curated list of prompt engineer commands for exploiting chatbot vulnerabilities.

A prompt injection scanner for custom LLM applications.

A dataset containing embeddings for jailbreak prompts used to assess LLM vulnerabilities.

A dataset of jailbreak-related prompts for ChatGPT, aiding in understanding and generating text in this context.