Implementation of the PromptCARE framework for watermark injection and verification for copyright protection.
Official implementation of StruQ, which defends against prompt injection attacks using structured queries.
A writeup for the Gandalf prompt injection game.
The official implementation of a pre-print paper on prompt injection attacks against large language models.
A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks.
A GitHub repository containing resources on prompt attack-defense and reverse engineering techniques.
Explore ChatGPT jailbreaks, prompt leaks, injection techniques, and tools focused on LLM security and prompt engineering.
A collection of GPT system prompts and various prompt injection/leaking knowledge.
A comprehensive overview of prompt injection vulnerabilities and potential solutions in AI applications.
Mush Audit is an AI-powered smart contract security analysis platform utilizing multiple AI models for thorough blockchain audits.
A GitHub repository containing system prompts, tools, and AI models for various applications.
Awesome curated collection of GPT-4o images & prompts. Explore diverse AI-generated art styles from OpenAI's latest model.