A comprehensive overview of prompt injection vulnerabilities and potential solutions in AI applications.
Explore Prompt Injection Attacks on AI Tools such as ChatGPT with techniques and mitigation strategies.
A blog discussing prompt injection vulnerabilities in large language models (LLMs) and their implications.
Explores security vulnerabilities in ChatGPT plugins, focusing on data exfiltration through markdown injections.
A resource for understanding prompt injection vulnerabilities in AI, including techniques and real-world examples.
AIPromptJailbreakPractice is a GitHub repository documenting AI prompt jailbreak practices.
This project investigates the security of large language models by classifying input prompts to discover malicious ones.
Save your precious prompt from leaking with minimal cost.
Protect your GPTs through secure prompts to prevent malicious data leaks.
A prompt injection scanner for custom LLM applications.
A dataset containing embeddings for jailbreak prompts used to assess LLM vulnerabilities.