A survey of every practical and proposed defense against prompt injection.
An exploration of prompt injection attacks on AI tools such as ChatGPT, covering techniques and mitigation strategies.
A blog discussing prompt injection vulnerabilities in large language models (LLMs) and their implications.
An open-source LLM vulnerability scanner for safe and reliable AI.
An AI prompt generator and optimizer that enhances prompt engineering for various AI applications.
A project that aims to educate about the security risks of deploying Large Language Models (LLMs).
A resource page for OWASP's Top 10 for LLM & Generative AI Security.
Discover the OWASP Top 10 security risks for Large Language Models and Generative AI, with expert guidance and best practices.
A project focused on advancing security for generative AI technologies through collaboration and guidelines.
A GitHub repository of notes and examples on prompt attack and defense, prompt injection, and reverse engineering.
A curated list of prompt engineering commands for exploiting chatbot vulnerabilities.