
A collection of examples for exploiting chatbot vulnerabilities using injections and encoding techniques.

Explore Prompt Injection Attacks on AI Tools such as ChatGPT with techniques and mitigation strategies.

A plug-and-play AI red teaming toolkit to simulate adversarial attacks on machine learning models.

the LLM vulnerability scanner that checks for weaknesses in large language models.

A list of useful payloads and bypass for Web Application Security and Pentest/CTF.

A blog featuring insights on offensive security, technical advisories, and research by Bishop Fox.

A repository for exploring prompt injection techniques and defenses.

A curated list of prompt engineer commands for exploiting chatbot vulnerabilities.

A powerful tool for automated LLM fuzzing to identify and mitigate potential jailbreaks in LLM APIs.

A comprehensive guide on prompt engineering and jailbreak techniques for AI models.