The automated prompt injection framework for LLM-integrated applications.
Explores security vulnerabilities in ChatGPT plugins, focusing on data exfiltration through markdown injections.
Aims to educate about security risks in deploying Large Language Models (LLMs).
This project investigates the security of large language models by classifying input prompts to discover malicious ones.
All-in-one App that Checks LLM prompts for Injection, Data Leaks and Malicious URLs.
A powerful tool for automated LLM fuzzing to identify and mitigate potential jailbreaks in LLM APIs.