A playground of experimental prompts, tools & scripts for machine intelligence models.
This project investigates the security of large language models by classifying input prompts to discover malicious ones.
Save your precious prompt from leaking with minimal cost.
A GitHub repository focused on security prompts and code correctness for AI applications.
A system prompt to prevent prompt leakage and adversarial attacks in GPTs.
A GitHub repository for developing adversarial attack techniques using injection prompts.
A GitHub repository for prompt attack-defense, prompt injection, and reverse engineering notes and examples.
Learn about a type of vulnerability that specifically targets machine learning models.
A curated list of prompt engineer commands for exploiting chatbot vulnerabilities.
A unified evaluation framework for large language models.
A dataset containing embeddings for jailbreak prompts used to assess LLM vulnerabilities.