All-in-one App that Checks LLM prompts for Injection, Data Leaks and Malicious URLs.
A GitHub repository for developing adversarial attack techniques using injection prompts.
A GitHub repository for prompt attack-defense, prompt injection, and reverse engineering notes and examples.
A unified evaluation framework for large language models.
A comprehensive repository of 1000+ GPTs categorized into 10 categories with 80+ leaked prompts.
A GitHub repository for sharing leaked GPT prompts and tools.
List of free GPTs that doesn’t require plus subscription.
A GitHub repository containing leaked prompts from top-performing GPT models for development and modification.
Leaked GPTs Prompts Bypass the 25 message limit or to try out GPTs without a Plus subscription.
A repository of the top 100 prompts on GPTStore for learning and improving prompt engineering.
A collection of prompts for GPTS, categorized for various applications like writing, marketing, and education.
A repository collecting leaked prompts of various GPT models for analysis and development.