
All-in-one App that Checks LLM prompts for Injection, Data Leaks and Malicious URLs.

Save your precious prompt from leaking with minimal cost.

A GitHub repository focused on security prompts and code correctness for AI applications.

Protect your GPTs through secure prompts to prevent malicious data leaks.

A GitHub repository for developing adversarial attack techniques using injection prompts.

A unified evaluation framework for large language models.

A prompt injection scanner for custom LLM applications.

A GitHub repository for sharing leaked GPT prompts and tools.

List of free GPTs that doesn’t require plus subscription.

A GitHub repository containing leaked prompts from top-performing GPT models for development and modification.

Leaked GPTs Prompts Bypass the 25 message limit or to try out GPTs without a Plus subscription.

A repository of the top 100 prompts on GPTStore for learning and improving prompt engineering.