
Protect AI focuses on securing machine learning and AI applications with various open-source tools.

A comprehensive platform for AI tools, security resources, and ethical guidelines.

A GitHub repository containing prompts for generating text related to cyber security using ChatGPT.

All-in-one App that Checks LLM prompts for Injection, Data Leaks and Malicious URLs.

Save your precious prompt from leaking with minimal cost.

A GitHub repository focused on security prompts and code correctness for AI applications.

A system prompt to prevent prompt leakage and adversarial attacks in GPTs.

Protect your GPTs through secure prompts to prevent malicious data leaks.

Learn about a type of vulnerability that specifically targets machine learning models.

A prompt injection scanner for custom LLM applications.

A dataset of jailbreak-related prompts for ChatGPT, aiding in understanding and generating text in this context.

Dataset for classifying prompts as jailbreak or benign to enhance LLM safety.