
An application that investigates defensive measures against prompt injection attacks on LLMs, focusing on external tool exposure.

Fine-tunes base models into robust, task-specific models for improved performance.

This repository contains the official code for the paper on prompt injection and parameterization.

Implementation of the PromptCARE framework for watermark injection and verification for copyright protection.

A writeup for the Gandalf prompt injection game.

Uses the ChatGPT model to filter out potentially dangerous user-supplied questions.
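The filtering approach that entry describes can be sketched as below; this is a minimal, hypothetical example, not the repository's actual code. The `ask_llm` callable, the prompt wording, and the SAFE/UNSAFE protocol are all assumptions — in practice `ask_llm` would wrap a chat-completion API call.

```python
def is_dangerous(question: str, ask_llm) -> bool:
    """Ask an LLM to classify a user-supplied question as safe or not.

    `ask_llm` is any callable that sends a prompt to a chat model and
    returns its text reply (e.g. a thin wrapper around a chat API).
    This signature is illustrative, not from the repository.
    """
    prompt = (
        "You are a safety filter. Reply with exactly one word, "
        "SAFE or UNSAFE, for the following user question.\n"
        f"Question: {question}"
    )
    reply = ask_llm(prompt).strip().upper()
    # Fail closed: anything other than an explicit SAFE verdict is rejected.
    return reply != "SAFE"


# Example with a stub in place of a real model call:
def stub_llm(prompt: str) -> str:
    return "UNSAFE" if "password" in prompt else "SAFE"
```

A design note: failing closed (treating any malformed reply as unsafe) matters here, because a prompt-injected question may try to make the model answer with something other than the expected verdict.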

Custom node for ComfyUI enabling specific prompt injections within Stable Diffusion UNet blocks.

A collection of GPT system prompts and various prompt injection/leaking knowledge.

A comprehensive overview of prompt injection vulnerabilities and potential solutions in AI applications.

Integration that connects BloodHound with AI through Model Context Protocol for analyzing Active Directory attack paths.

A GitHub repository containing system prompts, tools, and AI models for various applications.

Awesome curated collection of GPT-4o images & prompts. Explore diverse AI-generated art styles from OpenAI's latest model.