Newsletter
Join the Community
Subscribe to our newsletter for the latest news and updates
Collection of prompt injections used in the Giskard Scanner.
A prompt injection scanner for custom LLM applications.
A guide for understanding and mitigating prompt attacks on large language models.
PFI is a system designed to prevent privilege escalation in LLM agents by enforcing trust and tracking data flow.
The Giskard-AI prompt injections repository provides a curated collection of prompt injections that can be utilized with the Giskard Scanner. This repository is particularly useful for developers and researchers working in AI security, offering a consolidated source of prompt injection techniques.