Your Ultimate AI Security Toolkit
Curated AI security tools & LLM safety resources for cybersecurity professionals
Research proposing defense strategies against prompt injection in large language models, improving their robustness and security against unwanted outputs.
sqlmap is a powerful tool for detecting and exploiting SQL injection flaws in web applications.
A controllable SONAR image generation framework utilizing text-to-image diffusion and GPT prompting for enhanced diversity and realism.
Open-source LLM Prompt-Injection and Jailbreaking Playground for evaluating LLM security vulnerabilities.
GitHub repository of techniques for preventing prompt injection in LLM-based AI chatbots.
An agentic LLM CTF for testing prompt injection attacks and defenses.
A testing tool that automates the detection of prompt injection vulnerabilities in AI agents.
A GitHub repository for testing prompt injection techniques and developing defenses against them.
A multi-layer defense to protect applications against prompt injection attacks.
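A multi-layer defense of this kind can be sketched in a few lines. The sketch below is a hypothetical illustration, not the implementation of any tool listed here: layer 1 filters user input against known injection phrasings, layer 2 wraps untrusted content in delimiters so the model can distinguish data from instructions, and layer 3 screens model output for leaked system-prompt content. The pattern list and delimiter tags are illustrative assumptions.

```python
import re

# Illustrative patterns only; a real deployment would use a broader,
# maintained list (or a classifier) rather than a few regexes.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
]

def filter_input(user_text: str) -> bool:
    """Layer 1: reject input matching known injection phrasings."""
    lowered = user_text.lower()
    return not any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

def wrap_input(user_text: str) -> str:
    """Layer 2: delimit untrusted content so instructions and data stay separate."""
    return f"<untrusted_input>\n{user_text}\n</untrusted_input>"

def screen_output(model_output: str, system_prompt: str) -> bool:
    """Layer 3: block responses that echo the confidential system prompt."""
    return system_prompt not in model_output
```

Each layer is independently bypassable, which is why tools in this space combine several: input filtering catches obvious attacks cheaply, delimiting raises the bar for indirect injection, and output screening limits the damage when the first two layers fail.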
Unofficial implementation of backdooring instruction-tuned LLMs using virtual prompt injection.