promptmap
promptmap is a vulnerability scanning tool designed to automatically test prompt injection attacks on custom LLM applications. It combines static and dynamic analysis to effectively detect potential vulnerabilities in LLM systems by analyzing prompts and sending malicious inputs to assess their responses.
Key Features:
- Multi-model Support: Compatible with OpenAI, Anthropic, and local models via Ollama.
- Customizable Test Rules: Users can create and modify test rules in YAML format to suit specific needs.
- Iterative Testing: Ability to specify the number of test iterations to uncover vulnerabilities that may only appear after multiple attempts.
- JSON Output: Comprehensive results are saved in JSON format for easier analysis.
- Firewall Testing Mode: Assess the effectiveness of firewall LLMs in combating malicious prompts.
Benefits:
- Enhances the security of LLM applications by proactively identifying vulnerabilities.
- Saves time and resources by automating the vulnerability testing process.
- Increases the robustness of AI applications against prompt injection attacks through continuous testing and adaptation.
Highlights:
- Originally released in 2022, promptmap has been completely rewritten in 2025 with enhanced features.
- It is open source and licensed under the GPL-3.0 License, encouraging community contributions and collaboration.


