Prompt Injection Primer for Engineers—a comprehensive guide to understanding and mitigating prompt injection vulnerabilities.
This paper discusses new methods for generating transferable adversarial attacks on aligned language models, improving LLM security.
Mush Audit is an AI-powered smart contract security analysis platform utilizing multiple AI models for thorough blockchain audits.
AIPromptJailbreakPractice is a GitHub repository documenting AI prompt jailbreak practices.
A security verification tool by Vercel that checks browser settings for continued access.
A tool for optimizing prompts across various AI applications and security domains.
A comprehensive platform for AI and security tools, resources, and services.
the LLM vulnerability scanner that checks for weaknesses in large language models.
The Python Risk Identification Tool for generative AI (PyRIT) helps identify risks in generative AI systems.
A list of useful payloads and bypass for Web Application Security and Pentest/CTF.
A fictional airline challenge where users manipulate an AI chatbot to win a fictional airline ticket.
Protect AI focuses on securing machine learning and AI applications with various open-source tools.