GitHub repository for optimization-based prompt injection attacks on LLMs as judges.
This repository contains the official code for the paper on prompt injection and parameterization.
The official implementation of InjecGuard, a tool for benchmarking and mitigating over-defense in prompt injection guardrail models.
Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization"
A prompt injection game to collect data for robust ML research.
Every practical and proposed defense against prompt injection.
This paper discusses new methods for generating transferable adversarial attacks on aligned language models, improving LLM security.
AIPromptJailbreakPractice is a GitHub repository documenting AI prompt jailbreak practices.
A platform for exploring and understanding various repositories related to software development and AI.
SimilarLabs helps you discover, compare and choose the best AI products and tools.