
JudgeDeceiver
A GitHub repository for optimization-based prompt injection attacks against LLM-as-a-judge systems.

Fixed Input Parameterization
The official code for the paper "Prompt Injection: Parameterization of Fixed Inputs".

InjecGuard
The official implementation of InjecGuard, a tool for benchmarking and mitigating over-defense in prompt injection guardrail models.

SecAlign
The repository for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization".

Tensor Trust
A prompt injection game to collect data for robust ML research.

prompt-injection-defenses
A catalog of every practical and proposed defense against prompt injection.

Universal and Transferable Adversarial Attacks on Aligned Language Models
This paper introduces methods for generating universal, transferable adversarial attacks on aligned language models, informing research on LLM security.

AIPromptJailbreakPractice
A GitHub repository documenting AI prompt jailbreak practices.

DeepWiki
A platform for exploring and understanding code repositories related to software development and AI.

SimilarLabs
SimilarLabs helps you discover, compare, and choose the best AI products and tools.