Explore by categories

AI Models, Model Backdoor Defense, Security Research
Virtual Prompt Injection
An unofficial implementation of backdoor attacks on instruction-tuned LLMs via virtual prompt injection.

Model Backdoor Defense, DevSecOps Tools, AI Security Monitoring
Protect AI
Protect AI maintains a range of open-source tools for securing machine learning and AI applications.

Model Backdoor Defense, AI Security Monitoring, Prompt Injection Defense
llm-security-prompt-injection
This project investigates the security of large language models by classifying input prompts to detect malicious ones.

Model Backdoor Defense, AI Security Monitoring, Prompt Injection Defense
PromptSafe
Protects your prompts from leaking, at minimal cost.

AI Models, AI Application Platforms, Model Backdoor Defense
Adversarial Attack Injection Prompt
A GitHub repository for developing adversarial attack techniques based on injection prompts.

Model Backdoor Defense, AI Security Monitoring, Jailbreak Prevention
Awesome-Jailbreak-on-LLMs
A collection of state-of-the-art jailbreak methods for LLMs, including papers, code, datasets, and analyses.