This GitHub project investigates the security of large language models (LLMs), with a primary emphasis on prompt injection attacks: prompts are classified to detect malicious injections. The study involves: