Newsletter
Join the Community
Subscribe to our newsletter for the latest news and updates
This research proposes defense strategies against prompt injection in large language models to improve their robustness and security against unwanted outputs.
This repository hosts research aimed at addressing the challenges posed by prompt injection attacks on large language models (LLMs). These attacks involve manipulating the input prompts to generate undesired outputs. In a landscape where AI systems are widely deployed, ensuring their robustness and security has become paramount.
secret.py
, and the results can be viewed by executing application_test.py
.