LogoAISecKit
icon of LLM NeuralExec

LLM NeuralExec

Code to generate NeuralExecs for prompt injection attacks tailored for LLMs.

Introduction

LLM NeuralExec

The LLM NeuralExec project provides code and methodologies for generating Neural Execution Triggers (NeuralExecs) to explore and exploit prompt injection vulnerabilities in Large Language Models (LLMs). This repository is essential for researchers in the fields of AI security and adversarial input testing.

Key Features:

  • Generative Capability: Automatically generate Neural Execs with customizable triggers based on various pretrained LLMs.
  • Evaluation Tools: Includes Jupyter notebooks for evaluating prompt injections and plotting training logs.
  • Pre-computation: Contains pre-computed Neural Execs for different LLMs which can be used for immediate experimentation.

Benefits:

  • Security Research: Aids in understanding and mitigating prompt injection attacks.
  • Reusable Code: Easily adaptable scripts and configuration files for various models and settings.
  • Open Source: Freely accessible code contributes to collaborative research and development in AI security.

Information

  • Publisher
    AISecKit
  • Websitegithub.com
  • Published date2025/04/27

Newsletter

Join the Community

Subscribe to our newsletter for the latest news and updates