
Finetune ALL LLMs with ALL Adapeters on ALL Platforms!

A curated list of awesome security tools, experimental cases, and interesting things related to LLM or GPT.

20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.

A repository for practical notes on building applications using LLM.

A tutorial project for beginners on large model application development, integrating practical skills and theoretical knowledge.

Repository accompanying a paper on Red-Teaming for Large Language Models (LLMs).

Framework for testing vulnerabilities of large language models (LLM).

Framework for testing vulnerabilities of large language models (LLM).

A Agentic LLM CTF to test prompt injection attacks and preventions.

Code to generate NeuralExecs for prompt injection attacks tailored for LLMs.

A repository for benchmarking prompt injection attacks against AI models like GPT-4 and Gemini.

Guard your LangChain applications against prompt injection with Lakera ChainGuard.