This repository provides a benchmark for prompt injection attacks and defenses.
An open-source toolkit for monitoring Large Language Models (LLMs) with features like text quality and sentiment analysis.
A security toolkit for safeguarding interactions with Large Language Models.
Benchmark various LLM Structured Output frameworks on tasks like multi-label classification and named entity recognition.
A code scanner that checks for issues in prompts and LLM calls.
Open-source tool for decompiling binary code into C using large language models.
A simple and modular tool to evaluate and red-team any LLM application.
LangFair is a Python library for conducting use-case level LLM bias and fairness assessments.
A comprehensive guide on LLM applications, covering LangChain, LlamaIndex, and HuggingGPT for developers.
AdalFlow is a library for building and auto-optimizing LLM applications.
A comprehensive guide for fine-tuning and deploying open-source LLMs in Linux environments, tailored for beginners in China.
A large language model system focused on social etiquette, with tutorials covering prompt engineering, RAG, agent applications, and LLM fine-tuning.