Explore prompt injection techniques in large language models (LLMs), providing examples to improve LLM security and robustness.
The most comprehensive prompt hacking course available, focusing on prompt engineering and security.
An open-source toolkit for monitoring Large Language Models (LLMs) with features like text quality and sentiment analysis.
Benchmark various LLM Structured Output frameworks on tasks like multi-label classification and named entity recognition.
Open-source tool for decompiling binary code into C using large language models.
Learn AI and LLMs from scratch using free resources.
The AI Red Team Platform.
A simple and modular tool to evaluate and red-team any LLM application.
LangFair is a Python library for conducting use-case level LLM bias and fairness assessments.
Mureka is a comprehensive platform for AI models, tools, and security resources, catering to various analytical needs.
A comprehensive guide on LLM applications, covering LangChain, LlamaIndex, and HuggingGPT for developers.
AdalFlow is a library for building and auto-optimizing LLM applications.