The official Python SDK for Model Context Protocol servers and clients.
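As a quick illustration, here is a minimal sketch of a server built with the SDK's FastMCP interface; the server name and the example tool are hypothetical placeholders, not part of the SDK.

```python
# Minimal MCP server sketch using the official SDK's FastMCP interface.
# The server name and the `add` tool are illustrative placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b

if __name__ == "__main__":
    # Serve over stdio so an MCP client can connect to this process.
    mcp.run()
```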
A datacenter-scale distributed inference serving framework for generative AI and reasoning models.
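Serving frameworks in this class typically expose an OpenAI-compatible HTTP frontend. The sketch below assumes such an endpoint; the URL and model name are placeholders, not values documented by the project.

```python
# Hedged sketch: query a deployed inference endpoint, assuming the framework
# exposes an OpenAI-compatible /v1/chat/completions route. The endpoint URL
# and model id below are placeholders.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",  # placeholder endpoint
    json={
        "model": "example-model",  # placeholder model id
        "messages": [{"role": "user", "content": "Explain KV-cache reuse briefly."}],
        "max_tokens": 128,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```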
A hybrid-thinking tool for efficiently integrating AI models into open-webui.
Promptfoo is a locally run tool for testing LLM applications, including security evaluations and performance comparisons.
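A minimal sketch of driving a promptfoo evaluation from Python: it writes a small config file and shells out to the CLI. This assumes the `promptfoo` CLI is installed (e.g. via npm) and the chosen provider's API key is set in the environment; the prompt, provider id, and assertion are illustrative.

```python
# Hedged sketch: run a promptfoo evaluation by writing a config file and
# invoking the CLI. Assumes `promptfoo` is installed and the provider's
# API key is exported in the environment.
import subprocess
import textwrap

config = textwrap.dedent("""\
    prompts:
      - "Summarize in one sentence: {{text}}"
    providers:
      - openai:gpt-4o-mini  # placeholder provider id
    tests:
      - vars:
          text: "Large language models predict the next token."
        assert:
          - type: contains
            value: "token"
""")

with open("promptfooconfig.yaml", "w") as f:
    f.write(config)

subprocess.run(["promptfoo", "eval", "-c", "promptfooconfig.yaml"], check=True)
```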
AutoAudit is a large language model (LLM) designed to enhance cybersecurity through AI-driven threat detection and response.
A curated list of tools, datasets, demos, and papers for evaluating large language models (LLMs).
Sample notebooks and prompts for evaluating large language models (LLMs) and generative AI.
The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models".
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
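A minimal sketch of running a registered benchmark through the framework's `oaieval` runner from Python. It assumes the `evals` package is installed, `OPENAI_API_KEY` is set, and that `test-match` (one of the registry's basic exact-match evals) is available; the model name is a placeholder.

```python
# Hedged sketch: invoke the Evals CLI runner against a registry benchmark.
# Assumes the `evals` package is installed and OPENAI_API_KEY is exported;
# the model argument is a placeholder completion function name.
import subprocess

subprocess.run(
    ["oaieval", "gpt-4o-mini", "test-match", "--max_samples", "5"],
    check=True,
)
```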