
LLM-eval-survey

The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models".

Introduction

LLM-eval-survey is the official GitHub repository for the survey paper "A Survey on Evaluation of Large Language Models". It collects papers and resources on evaluating large language models (LLMs) across a wide range of tasks and domains.

Key Features:
  • Comprehensive Coverage: Organizes papers and resources on LLM evaluation across a wide range of tasks, including natural language processing and reasoning.
  • Contributions Welcome: Users are encouraged to suggest new benchmarks or improvements, with acknowledgment in the paper.
  • Regular Updates: The repository is kept current with the latest research and findings in LLM evaluation.
Benefits:
  • Research Resource: A reference for researchers and practitioners in AI and natural language processing.
  • Community Engagement: Encourages collaboration and community input to improve the quality and comprehensiveness of the survey.
Highlights:
  • Papers organized by evaluation criteria such as robustness, ethics, bias, and trustworthiness.
  • Related projects and benchmarks included to support the summarization and evaluation of LLMs.

For more information, visit the GitHub repository.
