A comprehensive survey on benchmarks for Multimodal Large Language Models (MLLMs).
This repository presents a detailed survey of benchmarks for Multimodal Large Language Models (MLLMs), focusing on their performance across applications such as visual question answering, visual perception, understanding, and reasoning. The survey reviews over 200 benchmarks and evaluations, organized into key topic areas.
For more information, visit the GitHub repository.