Leaderboard
MMToM-QA
A multimodal question-answering benchmark designed to evaluate AI models' cognitive ability to understand human beliefs and goals.