Mathematical reasoning is a fundamental aspect of intelligence, encompassing a spectrum from basic arithmetic to intricate problem-solving. Recent investigations into the mathematical abilities of large language models (LLMs) have yielded inconsistent and incomplete assessments. In response, we introduce MathEval, a comprehensive benchmark designed to methodically evaluate the mathematical problem-solving proficiency of LLMs across various contexts, adaptation strategies, and evaluation metrics. MathEval consolidates 22 distinct datasets, encompassing a broad spectrum of mathematical disciplines, languages (including English and Chinese), and problem categories (ranging from arithmetic and competition mathematics to higher mathematics), with difficulty levels from elementary to advanced. To address the complexity of mathematical reasoning outputs and adapt to diverse models and prompts, we employ GPT-4 as an automated pipeline for answer extraction and comparison. Additionally, we trained a publicly available DeepSeek-LLM-7B-Base model on GPT-4 results, enabling precise answer validation without requiring GPT-4 access. To mitigate potential test-data contamination and truly gauge progress, MathEval incorporates an annually refreshed set of problems from the latest Chinese National College Entrance Examination (Gaokao-2023, Gaokao-2024), thereby benchmarking genuine advancements in mathematical problem-solving skills.
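The answer extraction and comparison step described above can be pictured as a two-stage LLM-as-judge pass: first isolate the final answer from a model's free-form solution, then ask the judge whether it matches the reference. Below is a minimal sketch of that idea, assuming a generic `judge_fn` callable standing in for GPT-4 or the fine-tuned DeepSeek-LLM-7B-Base; all function and prompt names here are illustrative and are not MathEval's actual interface.

```python
# Sketch of a two-stage answer-extraction-and-comparison check.
# `judge_fn` is any prompt -> completion callable (hypothetical stand-in
# for a judge model such as GPT-4 or DeepSeek-LLM-7B-Base).
from typing import Callable

def build_extraction_prompt(question: str, model_output: str) -> str:
    """Ask the judge model to isolate the final answer from a free-form solution."""
    return (
        "Extract the final answer from the following solution.\n"
        f"Question: {question}\n"
        f"Solution: {model_output}\n"
        "Reply with the final answer only."
    )

def build_comparison_prompt(question: str, extracted: str, reference: str) -> str:
    """Ask the judge model whether the extracted answer matches the reference."""
    return (
        "Decide whether the candidate answer is mathematically equivalent to the reference.\n"
        f"Question: {question}\n"
        f"Candidate: {extracted}\n"
        f"Reference: {reference}\n"
        "Reply with exactly 'CORRECT' or 'INCORRECT'."
    )

def judge_answer(question: str, model_output: str, reference: str,
                 judge_fn: Callable[[str], str]) -> bool:
    """Two-stage check: extract the final answer, then compare it to the reference."""
    extracted = judge_fn(build_extraction_prompt(question, model_output)).strip()
    verdict = judge_fn(build_comparison_prompt(question, extracted, reference)).strip()
    return verdict.upper().startswith("CORRECT")

if __name__ == "__main__":
    import re

    # Toy judge that naively pulls the last number and compares strings,
    # just so the sketch runs without any model access.
    def toy_judge(prompt: str) -> str:
        if prompt.startswith("Extract"):
            nums = re.findall(r"-?\d+(?:\.\d+)?", prompt.split("Solution:")[1])
            return nums[-1] if nums else ""
        cand = re.search(r"Candidate: (.*)", prompt).group(1).strip()
        ref = re.search(r"Reference: (.*)", prompt).group(1).strip()
        return "CORRECT" if cand == ref else "INCORRECT"

    print(judge_answer("What is 12 + 30?", "12 + 30 = 42, so the answer is 42.", "42", toy_judge))
```

In a real pipeline, `judge_fn` would wrap an API call or local inference for the judge model; keeping it as a plain callable lets the same extraction-and-comparison logic run against either GPT-4 or the released 7B validator.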
Funding: Supported in part by the National Key R&D Program of China (Grant No. 2022YFC3303600), in part by the National Natural Science Foundation of China (Grant No. 62477025), in part by the Key Laboratory of Smart Education of Guangdong Higher Education Institutes, Jinan University (Grant No. 2022LSYS003), and in part by the Beijing Municipal Science and Technology Project (Grant No. Z241100001324011).