Daily Arxiv

This page collects and organizes papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; when sharing, please cite the source.

CogAtom: From Cognitive Atoms to Olympiad-level Mathematical Reasoning in Large Language Models

Created by
  • Haebom

Author

Zhuofan Chen, Jiyuan He, Yichi Zhang, Xing Hu, Haoxing Wen, Jun Bai, Wenge Rong

Outline

CogAtom is a novel problem-generation framework for enhancing the mathematical reasoning capabilities of large language models (LLMs). Unlike existing methods, CogAtom generates problems by selecting and recombining "cognitive atoms," fundamental reasoning units extracted from human-written solutions. A diversity-promoting random walk algorithm and a constraint-based recombination mechanism ensure logical consistency and structural validity, and problem difficulty can be precisely controlled by varying the number of cognitive atoms. Experimental results show that CogAtom outperforms existing methods in accuracy, reasoning depth, and diversity, generating problems approaching AIME-level difficulty while exhibiting greater structural variation. The code is publicly available on GitHub.
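
As a rough illustration of the pipeline described above, the sketch below shows one way a diversity-promoting random walk over an atom co-occurrence graph could select cognitive atoms, with a constraint check before recombining them into a problem and difficulty steered by the number of atoms. This is not the authors' released code; the names (CognitiveAtom, random_walk_select, satisfies_constraints) and the toy atom pool are assumptions made for illustration only.

```python
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class CognitiveAtom:
    """Hypothetical minimal reasoning unit extracted from a human-written solution."""
    name: str
    prerequisites: frozenset  # names of atoms this atom logically depends on

def random_walk_select(graph, start, num_atoms, restart_prob=0.15):
    """Select `num_atoms` distinct atoms via a random walk with restarts
    over an atom co-occurrence graph (adjacency dict); restarts encourage diversity."""
    current = start
    selected = {start}
    while len(selected) < num_atoms:
        neighbors = [n for n in graph.get(current, []) if n not in selected]
        if not neighbors or random.random() < restart_prob:
            current = random.choice(list(graph))  # jump elsewhere in the graph
            continue
        current = random.choice(neighbors)
        selected.add(current)
    return selected

def satisfies_constraints(atoms_by_name, selection):
    """Simplified constraint check: every prerequisite of a selected atom must
    also be selected, so the combination stays logically coherent."""
    return all(atoms_by_name[a].prerequisites <= selection for a in selection)

# Toy atom pool and co-occurrence graph (illustrative only).
atoms = [
    CognitiveAtom("modular_arithmetic", frozenset()),
    CognitiveAtom("pigeonhole", frozenset()),
    CognitiveAtom("telescoping_sum", frozenset()),
    CognitiveAtom("chinese_remainder", frozenset({"modular_arithmetic"})),
]
atoms_by_name = {a.name: a for a in atoms}
graph = {
    "modular_arithmetic": ["chinese_remainder", "pigeonhole"],
    "pigeonhole": ["modular_arithmetic", "telescoping_sum"],
    "telescoping_sum": ["pigeonhole"],
    "chinese_remainder": ["modular_arithmetic"],
}

# Difficulty is steered by the number of atoms combined into one problem.
for _ in range(10):
    selection = random_walk_select(graph, start="modular_arithmetic", num_atoms=3)
    if satisfies_constraints(atoms_by_name, selection):
        print("Atom combination for a new problem:", sorted(selection))
        break
```

In the paper's framework, a combination like this would then be handed to an LLM to synthesize a concrete problem statement; the sketch only covers the selection-and-constraint step, with larger `num_atoms` corresponding to harder problems.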

Takeaways, Limitations

Takeaways:
  • A new approach to improving LLMs' mathematical reasoning capabilities
  • Demonstrates the feasibility of generating high-quality mathematical problems at scale
  • Precise control over problem difficulty
  • Ensures diversity of the generated problems
  • Improves the transparency of the problem-generation process through the cognitive-atom-based approach
Limitations:
  • Potential subjectivity in the extraction and definition of cognitive atoms
  • Currently focused on generating AIME-level problems; further research is needed on generating higher-level problems
  • Applicability to other types of mathematical problems requires further verification