
The Hierarchical Reasoning Model (HRM) is a recurrent neural architecture proposed to address limitations in reasoning exhibited by large language models. Unlike Chain-of-Thought reasoning, which externalizes reasoning into sequences of linguistic steps, HRM performs its computation internally through two coupled recurrent modules: a high-level module for slow, abstract planning and a low-level module for rapid, detailed operations. This design enables deep, multi-stage reasoning while keeping training stable and efficient. With 27 million parameters and no pretraining, HRM demonstrates strong performance on benchmarks such as Sudoku-Extreme, large-maze navigation, and the Abstraction and Reasoning Corpus (ARC). By combining hierarchical convergence, a one-step gradient approximation, and adaptive computation time, it achieves near-perfect accuracy on tasks that challenge much larger models, suggesting a pathway toward more general-purpose reasoning systems.
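The two-timescale recurrence can be sketched as follows. This is a minimal illustration, not the published implementation: the update functions below are hypothetical scalar stand-ins for HRM's learned modules, and the names (`low_update`, `high_update`, `hrm_forward`) are invented for this example. It shows only the control flow of hierarchical convergence: the low-level module iterates to a (local) fixed point within each cycle, after which the high-level state updates once and re-contextualizes the next round of low-level computation.

```python
# Minimal sketch of HRM's two-timescale recurrence, assuming scalar
# "states" and hand-written linear updates; the real model uses
# learned transformer-style modules operating on vectors.

def low_update(z_l, z_h, x):
    # Hypothetical low-level step: fast, detailed computation
    # conditioned on the slow high-level state and the input.
    return 0.5 * z_l + 0.3 * z_h + 0.2 * x

def high_update(z_h, z_l):
    # Hypothetical high-level step: slow, abstract update that
    # consumes the low-level module's near-converged result.
    return 0.6 * z_h + 0.4 * z_l

def hrm_forward(x, n_cycles=4, t_steps=8):
    """One forward pass: the low-level module runs t_steps iterations
    per high-level cycle; the high-level state then updates once,
    giving the low-level module a fresh context for the next cycle
    (hierarchical convergence)."""
    z_h, z_l = 0.0, 0.0
    for _ in range(n_cycles):
        for _ in range(t_steps):
            z_l = low_update(z_l, z_h, x)
        z_h = high_update(z_h, z_l)
    return z_h

result = hrm_forward(1.0)
```

Note that the nested loops give an effective computational depth of `n_cycles * t_steps` recurrent steps, which is how HRM attains deep multi-stage reasoning without a correspondingly deep (and hard-to-train) feedforward stack.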

