We begin by addressing a question that has lacked a physical explanation: whether large predictive models merely mimic their training data or generate genuine insight. This study reports a primitive form of intuition that emerges as a metastable phase of learning, one that critically balances next-token prediction against future path entropy. The mechanism is uncovered through mind-tuning, a minimal principle that imposes maximum caliber on predictive models via a temperature-like control parameter $\lambda$. Random-walk learning in deterministic mazes exhibits a rich phase diagram comprising imitation (low $\lambda$), rule-breaking hallucination (high $\lambda$), and a fragile intermediate window in which the model spontaneously discovers new goal-oriented strategies; this window shows strong protocol dependence (hysteresis) and multistability. An effective low-dimensional theory captures these findings, framing intuition as an emergent property of the critical balance between memory of the present and curiosity about the future.
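To make the principle concrete, a minimal sketch of a mind-tuning objective, under the assumption that it linearly trades off a log-likelihood term against a path-entropy term (the notation $\mathcal{L}$, $H$, and the horizon $T$ is illustrative, not taken from this work), reads
% hedged sketch, not this work's definition: \lambda weights future path entropy against next-token prediction
\[
\mathcal{L}(\theta;\lambda)
= \underbrace{\mathbb{E}\!\left[-\log p_\theta(x_{t+1}\mid x_{\le t})\right]}_{\text{next-token prediction (memory)}}
\;-\;\lambda\,
\underbrace{H\!\left[p_\theta\!\left(x_{t+1:t+T}\mid x_{\le t}\right)\right]}_{\text{future path entropy (curiosity)}}.
\]
In this reading, $\lambda \to 0$ recovers pure imitation of the data, large $\lambda$ rewards high-entropy rule-breaking trajectories, and the intermediate window is where the two terms compete critically.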