This paper presents a full-page dataset (FPHDR) and an automated solution (AutoHDR) to address key limitations in historical document restoration (HDR). FPHDR comprises 1,633 real and 6,543 synthetic images, annotated with character- and line-level locations and with character labels at multiple damage levels. AutoHDR mimics historians' restoration workflow through a three-stage approach: OCR-based damage localization, vision-language contextual text prediction, and patch autoregressive appearance restoration. Its modular architecture enables flexible human-machine collaboration, supporting intervention and optimization at each restoration stage. Experiments show that AutoHDR raises OCR accuracy on severely damaged documents from 46.83% to 84.05%, and to 94.25% with human-machine collaboration. This work advances automated historical document restoration and contributes to the preservation of cultural heritage. The model and dataset are publicly available on GitHub.
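The three-stage workflow described above can be sketched as a minimal pipeline. This is an illustrative stub only: all function names, the confidence-threshold damage criterion, and the placeholder predictions are assumptions for exposition, not the paper's actual models (which operate on page images with OCR, vision-language, and autoregressive generation components).

```python
from dataclasses import dataclass
from typing import List

# Hypothetical data type; the real AutoHDR system works on page images.
@dataclass
class DamagedRegion:
    box: tuple                 # (x, y, w, h) character-level location
    predicted_char: str = ""   # filled in by stage 2

def locate_damage(page_chars: List[dict]) -> List[DamagedRegion]:
    """Stage 1: OCR-based damage localization (stubbed).
    Assumption: a character whose OCR confidence falls below a
    threshold is flagged as damaged."""
    return [DamagedRegion(c["box"]) for c in page_chars if c["conf"] < 0.5]

def predict_text(context: str, regions: List[DamagedRegion]) -> List[DamagedRegion]:
    """Stage 2: vision-language contextual text prediction (stubbed).
    A placeholder stands in for the model, which would condition on
    both the page image and the surrounding text."""
    for r in regions:
        r.predicted_char = "?"  # placeholder prediction
    return regions

def restore_appearance(regions: List[DamagedRegion]) -> List[str]:
    """Stage 3: patch autoregressive appearance restoration (stubbed).
    The real system would generate an image patch per character,
    matching the page's calligraphic style."""
    return [f"patch({r.predicted_char}@{r.box})" for r in regions]

def autohdr_pipeline(page_chars: List[dict], context: str) -> List[str]:
    """Chain the three stages; each is independently replaceable,
    which is what enables human intervention at any stage."""
    regions = locate_damage(page_chars)
    regions = predict_text(context, regions)
    return restore_appearance(regions)

# Toy page: three characters, one with low OCR confidence (damaged).
page = [
    {"box": (0, 0, 32, 32), "conf": 0.98},
    {"box": (32, 0, 32, 32), "conf": 0.12},  # flagged as damaged
    {"box": (64, 0, 32, 32), "conf": 0.95},
]
print(autohdr_pipeline(page, context="..."))  # → ['patch(?@(32, 0, 32, 32))']
```

The modular decomposition is the design point: because each stage has a narrow interface (regions in, regions out), a human expert can override any single stage, which is how the reported 84.05% automated accuracy can be lifted to 94.25% collaboratively.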