This paper presents RealitySummary, a reading assistance system that integrates a mixed reality (MR) interface with a large language model (LLM) to support everyday reading. RealitySummary seamlessly combines always-on camera capture, OCR-based text extraction, and LLM responses rendered as spatial and visual augmentations around the reading material. Informed by user feedback and reflective analysis, the system went through three design iterations, evaluated respectively through a user study (N=12), a field deployment (N=11), and a diary study (N=5). The results highlight the unique advantages of combining AI and MR, including always-on implicit support, long-term temporal recording, minimal context switching, and spatial capabilities, demonstrating the potential of future LLM-MR interfaces that go beyond traditional screen-based interactions.