This study investigates a technology-enhanced learning environment that retrieves learning content relevant to questions posed during self-directed learning, focusing on information retrieval methods that utilize large language models (LLMs). Targeting undergraduate mathematics textbooks, we compare Retrieval-Augmented Generation (RAG) with GraphRAG, which leverages knowledge graphs, on page-level question answering. Using a dataset of 477 question-answer pairs, we evaluate both methods on retrieval accuracy and generated answer quality (F1 score). The results show that embedding-based RAG outperforms GraphRAG. Furthermore, attempts at re-ranking with LLMs led to performance degradation and hallucinations. This study demonstrates the potential and challenges of page-level retrieval systems in educational settings and underscores the need for sophisticated retrieval methods when building AI tutoring solutions.