This paper proposes a large language model (LLM)-based scheduler built on the ReAct framework to address high-performance computing (HPC) task scheduling. Existing heuristic methods, such as first-come-first-served (FCFS) and shortest job first (SJF), and classical optimization techniques lack adaptability to dynamic workloads and cannot optimize multiple objectives simultaneously. The proposed LLM-based scheduler uses a scratchpad memory to track scheduling history, improves decision-making through natural language feedback, and ensures feasibility and safety through a constraint enforcement module. Evaluations on diverse real-world HPC workload scenarios using OpenAI's o4-mini and Anthropic's Claude 3.7 show that the LLM-based scheduler effectively balances multiple objectives and provides transparent decision-making through natural language reasoning traces. It achieves strong constraint satisfaction and adapts to diverse workloads without domain-specific training. However, the trade-off between reasoning quality and computational overhead remains a challenge for real-time deployment. This paper presents the first comprehensive study of applying reasoning LLMs to HPC scheduling, demonstrating their potential for multi-objective optimization while highlighting their limitations in computational efficiency.
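To make the described architecture concrete, the sketch below illustrates one possible shape of a ReAct-style scheduling loop with a natural-language scratchpad and a post-hoc constraint enforcement check. It is a minimal, self-contained approximation under stated assumptions, not the paper's implementation: the `Job`, `ClusterState`, `llm_call`, and `enforce_constraints` names are hypothetical, and the stubbed `llm_call` stands in for a real request to a reasoning model such as o4-mini or Claude 3.7 with the paper's actual prompts and constraints.

```python
# Minimal sketch of a ReAct-style HPC scheduling loop with a scratchpad and a
# constraint-enforcement step. All names here are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Job:
    job_id: str
    nodes: int      # nodes requested
    walltime: int   # requested walltime in minutes


@dataclass
class ClusterState:
    free_nodes: int
    queue: list = field(default_factory=list)       # pending Jobs
    scratchpad: list = field(default_factory=list)  # natural-language history


def llm_call(prompt: str) -> str:
    """Placeholder for a call to a reasoning LLM. Returns a trivial answer so
    the sketch runs offline; a real implementation would send `prompt` to the
    model and parse a 'Thought: ... Action: schedule <job_id>' response."""
    return "Action: schedule " + prompt.rsplit("candidate:", 1)[-1].strip()


def enforce_constraints(job: Job, state: ClusterState) -> bool:
    """Hard feasibility check applied after the LLM's proposal (hypothetical)."""
    return job.nodes <= state.free_nodes


def react_schedule_step(state: ClusterState) -> Job | None:
    """One Reason-Act iteration: build a prompt from the scratchpad, ask the
    model for a decision, and only commit it if it passes the constraint check."""
    if not state.queue:
        return None
    candidate = min(state.queue, key=lambda j: j.walltime)  # shortlist for the prompt
    prompt = (
        "Scheduling history:\n" + "\n".join(state.scratchpad)
        + f"\nFree nodes: {state.free_nodes}\ncandidate: {candidate.job_id}"
    )
    decision = llm_call(prompt)
    if candidate.job_id in decision and enforce_constraints(candidate, state):
        state.queue.remove(candidate)
        state.free_nodes -= candidate.nodes
        state.scratchpad.append(f"Scheduled {candidate.job_id}: {decision}")
        return candidate
    state.scratchpad.append(f"Rejected proposal for {candidate.job_id} (infeasible)")
    return None


if __name__ == "__main__":
    state = ClusterState(free_nodes=64,
                         queue=[Job("j1", 32, 120), Job("j2", 16, 30)])
    while react_schedule_step(state):
        pass
    print("\n".join(state.scratchpad))
```

In this sketch the scratchpad doubles as the transparent reasoning trace described in the abstract: every accepted or rejected proposal is appended in natural language and fed back into the next prompt.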