This paper proposes an uncertainty-aware framework based on large language models (LLMs) for interpreting "Fedspeak," the distinctive language of the U.S. Federal Reserve (Fed), and classifying its monetary policy stance. To enrich the semantic and contextual representations of Fedspeak, we integrate domain-specific inference grounded in monetary policy communication mechanisms. Furthermore, we introduce a dynamic uncertainty decoding module that estimates the reliability of model predictions, improving both classification accuracy and model trustworthiness. Experimental results show that the proposed framework achieves state-of-the-art performance on policy stance analysis and reveal a significant positive correlation between perceived uncertainty and model error rate, validating perceived uncertainty as a diagnostic signal for prediction reliability. These findings offer valuable insights for financial forecasting, algorithmic trading, and data-driven policy analysis.
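The reliability-gating idea behind uncertainty-aware classification can be illustrated with a minimal sketch. The three-way stance labels, the entropy-based uncertainty score, and the flagging threshold below are illustrative assumptions for exposition, not the paper's actual decoding module:

```python
import math

# Hypothetical stance labels (assumed for illustration, not from the paper).
STANCES = ["dovish", "neutral", "hawkish"]

def entropy(probs):
    """Shannon entropy of a probability distribution, in nats."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def classify_with_uncertainty(probs, max_entropy_ratio=0.8):
    """Return (stance, uncertainty, flagged) for one prediction.

    `uncertainty` is entropy normalized to [0, 1] by the maximum
    possible entropy; predictions whose normalized entropy exceeds
    `max_entropy_ratio` are flagged as unreliable.
    """
    uncertainty = entropy(probs) / math.log(len(probs))
    stance = STANCES[max(range(len(probs)), key=probs.__getitem__)]
    return stance, uncertainty, uncertainty > max_entropy_ratio

# Confident prediction: low normalized entropy, not flagged.
print(classify_with_uncertainty([0.9, 0.05, 0.05]))
# Near-uniform prediction: high normalized entropy, flagged as unreliable.
print(classify_with_uncertainty([0.34, 0.33, 0.33]))
```

Flagged predictions can then be deferred or down-weighted, which is one simple way a positive correlation between uncertainty and error rate becomes actionable.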