This paper presents a comprehensive survey of sparse autoencoders (SAEs), an emerging and promising method for understanding the internal mechanisms of large language models (LLMs). We cover the technical framework of SAEs, methods for describing their learned features, approaches to evaluating their performance, and practical applications, focusing on the ability of SAEs to decompose the complex representations of LLMs into interpretable components.