This paper studies sparse autoencoders (SAEs) for the interpretability of large language models (LLMs). We address a key limitation of existing SAEs and propose AbsTopK, a new SAE variant. Whereas existing SAEs enforce nonnegativity on their latent activations, restricting each latent dimension to encode a concept in only one direction, AbsTopK selects the k activations with the largest absolute values, so a single latent dimension can represent a concept and its opposite through positive and negative activations. Experiments on various LLMs and tasks demonstrate the advantages of AbsTopK over existing SAEs.
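
To make the mechanism concrete, below is a minimal sketch contrasting the AbsTopK activation with the standard TopK activation it replaces. The function names `abstopk` and `topk` and the PyTorch implementation details are our own illustration, not the paper's reference code.

```python
import torch

def abstopk(z: torch.Tensor, k: int) -> torch.Tensor:
    """AbsTopK sketch: keep the k entries of each row with the
    largest |value|, preserving their signs; zero out the rest."""
    # Indices of the top-k entries ranked by absolute value.
    _, idx = z.abs().topk(k, dim=-1)
    out = torch.zeros_like(z)
    # Copy the original (signed) values back at those indices.
    return out.scatter(-1, idx, z.gather(-1, idx))

def topk(z: torch.Tensor, k: int) -> torch.Tensor:
    """Standard TopK-SAE activation: keep the k largest entries
    after ReLU (all nonnegative); zero out the rest."""
    z = torch.relu(z)
    vals, idx = z.topk(k, dim=-1)
    out = torch.zeros_like(z)
    return out.scatter(-1, idx, vals)

# Example: for z = [[1.0, -3.0, 0.5, 2.0]] with k = 2,
# abstopk keeps [[0., -3., 0., 2.]] (the negative activation survives),
# while topk keeps [[1., 0., 0., 2.]] (negative values are discarded).
z = torch.tensor([[1.0, -3.0, 0.5, 2.0]])
print(abstopk(z, 2))
print(topk(z, 2))
```

The negative activation retained by `abstopk` in the example above is exactly what lets one latent dimension carry a bidirectional concept.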