This paper explores the potential and limitations of transferring the successful large language model (LLM) paradigm from natural language processing (NLP) to modeling biological languages (proteins, RNA, and DNA). By reviewing prior studies that apply the autoregressive generative paradigm and NLP evaluation metrics to biological sequences, we highlight how the inherent structural correlations of biological languages differ from those of natural language. We regard the three-dimensional structure of a biomolecule as the semantic content of its sequence, emphasize the importance of structure-aware evaluation given the strong correlations between residues or bases, and demonstrate the potential of the autoregressive paradigm for modeling biological languages. Code is available at github.com/zjuKeLiu/RiFold.