This paper highlights the limitations of applying large language models (LLMs) in the financial domain and proposes FinBERT2, a Chinese finance-specific BERT model, to address them. Despite their high computational cost, LLMs underperform fine-tuned BERT models on discriminative tasks such as financial sentiment analysis, rely heavily on retrieval-augmented generation (RAG) to supply domain-specific information in generative tasks, and fall short in other feature-based scenarios such as topic modeling. FinBERT2 is a bidirectional encoder pre-trained on a high-quality, finance-specific corpus of 32 billion tokens, and it outperforms existing (Fin)BERT models and LLMs on five financial classification tasks. In addition, Fin-Retrievers, built on FinBERT2, outperform existing embedding models on financial retrieval tasks, and Fin-TopicModel delivers strong clustering and topic representation for financial titles. In sum, FinBERT2 points to an effective way to leverage finance-specific models in the LLM era.
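The retrieval claim above follows the standard dense-retrieval pattern: encode the query and each document into vectors, then rank documents by cosine similarity. A minimal, stdlib-only sketch of that pipeline shape, using toy bag-of-words counts as a stand-in for FinBERT2-based Fin-Retriever embeddings (the example documents and the `embed` function are hypothetical illustrations, not the paper's method):

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' (hypothetical stand-in for a
    FinBERT2-based Fin-Retriever dense vector)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Rank documents by similarity to the query; return the top k."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "quarterly earnings beat analyst expectations",
    "central bank raises interest rates again",
    "new smartphone model released this week",
]
print(retrieve("bank interest rate decision", docs, k=1))
# → ['central bank raises interest rates again']
```

In a real RAG setup, the retrieved passages would then be passed to an LLM as context; the point of Fin-Retrievers is that a finance-tuned encoder produces better rankings on financial queries than general-purpose embedding models.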